00:00:00.000 Started by upstream project "autotest-per-patch" build number 127143 00:00:00.000 originally caused by: 00:00:00.001 Started by upstream project "jbp-per-patch" build number 24289 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.096 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:02.542 The recommended git tool is: git 00:00:02.542 using credential 00000000-0000-0000-0000-000000000002 00:00:02.544 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:02.558 Fetching changes from the remote Git repository 00:00:02.559 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:02.572 Using shallow fetch with depth 1 00:00:02.572 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:02.572 > git --version # timeout=10 00:00:02.582 > git --version # 'git version 2.39.2' 00:00:02.582 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:02.593 Setting http proxy: proxy-dmz.intel.com:911 00:00:02.593 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/changes/10/24310/5 # timeout=5 00:00:07.364 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:07.375 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:07.386 Checking out Revision 571d49b51a09ef9417806101d0b05bbb896ef7c3 (FETCH_HEAD) 00:00:07.386 > git config core.sparsecheckout # timeout=10 00:00:07.396 > git read-tree -mu HEAD # timeout=10 00:00:07.413 > git checkout -f 571d49b51a09ef9417806101d0b05bbb896ef7c3 # timeout=5 00:00:07.436 Commit message: "jenkins/autotest: remove redundant RAID flags" 00:00:07.436 > git rev-list --no-walk 178f233a2a13202f6c9967830fd93e30560100d5 # timeout=10 00:00:07.566 [Pipeline] Start of Pipeline 00:00:07.582 [Pipeline] library 00:00:07.583 Loading library shm_lib@master 00:00:07.583 Library shm_lib@master is cached. Copying from home. 00:00:07.603 [Pipeline] node 00:00:07.618 Running on GP8 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:07.619 [Pipeline] { 00:00:07.629 [Pipeline] catchError 00:00:07.630 [Pipeline] { 00:00:07.639 [Pipeline] wrap 00:00:07.647 [Pipeline] { 00:00:07.654 [Pipeline] stage 00:00:07.655 [Pipeline] { (Prologue) 00:00:07.824 [Pipeline] sh 00:00:08.101 + logger -p user.info -t JENKINS-CI 00:00:08.115 [Pipeline] echo 00:00:08.116 Node: GP8 00:00:08.122 [Pipeline] sh 00:00:08.408 [Pipeline] setCustomBuildProperty 00:00:08.418 [Pipeline] echo 00:00:08.420 Cleanup processes 00:00:08.423 [Pipeline] sh 00:00:08.696 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:08.696 219252 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:08.710 [Pipeline] sh 00:00:08.990 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:08.990 ++ grep -v 'sudo pgrep' 00:00:08.990 ++ awk '{print $1}' 00:00:08.990 + sudo kill -9 00:00:08.990 + true 00:00:09.005 [Pipeline] cleanWs 00:00:09.014 [WS-CLEANUP] Deleting project workspace... 00:00:09.014 [WS-CLEANUP] Deferred wipeout is used... 
00:00:09.020 [WS-CLEANUP] done 00:00:09.025 [Pipeline] setCustomBuildProperty 00:00:09.042 [Pipeline] sh 00:00:09.316 + sudo git config --global --replace-all safe.directory '*' 00:00:09.415 [Pipeline] httpRequest 00:00:09.438 [Pipeline] echo 00:00:09.440 Sorcerer 10.211.164.101 is alive 00:00:09.447 [Pipeline] httpRequest 00:00:09.452 HttpMethod: GET 00:00:09.452 URL: http://10.211.164.101/packages/jbp_571d49b51a09ef9417806101d0b05bbb896ef7c3.tar.gz 00:00:09.453 Sending request to url: http://10.211.164.101/packages/jbp_571d49b51a09ef9417806101d0b05bbb896ef7c3.tar.gz 00:00:09.465 Response Code: HTTP/1.1 200 OK 00:00:09.465 Success: Status code 200 is in the accepted range: 200,404 00:00:09.466 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_571d49b51a09ef9417806101d0b05bbb896ef7c3.tar.gz 00:00:14.834 [Pipeline] sh 00:00:15.113 + tar --no-same-owner -xf jbp_571d49b51a09ef9417806101d0b05bbb896ef7c3.tar.gz 00:00:15.130 [Pipeline] httpRequest 00:00:15.158 [Pipeline] echo 00:00:15.160 Sorcerer 10.211.164.101 is alive 00:00:15.170 [Pipeline] httpRequest 00:00:15.175 HttpMethod: GET 00:00:15.175 URL: http://10.211.164.101/packages/spdk_70425709083377aa0c23e3a0918902ddf3d34357.tar.gz 00:00:15.176 Sending request to url: http://10.211.164.101/packages/spdk_70425709083377aa0c23e3a0918902ddf3d34357.tar.gz 00:00:15.180 Response Code: HTTP/1.1 200 OK 00:00:15.181 Success: Status code 200 is in the accepted range: 200,404 00:00:15.182 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_70425709083377aa0c23e3a0918902ddf3d34357.tar.gz 00:01:45.015 [Pipeline] sh 00:01:45.296 + tar --no-same-owner -xf spdk_70425709083377aa0c23e3a0918902ddf3d34357.tar.gz 00:01:50.578 [Pipeline] sh 00:01:50.861 + git -C spdk log --oneline -n5 00:01:50.861 704257090 lib/reduce: fix the incorrect calculation method for the number of io_unit required for metadata. 
00:01:50.861 fc2398dfa raid: clear base bdev configure_cb after executing 00:01:50.861 5558f3f50 raid: complete bdev_raid_create after sb is written 00:01:50.861 d005e023b raid: fix empty slot not updated in sb after resize 00:01:50.861 f41dbc235 nvme: always specify CC_CSS_NVM when CAP_CSS_IOCS is not set 00:01:50.872 [Pipeline] } 00:01:50.890 [Pipeline] // stage 00:01:50.899 [Pipeline] stage 00:01:50.902 [Pipeline] { (Prepare) 00:01:50.924 [Pipeline] writeFile 00:01:50.942 [Pipeline] sh 00:01:51.225 + logger -p user.info -t JENKINS-CI 00:01:51.239 [Pipeline] sh 00:01:51.522 + logger -p user.info -t JENKINS-CI 00:01:51.535 [Pipeline] sh 00:01:51.820 + cat autorun-spdk.conf 00:01:51.820 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:51.820 SPDK_TEST_NVMF=1 00:01:51.820 SPDK_TEST_NVME_CLI=1 00:01:51.820 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:51.820 SPDK_TEST_NVMF_NICS=e810 00:01:51.820 SPDK_TEST_VFIOUSER=1 00:01:51.820 SPDK_RUN_UBSAN=1 00:01:51.820 NET_TYPE=phy 00:01:51.828 RUN_NIGHTLY=0 00:01:51.833 [Pipeline] readFile 00:01:51.860 [Pipeline] withEnv 00:01:51.862 [Pipeline] { 00:01:51.877 [Pipeline] sh 00:01:52.192 + set -ex 00:01:52.192 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:01:52.192 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:52.192 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:52.192 ++ SPDK_TEST_NVMF=1 00:01:52.192 ++ SPDK_TEST_NVME_CLI=1 00:01:52.192 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:52.192 ++ SPDK_TEST_NVMF_NICS=e810 00:01:52.192 ++ SPDK_TEST_VFIOUSER=1 00:01:52.192 ++ SPDK_RUN_UBSAN=1 00:01:52.192 ++ NET_TYPE=phy 00:01:52.192 ++ RUN_NIGHTLY=0 00:01:52.192 + case $SPDK_TEST_NVMF_NICS in 00:01:52.192 + DRIVERS=ice 00:01:52.192 + [[ tcp == \r\d\m\a ]] 00:01:52.193 + [[ -n ice ]] 00:01:52.193 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:01:52.193 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:01:52.193 rmmod: ERROR: Module mlx5_ib is not currently loaded 00:01:52.193 rmmod: ERROR: Module irdma is not currently loaded 00:01:52.193 rmmod: ERROR: Module i40iw is not currently loaded 00:01:52.193 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:01:52.193 + true 00:01:52.193 + for D in $DRIVERS 00:01:52.193 + sudo modprobe ice 00:01:52.193 + exit 0 00:01:52.201 [Pipeline] } 00:01:52.217 [Pipeline] // withEnv 00:01:52.221 [Pipeline] } 00:01:52.236 [Pipeline] // stage 00:01:52.244 [Pipeline] catchError 00:01:52.245 [Pipeline] { 00:01:52.257 [Pipeline] timeout 00:01:52.258 Timeout set to expire in 50 min 00:01:52.259 [Pipeline] { 00:01:52.272 [Pipeline] stage 00:01:52.274 [Pipeline] { (Tests) 00:01:52.286 [Pipeline] sh 00:01:52.567 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:52.567 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:52.567 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:52.567 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:01:52.567 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:52.567 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:52.567 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:01:52.567 + [[ ! 
-d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:52.567 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:52.567 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:52.567 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]] 00:01:52.567 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:52.567 + source /etc/os-release 00:01:52.567 ++ NAME='Fedora Linux' 00:01:52.567 ++ VERSION='38 (Cloud Edition)' 00:01:52.567 ++ ID=fedora 00:01:52.567 ++ VERSION_ID=38 00:01:52.567 ++ VERSION_CODENAME= 00:01:52.567 ++ PLATFORM_ID=platform:f38 00:01:52.567 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:01:52.567 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:52.567 ++ LOGO=fedora-logo-icon 00:01:52.567 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:01:52.567 ++ HOME_URL=https://fedoraproject.org/ 00:01:52.567 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:01:52.567 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:52.567 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:52.567 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:52.567 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:01:52.567 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:52.567 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:01:52.567 ++ SUPPORT_END=2024-05-14 00:01:52.567 ++ VARIANT='Cloud Edition' 00:01:52.567 ++ VARIANT_ID=cloud 00:01:52.567 + uname -a 00:01:52.567 Linux spdk-gp-08 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:01:52.567 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:01:53.943 Hugepages 00:01:53.943 node hugesize free / total 00:01:53.943 node0 1048576kB 0 / 0 00:01:53.943 node0 2048kB 0 / 0 00:01:53.943 node1 1048576kB 0 / 0 00:01:53.943 node1 2048kB 0 / 0 00:01:53.943 00:01:53.943 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:53.943 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:01:53.943 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:01:53.943 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:01:53.943 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:01:53.943 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:01:53.943 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:01:53.943 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:01:53.943 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:01:53.943 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:01:53.943 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:01:53.943 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:01:53.943 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:01:53.943 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:01:53.943 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:01:53.943 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:01:53.943 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:01:53.943 NVMe 0000:82:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:01:53.943 + rm -f /tmp/spdk-ld-path 00:01:53.943 + source autorun-spdk.conf 00:01:53.943 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:53.943 ++ SPDK_TEST_NVMF=1 00:01:53.943 ++ SPDK_TEST_NVME_CLI=1 00:01:53.943 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:53.943 ++ SPDK_TEST_NVMF_NICS=e810 00:01:53.943 ++ SPDK_TEST_VFIOUSER=1 00:01:53.943 ++ SPDK_RUN_UBSAN=1 00:01:53.943 ++ NET_TYPE=phy 00:01:53.943 ++ RUN_NIGHTLY=0 00:01:53.943 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:53.943 + [[ -n '' ]] 00:01:53.943 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:53.943 + for M in /var/spdk/build-*-manifest.txt 00:01:53.943 + [[ -f 
/var/spdk/build-pkg-manifest.txt ]] 00:01:53.943 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:53.943 + for M in /var/spdk/build-*-manifest.txt 00:01:53.943 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:53.943 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:53.943 ++ uname 00:01:53.943 + [[ Linux == \L\i\n\u\x ]] 00:01:53.943 + sudo dmesg -T 00:01:53.943 + sudo dmesg --clear 00:01:53.943 + dmesg_pid=219956 00:01:53.943 + [[ Fedora Linux == FreeBSD ]] 00:01:53.943 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:53.943 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:53.943 + sudo dmesg -Tw 00:01:53.943 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:53.943 + [[ -x /usr/src/fio-static/fio ]] 00:01:53.943 + export FIO_BIN=/usr/src/fio-static/fio 00:01:53.943 + FIO_BIN=/usr/src/fio-static/fio 00:01:53.943 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:53.943 + [[ ! -v VFIO_QEMU_BIN ]] 00:01:53.943 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:53.943 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:53.943 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:53.943 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:53.943 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:53.943 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:53.943 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:53.943 Test configuration: 00:01:53.943 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:53.943 SPDK_TEST_NVMF=1 00:01:53.943 SPDK_TEST_NVME_CLI=1 00:01:53.943 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:53.943 SPDK_TEST_NVMF_NICS=e810 00:01:53.943 SPDK_TEST_VFIOUSER=1 00:01:53.943 SPDK_RUN_UBSAN=1 00:01:53.943 NET_TYPE=phy 00:01:53.943 RUN_NIGHTLY=0 09:50:39 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:01:53.943 09:50:39 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:53.943 09:50:39 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:53.943 09:50:39 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:53.943 09:50:39 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:53.944 09:50:39 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:53.944 09:50:39 -- paths/export.sh@4 -- $ 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:53.944 09:50:39 -- paths/export.sh@5 -- $ export PATH 00:01:53.944 09:50:39 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:53.944 09:50:39 -- common/autobuild_common.sh@446 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:01:53.944 09:50:39 -- common/autobuild_common.sh@447 -- $ date +%s 00:01:53.944 09:50:39 -- common/autobuild_common.sh@447 -- $ mktemp -dt spdk_1721893839.XXXXXX 00:01:53.944 09:50:39 -- common/autobuild_common.sh@447 -- $ SPDK_WORKSPACE=/tmp/spdk_1721893839.b2Zkrr 00:01:53.944 09:50:39 -- common/autobuild_common.sh@449 -- $ [[ -n '' ]] 00:01:53.944 09:50:39 -- common/autobuild_common.sh@453 -- $ '[' -n '' ']' 00:01:53.944 09:50:39 -- common/autobuild_common.sh@456 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:01:53.944 09:50:39 -- common/autobuild_common.sh@460 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:01:53.944 09:50:39 -- common/autobuild_common.sh@462 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:01:53.944 09:50:39 -- common/autobuild_common.sh@463 -- $ get_config_params 00:01:53.944 09:50:39 -- common/autotest_common.sh@398 -- $ xtrace_disable 00:01:53.944 09:50:39 -- common/autotest_common.sh@10 -- $ set +x 00:01:53.944 09:50:39 -- common/autobuild_common.sh@463 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:01:53.944 09:50:39 -- common/autobuild_common.sh@465 -- $ start_monitor_resources 00:01:53.944 09:50:39 -- pm/common@17 -- $ local monitor 00:01:53.944 09:50:39 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:53.944 09:50:39 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:53.944 09:50:39 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:53.944 09:50:39 -- pm/common@21 -- $ date +%s 00:01:53.944 09:50:39 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:53.944 09:50:39 -- pm/common@21 -- $ date +%s 00:01:53.944 09:50:39 -- pm/common@25 -- $ sleep 1 00:01:53.944 09:50:39 -- pm/common@21 -- $ date +%s 00:01:53.944 09:50:39 -- pm/common@21 -- $ date +%s 00:01:53.944 09:50:39 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721893839 00:01:53.944 09:50:39 -- pm/common@21 -- 
$ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721893839 00:01:53.944 09:50:39 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721893839 00:01:53.944 09:50:39 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721893839 00:01:54.202 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721893839_collect-vmstat.pm.log 00:01:54.202 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721893839_collect-cpu-load.pm.log 00:01:54.202 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721893839_collect-cpu-temp.pm.log 00:01:54.202 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721893839_collect-bmc-pm.bmc.pm.log 00:01:55.138 09:50:40 -- common/autobuild_common.sh@466 -- $ trap stop_monitor_resources EXIT 00:01:55.138 09:50:40 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:55.138 09:50:40 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:55.138 09:50:40 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:55.138 09:50:40 -- spdk/autobuild.sh@16 -- $ date -u 00:01:55.138 Thu Jul 25 07:50:40 AM UTC 2024 00:01:55.138 09:50:40 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:55.138 v24.09-pre-321-g704257090 00:01:55.138 09:50:40 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:01:55.138 09:50:40 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:55.138 09:50:40 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:55.138 09:50:40 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:01:55.138 09:50:40 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:01:55.138 09:50:40 -- common/autotest_common.sh@10 -- $ set +x 00:01:55.138 ************************************ 00:01:55.138 START TEST ubsan 00:01:55.138 ************************************ 00:01:55.138 09:50:40 ubsan -- common/autotest_common.sh@1125 -- $ echo 'using ubsan' 00:01:55.138 using ubsan 00:01:55.138 00:01:55.138 real 0m0.000s 00:01:55.138 user 0m0.000s 00:01:55.138 sys 0m0.000s 00:01:55.138 09:50:40 ubsan -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:01:55.138 09:50:40 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:55.138 ************************************ 00:01:55.138 END TEST ubsan 00:01:55.138 ************************************ 00:01:55.138 09:50:40 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:01:55.138 09:50:40 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:55.138 09:50:40 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:55.138 09:50:40 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:55.138 09:50:40 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:55.138 09:50:40 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:55.138 09:50:40 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:55.138 09:50:40 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:01:55.139 09:50:40 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma 
--with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared 00:01:55.139 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:01:55.139 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:01:55.706 Using 'verbs' RDMA provider 00:02:11.521 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:02:23.740 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:02:23.740 Creating mk/config.mk...done. 00:02:23.740 Creating mk/cc.flags.mk...done. 00:02:23.740 Type 'make' to build. 00:02:23.740 09:51:07 -- spdk/autobuild.sh@69 -- $ run_test make make -j48 00:02:23.740 09:51:07 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']' 00:02:23.740 09:51:07 -- common/autotest_common.sh@1107 -- $ xtrace_disable 00:02:23.740 09:51:07 -- common/autotest_common.sh@10 -- $ set +x 00:02:23.740 ************************************ 00:02:23.740 START TEST make 00:02:23.740 ************************************ 00:02:23.740 09:51:07 make -- common/autotest_common.sh@1125 -- $ make -j48 00:02:23.740 make[1]: Nothing to be done for 'all'. 00:02:24.313 The Meson build system 00:02:24.314 Version: 1.3.1 00:02:24.314 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:02:24.314 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:02:24.314 Build type: native build 00:02:24.314 Project name: libvfio-user 00:02:24.314 Project version: 0.0.1 00:02:24.314 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:02:24.314 C linker for the host machine: cc ld.bfd 2.39-16 00:02:24.314 Host machine cpu family: x86_64 00:02:24.314 Host machine cpu: x86_64 00:02:24.314 Run-time dependency threads found: YES 00:02:24.314 Library dl found: YES 00:02:24.314 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:02:24.314 Run-time dependency json-c found: YES 0.17 00:02:24.314 Run-time dependency cmocka found: YES 1.1.7 00:02:24.314 Program pytest-3 found: NO 00:02:24.314 Program flake8 found: NO 00:02:24.314 Program misspell-fixer found: NO 00:02:24.314 Program restructuredtext-lint found: NO 00:02:24.314 Program valgrind found: YES (/usr/bin/valgrind) 00:02:24.314 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:24.314 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:24.314 Compiler for C supports arguments -Wwrite-strings: YES 00:02:24.314 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:02:24.314 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:02:24.314 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:02:24.314 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:02:24.314 Build targets in project: 8 00:02:24.314 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:02:24.314 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:02:24.314 00:02:24.314 libvfio-user 0.0.1 00:02:24.314 00:02:24.314 User defined options 00:02:24.314 buildtype : debug 00:02:24.314 default_library: shared 00:02:24.314 libdir : /usr/local/lib 00:02:24.314 00:02:24.314 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:25.261 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:02:25.261 [1/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:02:25.261 [2/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:02:25.261 [3/37] Compiling C object samples/null.p/null.c.o 00:02:25.564 [4/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:02:25.564 [5/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:02:25.564 [6/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:02:25.564 [7/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:02:25.564 [8/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:02:25.564 [9/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:02:25.564 [10/37] Compiling C object test/unit_tests.p/mocks.c.o 00:02:25.564 [11/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:02:25.564 [12/37] Compiling C object samples/lspci.p/lspci.c.o 00:02:25.564 [13/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:02:25.564 [14/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:02:25.564 [15/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:02:25.564 [16/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:02:25.564 [17/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:02:25.564 [18/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:02:25.564 [19/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:02:25.564 [20/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:02:25.564 [21/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:02:25.565 [22/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:02:25.565 [23/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:02:25.565 [24/37] Compiling C object samples/client.p/client.c.o 00:02:25.565 [25/37] Compiling C object samples/server.p/server.c.o 00:02:25.565 [26/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:02:25.565 [27/37] Linking target samples/client 00:02:25.839 [28/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:02:25.839 [29/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:02:25.839 [30/37] Linking target lib/libvfio-user.so.0.0.1 00:02:25.839 [31/37] Linking target test/unit_tests 00:02:26.101 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:02:26.101 [33/37] Linking target samples/gpio-pci-idio-16 00:02:26.101 [34/37] Linking target samples/null 00:02:26.101 [35/37] Linking target samples/server 00:02:26.101 [36/37] Linking target samples/shadow_ioeventfd_server 00:02:26.101 [37/37] Linking target samples/lspci 00:02:26.101 INFO: autodetecting backend as ninja 00:02:26.101 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 
00:02:26.101 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:02:27.044 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:02:27.044 ninja: no work to do. 00:02:32.377 The Meson build system 00:02:32.377 Version: 1.3.1 00:02:32.377 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk 00:02:32.377 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp 00:02:32.377 Build type: native build 00:02:32.377 Program cat found: YES (/usr/bin/cat) 00:02:32.377 Project name: DPDK 00:02:32.377 Project version: 24.03.0 00:02:32.377 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:02:32.377 C linker for the host machine: cc ld.bfd 2.39-16 00:02:32.377 Host machine cpu family: x86_64 00:02:32.377 Host machine cpu: x86_64 00:02:32.377 Message: ## Building in Developer Mode ## 00:02:32.377 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:32.377 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:02:32.377 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:02:32.377 Program python3 found: YES (/usr/bin/python3) 00:02:32.377 Program cat found: YES (/usr/bin/cat) 00:02:32.377 Compiler for C supports arguments -march=native: YES 00:02:32.377 Checking for size of "void *" : 8 00:02:32.377 Checking for size of "void *" : 8 (cached) 00:02:32.377 Compiler for C supports link arguments -Wl,--undefined-version: NO 00:02:32.377 Library m found: YES 00:02:32.377 Library numa found: YES 00:02:32.377 Has header "numaif.h" : YES 00:02:32.377 Library fdt found: NO 00:02:32.377 Library execinfo found: NO 00:02:32.377 Has header "execinfo.h" : YES 00:02:32.377 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:02:32.377 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:32.377 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:32.377 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:32.377 Run-time dependency openssl found: YES 3.0.9 00:02:32.377 Run-time dependency libpcap found: YES 1.10.4 00:02:32.377 Has header "pcap.h" with dependency libpcap: YES 00:02:32.377 Compiler for C supports arguments -Wcast-qual: YES 00:02:32.377 Compiler for C supports arguments -Wdeprecated: YES 00:02:32.377 Compiler for C supports arguments -Wformat: YES 00:02:32.377 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:32.377 Compiler for C supports arguments -Wformat-security: NO 00:02:32.377 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:32.377 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:32.377 Compiler for C supports arguments -Wnested-externs: YES 00:02:32.377 Compiler for C supports arguments -Wold-style-definition: YES 00:02:32.377 Compiler for C supports arguments -Wpointer-arith: YES 00:02:32.377 Compiler for C supports arguments -Wsign-compare: YES 00:02:32.377 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:32.377 Compiler for C supports arguments -Wundef: YES 00:02:32.377 Compiler for C supports arguments -Wwrite-strings: YES 00:02:32.377 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:32.377 Compiler for C supports arguments -Wno-packed-not-aligned: 
YES 00:02:32.377 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:32.377 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:32.377 Program objdump found: YES (/usr/bin/objdump) 00:02:32.377 Compiler for C supports arguments -mavx512f: YES 00:02:32.377 Checking if "AVX512 checking" compiles: YES 00:02:32.377 Fetching value of define "__SSE4_2__" : 1 00:02:32.377 Fetching value of define "__AES__" : 1 00:02:32.377 Fetching value of define "__AVX__" : 1 00:02:32.377 Fetching value of define "__AVX2__" : (undefined) 00:02:32.377 Fetching value of define "__AVX512BW__" : (undefined) 00:02:32.377 Fetching value of define "__AVX512CD__" : (undefined) 00:02:32.378 Fetching value of define "__AVX512DQ__" : (undefined) 00:02:32.378 Fetching value of define "__AVX512F__" : (undefined) 00:02:32.378 Fetching value of define "__AVX512VL__" : (undefined) 00:02:32.378 Fetching value of define "__PCLMUL__" : 1 00:02:32.378 Fetching value of define "__RDRND__" : 1 00:02:32.378 Fetching value of define "__RDSEED__" : (undefined) 00:02:32.378 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:32.378 Fetching value of define "__znver1__" : (undefined) 00:02:32.378 Fetching value of define "__znver2__" : (undefined) 00:02:32.378 Fetching value of define "__znver3__" : (undefined) 00:02:32.378 Fetching value of define "__znver4__" : (undefined) 00:02:32.378 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:32.378 Message: lib/log: Defining dependency "log" 00:02:32.378 Message: lib/kvargs: Defining dependency "kvargs" 00:02:32.378 Message: lib/telemetry: Defining dependency "telemetry" 00:02:32.378 Checking for function "getentropy" : NO 00:02:32.378 Message: lib/eal: Defining dependency "eal" 00:02:32.378 Message: lib/ring: Defining dependency "ring" 00:02:32.378 Message: lib/rcu: Defining dependency "rcu" 00:02:32.378 Message: lib/mempool: Defining dependency "mempool" 00:02:32.378 Message: lib/mbuf: Defining dependency "mbuf" 00:02:32.378 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:32.378 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:02:32.378 Compiler for C supports arguments -mpclmul: YES 00:02:32.378 Compiler for C supports arguments -maes: YES 00:02:32.378 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:32.378 Compiler for C supports arguments -mavx512bw: YES 00:02:32.378 Compiler for C supports arguments -mavx512dq: YES 00:02:32.378 Compiler for C supports arguments -mavx512vl: YES 00:02:32.378 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:32.378 Compiler for C supports arguments -mavx2: YES 00:02:32.378 Compiler for C supports arguments -mavx: YES 00:02:32.378 Message: lib/net: Defining dependency "net" 00:02:32.378 Message: lib/meter: Defining dependency "meter" 00:02:32.378 Message: lib/ethdev: Defining dependency "ethdev" 00:02:32.378 Message: lib/pci: Defining dependency "pci" 00:02:32.378 Message: lib/cmdline: Defining dependency "cmdline" 00:02:32.378 Message: lib/hash: Defining dependency "hash" 00:02:32.378 Message: lib/timer: Defining dependency "timer" 00:02:32.378 Message: lib/compressdev: Defining dependency "compressdev" 00:02:32.378 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:32.378 Message: lib/dmadev: Defining dependency "dmadev" 00:02:32.378 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:32.378 Message: lib/power: Defining dependency "power" 00:02:32.378 Message: lib/reorder: Defining dependency "reorder" 00:02:32.378 
Message: lib/security: Defining dependency "security" 00:02:32.378 Has header "linux/userfaultfd.h" : YES 00:02:32.378 Has header "linux/vduse.h" : YES 00:02:32.378 Message: lib/vhost: Defining dependency "vhost" 00:02:32.378 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:32.378 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:32.378 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:32.378 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:32.378 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:32.378 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:32.378 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:32.378 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:32.378 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:32.378 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:02:32.378 Program doxygen found: YES (/usr/bin/doxygen) 00:02:32.378 Configuring doxy-api-html.conf using configuration 00:02:32.378 Configuring doxy-api-man.conf using configuration 00:02:32.378 Program mandb found: YES (/usr/bin/mandb) 00:02:32.378 Program sphinx-build found: NO 00:02:32.378 Configuring rte_build_config.h using configuration 00:02:32.378 Message: 00:02:32.378 ================= 00:02:32.378 Applications Enabled 00:02:32.378 ================= 00:02:32.378 00:02:32.378 apps: 00:02:32.378 00:02:32.378 00:02:32.378 Message: 00:02:32.378 ================= 00:02:32.378 Libraries Enabled 00:02:32.378 ================= 00:02:32.378 00:02:32.378 libs: 00:02:32.378 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:32.378 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:32.378 cryptodev, dmadev, power, reorder, security, vhost, 00:02:32.378 00:02:32.378 Message: 00:02:32.378 =============== 00:02:32.378 Drivers Enabled 00:02:32.378 =============== 00:02:32.378 00:02:32.378 common: 00:02:32.378 00:02:32.378 bus: 00:02:32.378 pci, vdev, 00:02:32.378 mempool: 00:02:32.378 ring, 00:02:32.378 dma: 00:02:32.378 00:02:32.378 net: 00:02:32.378 00:02:32.378 crypto: 00:02:32.378 00:02:32.378 compress: 00:02:32.378 00:02:32.378 vdpa: 00:02:32.378 00:02:32.378 00:02:32.378 Message: 00:02:32.378 ================= 00:02:32.378 Content Skipped 00:02:32.378 ================= 00:02:32.378 00:02:32.378 apps: 00:02:32.378 dumpcap: explicitly disabled via build config 00:02:32.378 graph: explicitly disabled via build config 00:02:32.378 pdump: explicitly disabled via build config 00:02:32.378 proc-info: explicitly disabled via build config 00:02:32.378 test-acl: explicitly disabled via build config 00:02:32.378 test-bbdev: explicitly disabled via build config 00:02:32.378 test-cmdline: explicitly disabled via build config 00:02:32.378 test-compress-perf: explicitly disabled via build config 00:02:32.378 test-crypto-perf: explicitly disabled via build config 00:02:32.378 test-dma-perf: explicitly disabled via build config 00:02:32.378 test-eventdev: explicitly disabled via build config 00:02:32.378 test-fib: explicitly disabled via build config 00:02:32.378 test-flow-perf: explicitly disabled via build config 00:02:32.378 test-gpudev: explicitly disabled via build config 00:02:32.378 test-mldev: explicitly disabled via build config 00:02:32.378 test-pipeline: explicitly disabled via build config 00:02:32.378 test-pmd: explicitly disabled via build config 
00:02:32.378 test-regex: explicitly disabled via build config 00:02:32.378 test-sad: explicitly disabled via build config 00:02:32.378 test-security-perf: explicitly disabled via build config 00:02:32.378 00:02:32.378 libs: 00:02:32.378 argparse: explicitly disabled via build config 00:02:32.378 metrics: explicitly disabled via build config 00:02:32.378 acl: explicitly disabled via build config 00:02:32.378 bbdev: explicitly disabled via build config 00:02:32.378 bitratestats: explicitly disabled via build config 00:02:32.378 bpf: explicitly disabled via build config 00:02:32.378 cfgfile: explicitly disabled via build config 00:02:32.378 distributor: explicitly disabled via build config 00:02:32.378 efd: explicitly disabled via build config 00:02:32.378 eventdev: explicitly disabled via build config 00:02:32.378 dispatcher: explicitly disabled via build config 00:02:32.378 gpudev: explicitly disabled via build config 00:02:32.378 gro: explicitly disabled via build config 00:02:32.378 gso: explicitly disabled via build config 00:02:32.378 ip_frag: explicitly disabled via build config 00:02:32.378 jobstats: explicitly disabled via build config 00:02:32.378 latencystats: explicitly disabled via build config 00:02:32.378 lpm: explicitly disabled via build config 00:02:32.378 member: explicitly disabled via build config 00:02:32.378 pcapng: explicitly disabled via build config 00:02:32.378 rawdev: explicitly disabled via build config 00:02:32.378 regexdev: explicitly disabled via build config 00:02:32.378 mldev: explicitly disabled via build config 00:02:32.378 rib: explicitly disabled via build config 00:02:32.378 sched: explicitly disabled via build config 00:02:32.378 stack: explicitly disabled via build config 00:02:32.378 ipsec: explicitly disabled via build config 00:02:32.378 pdcp: explicitly disabled via build config 00:02:32.378 fib: explicitly disabled via build config 00:02:32.378 port: explicitly disabled via build config 00:02:32.378 pdump: explicitly disabled via build config 00:02:32.378 table: explicitly disabled via build config 00:02:32.378 pipeline: explicitly disabled via build config 00:02:32.378 graph: explicitly disabled via build config 00:02:32.378 node: explicitly disabled via build config 00:02:32.378 00:02:32.378 drivers: 00:02:32.378 common/cpt: not in enabled drivers build config 00:02:32.378 common/dpaax: not in enabled drivers build config 00:02:32.378 common/iavf: not in enabled drivers build config 00:02:32.378 common/idpf: not in enabled drivers build config 00:02:32.378 common/ionic: not in enabled drivers build config 00:02:32.378 common/mvep: not in enabled drivers build config 00:02:32.378 common/octeontx: not in enabled drivers build config 00:02:32.378 bus/auxiliary: not in enabled drivers build config 00:02:32.378 bus/cdx: not in enabled drivers build config 00:02:32.378 bus/dpaa: not in enabled drivers build config 00:02:32.378 bus/fslmc: not in enabled drivers build config 00:02:32.378 bus/ifpga: not in enabled drivers build config 00:02:32.378 bus/platform: not in enabled drivers build config 00:02:32.378 bus/uacce: not in enabled drivers build config 00:02:32.378 bus/vmbus: not in enabled drivers build config 00:02:32.378 common/cnxk: not in enabled drivers build config 00:02:32.378 common/mlx5: not in enabled drivers build config 00:02:32.378 common/nfp: not in enabled drivers build config 00:02:32.378 common/nitrox: not in enabled drivers build config 00:02:32.378 common/qat: not in enabled drivers build config 00:02:32.378 common/sfc_efx: not in 
enabled drivers build config 00:02:32.378 mempool/bucket: not in enabled drivers build config 00:02:32.378 mempool/cnxk: not in enabled drivers build config 00:02:32.378 mempool/dpaa: not in enabled drivers build config 00:02:32.378 mempool/dpaa2: not in enabled drivers build config 00:02:32.378 mempool/octeontx: not in enabled drivers build config 00:02:32.378 mempool/stack: not in enabled drivers build config 00:02:32.378 dma/cnxk: not in enabled drivers build config 00:02:32.378 dma/dpaa: not in enabled drivers build config 00:02:32.379 dma/dpaa2: not in enabled drivers build config 00:02:32.379 dma/hisilicon: not in enabled drivers build config 00:02:32.379 dma/idxd: not in enabled drivers build config 00:02:32.379 dma/ioat: not in enabled drivers build config 00:02:32.379 dma/skeleton: not in enabled drivers build config 00:02:32.379 net/af_packet: not in enabled drivers build config 00:02:32.379 net/af_xdp: not in enabled drivers build config 00:02:32.379 net/ark: not in enabled drivers build config 00:02:32.379 net/atlantic: not in enabled drivers build config 00:02:32.379 net/avp: not in enabled drivers build config 00:02:32.379 net/axgbe: not in enabled drivers build config 00:02:32.379 net/bnx2x: not in enabled drivers build config 00:02:32.379 net/bnxt: not in enabled drivers build config 00:02:32.379 net/bonding: not in enabled drivers build config 00:02:32.379 net/cnxk: not in enabled drivers build config 00:02:32.379 net/cpfl: not in enabled drivers build config 00:02:32.379 net/cxgbe: not in enabled drivers build config 00:02:32.379 net/dpaa: not in enabled drivers build config 00:02:32.379 net/dpaa2: not in enabled drivers build config 00:02:32.379 net/e1000: not in enabled drivers build config 00:02:32.379 net/ena: not in enabled drivers build config 00:02:32.379 net/enetc: not in enabled drivers build config 00:02:32.379 net/enetfec: not in enabled drivers build config 00:02:32.379 net/enic: not in enabled drivers build config 00:02:32.379 net/failsafe: not in enabled drivers build config 00:02:32.379 net/fm10k: not in enabled drivers build config 00:02:32.379 net/gve: not in enabled drivers build config 00:02:32.379 net/hinic: not in enabled drivers build config 00:02:32.379 net/hns3: not in enabled drivers build config 00:02:32.379 net/i40e: not in enabled drivers build config 00:02:32.379 net/iavf: not in enabled drivers build config 00:02:32.379 net/ice: not in enabled drivers build config 00:02:32.379 net/idpf: not in enabled drivers build config 00:02:32.379 net/igc: not in enabled drivers build config 00:02:32.379 net/ionic: not in enabled drivers build config 00:02:32.379 net/ipn3ke: not in enabled drivers build config 00:02:32.379 net/ixgbe: not in enabled drivers build config 00:02:32.379 net/mana: not in enabled drivers build config 00:02:32.379 net/memif: not in enabled drivers build config 00:02:32.379 net/mlx4: not in enabled drivers build config 00:02:32.379 net/mlx5: not in enabled drivers build config 00:02:32.379 net/mvneta: not in enabled drivers build config 00:02:32.379 net/mvpp2: not in enabled drivers build config 00:02:32.379 net/netvsc: not in enabled drivers build config 00:02:32.379 net/nfb: not in enabled drivers build config 00:02:32.379 net/nfp: not in enabled drivers build config 00:02:32.379 net/ngbe: not in enabled drivers build config 00:02:32.379 net/null: not in enabled drivers build config 00:02:32.379 net/octeontx: not in enabled drivers build config 00:02:32.379 net/octeon_ep: not in enabled drivers build config 00:02:32.379 
net/pcap: not in enabled drivers build config 00:02:32.379 net/pfe: not in enabled drivers build config 00:02:32.379 net/qede: not in enabled drivers build config 00:02:32.379 net/ring: not in enabled drivers build config 00:02:32.379 net/sfc: not in enabled drivers build config 00:02:32.379 net/softnic: not in enabled drivers build config 00:02:32.379 net/tap: not in enabled drivers build config 00:02:32.379 net/thunderx: not in enabled drivers build config 00:02:32.379 net/txgbe: not in enabled drivers build config 00:02:32.379 net/vdev_netvsc: not in enabled drivers build config 00:02:32.379 net/vhost: not in enabled drivers build config 00:02:32.379 net/virtio: not in enabled drivers build config 00:02:32.379 net/vmxnet3: not in enabled drivers build config 00:02:32.379 raw/*: missing internal dependency, "rawdev" 00:02:32.379 crypto/armv8: not in enabled drivers build config 00:02:32.379 crypto/bcmfs: not in enabled drivers build config 00:02:32.379 crypto/caam_jr: not in enabled drivers build config 00:02:32.379 crypto/ccp: not in enabled drivers build config 00:02:32.379 crypto/cnxk: not in enabled drivers build config 00:02:32.379 crypto/dpaa_sec: not in enabled drivers build config 00:02:32.379 crypto/dpaa2_sec: not in enabled drivers build config 00:02:32.379 crypto/ipsec_mb: not in enabled drivers build config 00:02:32.379 crypto/mlx5: not in enabled drivers build config 00:02:32.379 crypto/mvsam: not in enabled drivers build config 00:02:32.379 crypto/nitrox: not in enabled drivers build config 00:02:32.379 crypto/null: not in enabled drivers build config 00:02:32.379 crypto/octeontx: not in enabled drivers build config 00:02:32.379 crypto/openssl: not in enabled drivers build config 00:02:32.379 crypto/scheduler: not in enabled drivers build config 00:02:32.379 crypto/uadk: not in enabled drivers build config 00:02:32.379 crypto/virtio: not in enabled drivers build config 00:02:32.379 compress/isal: not in enabled drivers build config 00:02:32.379 compress/mlx5: not in enabled drivers build config 00:02:32.379 compress/nitrox: not in enabled drivers build config 00:02:32.379 compress/octeontx: not in enabled drivers build config 00:02:32.379 compress/zlib: not in enabled drivers build config 00:02:32.379 regex/*: missing internal dependency, "regexdev" 00:02:32.379 ml/*: missing internal dependency, "mldev" 00:02:32.379 vdpa/ifc: not in enabled drivers build config 00:02:32.379 vdpa/mlx5: not in enabled drivers build config 00:02:32.379 vdpa/nfp: not in enabled drivers build config 00:02:32.379 vdpa/sfc: not in enabled drivers build config 00:02:32.379 event/*: missing internal dependency, "eventdev" 00:02:32.379 baseband/*: missing internal dependency, "bbdev" 00:02:32.379 gpu/*: missing internal dependency, "gpudev" 00:02:32.379 00:02:32.379 00:02:32.379 Build targets in project: 85 00:02:32.379 00:02:32.379 DPDK 24.03.0 00:02:32.379 00:02:32.379 User defined options 00:02:32.379 buildtype : debug 00:02:32.379 default_library : shared 00:02:32.379 libdir : lib 00:02:32.379 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:02:32.379 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:02:32.379 c_link_args : 00:02:32.379 cpu_instruction_set: native 00:02:32.379 disable_apps : 
dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:02:32.379 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:02:32.379 enable_docs : false 00:02:32.379 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:02:32.379 enable_kmods : false 00:02:32.379 max_lcores : 128 00:02:32.379 tests : false 00:02:32.379 00:02:32.379 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:32.644 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:02:32.906 [1/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:32.906 [2/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:32.906 [3/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:32.906 [4/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:32.906 [5/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:32.906 [6/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:32.906 [7/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:32.906 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:32.906 [9/268] Linking static target lib/librte_kvargs.a 00:02:32.906 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:32.906 [11/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:32.906 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:32.906 [13/268] Linking static target lib/librte_log.a 00:02:32.906 [14/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:32.906 [15/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:32.906 [16/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:33.479 [17/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:33.739 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:33.739 [19/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:33.739 [20/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:33.739 [21/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:33.739 [22/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:33.739 [23/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:33.739 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:33.739 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:33.739 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:33.739 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:33.739 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:33.739 [29/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:33.739 [30/268] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:33.739 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:33.739 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:33.739 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:33.739 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:33.739 [35/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:33.739 [36/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:33.739 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:33.739 [38/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:33.739 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:33.739 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:33.739 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:33.739 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:33.739 [43/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:33.739 [44/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:33.739 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:33.739 [46/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:33.739 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:33.739 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:33.739 [49/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:33.739 [50/268] Linking static target lib/librte_telemetry.a 00:02:33.739 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:33.739 [52/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:34.002 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:34.002 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:34.002 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:34.002 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:34.002 [57/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:02:34.002 [58/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:34.002 [59/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:34.002 [60/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:34.002 [61/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:34.003 [62/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:34.003 [63/268] Linking target lib/librte_log.so.24.1 00:02:34.003 [64/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:34.003 [65/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:34.262 [66/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:34.262 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:34.262 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:34.262 [69/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:34.262 [70/268] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:34.262 [71/268] Linking static target lib/librte_pci.a 00:02:34.262 [72/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:34.262 [73/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:34.526 [74/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:34.526 [75/268] Linking target lib/librte_kvargs.so.24.1 00:02:34.526 [76/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:34.526 [77/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:34.526 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:34.526 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:34.526 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:34.526 [81/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:34.788 [82/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:34.788 [83/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:34.788 [84/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:34.788 [85/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:34.789 [86/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:34.789 [87/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:34.789 [88/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:34.789 [89/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:34.789 [90/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:34.789 [91/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:34.789 [92/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:34.789 [93/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:34.789 [94/268] Linking static target lib/librte_meter.a 00:02:34.789 [95/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:34.789 [96/268] Linking static target lib/librte_ring.a 00:02:34.789 [97/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:34.789 [98/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:34.789 [99/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:34.789 [100/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:34.789 [101/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:34.789 [102/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:34.789 [103/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:34.789 [104/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:34.789 [105/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:34.789 [106/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:35.052 [107/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:35.052 [108/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:35.052 [109/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:35.052 [110/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:35.052 [111/268] Linking static target 
lib/librte_rcu.a 00:02:35.052 [112/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:35.052 [113/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:35.052 [114/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:35.052 [115/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:35.052 [116/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:35.052 [117/268] Linking static target lib/librte_mempool.a 00:02:35.052 [118/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:35.052 [119/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:35.052 [120/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:35.052 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:35.052 [122/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:35.052 [123/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:35.052 [124/268] Linking target lib/librte_telemetry.so.24.1 00:02:35.052 [125/268] Linking static target lib/librte_eal.a 00:02:35.052 [126/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:35.052 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:35.052 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:35.313 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:35.313 [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:35.313 [131/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:35.313 [132/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:35.313 [133/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:35.313 [134/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:35.313 [135/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:35.313 [136/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:35.313 [137/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:35.314 [138/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:35.314 [139/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:35.314 [140/268] Linking static target lib/librte_net.a 00:02:35.314 [141/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:35.573 [142/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:35.573 [143/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:35.573 [144/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:35.573 [145/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:35.573 [146/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:35.573 [147/268] Linking static target lib/librte_cmdline.a 00:02:35.573 [148/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:35.573 [149/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:35.836 [150/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:35.836 [151/268] Linking static target lib/librte_timer.a 00:02:35.836 [152/268] Compiling C object 
lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:35.836 [153/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:35.836 [154/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:35.836 [155/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:35.836 [156/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:35.836 [157/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:35.836 [158/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:35.836 [159/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:36.095 [160/268] Linking static target lib/librte_dmadev.a 00:02:36.095 [161/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:36.095 [162/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:36.095 [163/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:36.095 [164/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:36.095 [165/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:36.095 [166/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:36.095 [167/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:36.095 [168/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:36.095 [169/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:36.095 [170/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:36.095 [171/268] Linking static target lib/librte_power.a 00:02:36.095 [172/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:36.353 [173/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:36.353 [174/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:36.353 [175/268] Linking static target lib/librte_hash.a 00:02:36.353 [176/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:36.353 [177/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:36.353 [178/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:36.353 [179/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:36.353 [180/268] Linking static target lib/librte_compressdev.a 00:02:36.353 [181/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:36.353 [182/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:36.353 [183/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:36.353 [184/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:36.353 [185/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:36.353 [186/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:36.353 [187/268] Linking static target lib/librte_mbuf.a 00:02:36.353 [188/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:36.353 [189/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:36.353 [190/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:36.611 [191/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture 
output) 00:02:36.611 [192/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:36.611 [193/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:36.611 [194/268] Linking static target lib/librte_reorder.a 00:02:36.611 [195/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:36.611 [196/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:36.611 [197/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:36.611 [198/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:36.611 [199/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:36.611 [200/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:36.611 [201/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:36.611 [202/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:36.611 [203/268] Linking static target drivers/librte_bus_vdev.a 00:02:36.611 [204/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:36.611 [205/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:36.611 [206/268] Linking static target lib/librte_security.a 00:02:36.869 [207/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:36.869 [208/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:36.869 [209/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:36.869 [210/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:36.869 [211/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:36.869 [212/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:36.869 [213/268] Linking static target drivers/librte_bus_pci.a 00:02:36.869 [214/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:36.869 [215/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:36.869 [216/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:36.869 [217/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:36.869 [218/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:36.869 [219/268] Linking static target drivers/librte_mempool_ring.a 00:02:36.869 [220/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:37.127 [221/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:37.127 [222/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:37.127 [223/268] Linking static target lib/librte_cryptodev.a 00:02:37.127 [224/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:37.127 [225/268] Linking static target lib/librte_ethdev.a 00:02:37.385 [226/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:38.320 [227/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:39.255 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:42.540 [229/268] 
Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:42.540 [230/268] Linking target lib/librte_eal.so.24.1 00:02:42.540 [231/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:42.540 [232/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:42.540 [233/268] Linking target lib/librte_ring.so.24.1 00:02:42.540 [234/268] Linking target lib/librte_timer.so.24.1 00:02:42.540 [235/268] Linking target lib/librte_pci.so.24.1 00:02:42.540 [236/268] Linking target lib/librte_meter.so.24.1 00:02:42.540 [237/268] Linking target drivers/librte_bus_vdev.so.24.1 00:02:42.540 [238/268] Linking target lib/librte_dmadev.so.24.1 00:02:42.540 [239/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:42.540 [240/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:42.540 [241/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:42.540 [242/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:42.540 [243/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:42.540 [244/268] Linking target lib/librte_rcu.so.24.1 00:02:42.540 [245/268] Linking target lib/librte_mempool.so.24.1 00:02:42.540 [246/268] Linking target drivers/librte_bus_pci.so.24.1 00:02:42.798 [247/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:42.798 [248/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:42.798 [249/268] Linking target lib/librte_mbuf.so.24.1 00:02:42.798 [250/268] Linking target drivers/librte_mempool_ring.so.24.1 00:02:42.798 [251/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:43.061 [252/268] Linking target lib/librte_net.so.24.1 00:02:43.062 [253/268] Linking target lib/librte_compressdev.so.24.1 00:02:43.062 [254/268] Linking target lib/librte_reorder.so.24.1 00:02:43.062 [255/268] Linking target lib/librte_cryptodev.so.24.1 00:02:43.062 [256/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:43.062 [257/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:43.358 [258/268] Linking target lib/librte_security.so.24.1 00:02:43.358 [259/268] Linking target lib/librte_cmdline.so.24.1 00:02:43.358 [260/268] Linking target lib/librte_hash.so.24.1 00:02:43.358 [261/268] Linking target lib/librte_ethdev.so.24.1 00:02:43.358 [262/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:43.358 [263/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:43.622 [264/268] Linking target lib/librte_power.so.24.1 00:02:47.806 [265/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:47.806 [266/268] Linking static target lib/librte_vhost.a 00:02:48.378 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:48.635 [268/268] Linking target lib/librte_vhost.so.24.1 00:02:48.635 INFO: autodetecting backend as ninja 00:02:48.635 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 48 00:02:50.536 CC lib/ut/ut.o 00:02:50.536 CC lib/ut_mock/mock.o 00:02:50.536 CC lib/log/log.o 00:02:50.536 CC lib/log/log_flags.o 00:02:50.536 CC 
lib/log/log_deprecated.o 00:02:50.536 LIB libspdk_log.a 00:02:50.536 LIB libspdk_ut.a 00:02:50.536 SO libspdk_log.so.7.0 00:02:50.536 SO libspdk_ut.so.2.0 00:02:50.536 LIB libspdk_ut_mock.a 00:02:50.536 SO libspdk_ut_mock.so.6.0 00:02:50.536 SYMLINK libspdk_ut.so 00:02:50.536 SYMLINK libspdk_log.so 00:02:50.536 SYMLINK libspdk_ut_mock.so 00:02:50.795 CC lib/ioat/ioat.o 00:02:50.795 CXX lib/trace_parser/trace.o 00:02:50.795 CC lib/util/base64.o 00:02:50.795 CC lib/dma/dma.o 00:02:50.795 CC lib/util/bit_array.o 00:02:50.795 CC lib/util/cpuset.o 00:02:50.795 CC lib/util/crc16.o 00:02:50.795 CC lib/util/crc32.o 00:02:50.795 CC lib/util/crc32c.o 00:02:50.795 CC lib/util/crc32_ieee.o 00:02:50.795 CC lib/util/crc64.o 00:02:50.795 CC lib/util/dif.o 00:02:50.795 CC lib/util/fd.o 00:02:50.795 CC lib/util/fd_group.o 00:02:50.795 CC lib/util/file.o 00:02:50.795 CC lib/util/hexlify.o 00:02:50.795 CC lib/util/iov.o 00:02:50.795 CC lib/util/math.o 00:02:50.795 CC lib/util/net.o 00:02:50.795 CC lib/util/pipe.o 00:02:50.795 CC lib/util/strerror_tls.o 00:02:50.795 CC lib/util/string.o 00:02:50.795 CC lib/util/uuid.o 00:02:50.795 CC lib/util/zipf.o 00:02:50.795 CC lib/util/xor.o 00:02:50.795 CC lib/vfio_user/host/vfio_user_pci.o 00:02:50.795 CC lib/vfio_user/host/vfio_user.o 00:02:51.054 LIB libspdk_dma.a 00:02:51.054 SO libspdk_dma.so.4.0 00:02:51.054 SYMLINK libspdk_dma.so 00:02:51.054 LIB libspdk_ioat.a 00:02:51.054 SO libspdk_ioat.so.7.0 00:02:51.054 SYMLINK libspdk_ioat.so 00:02:51.313 LIB libspdk_vfio_user.a 00:02:51.313 SO libspdk_vfio_user.so.5.0 00:02:51.313 SYMLINK libspdk_vfio_user.so 00:02:51.313 LIB libspdk_util.a 00:02:51.572 SO libspdk_util.so.10.0 00:02:51.830 SYMLINK libspdk_util.so 00:02:52.089 LIB libspdk_trace_parser.a 00:02:52.089 SO libspdk_trace_parser.so.5.0 00:02:52.089 CC lib/conf/conf.o 00:02:52.089 CC lib/rdma_provider/common.o 00:02:52.089 CC lib/json/json_parse.o 00:02:52.089 CC lib/idxd/idxd.o 00:02:52.089 CC lib/rdma_utils/rdma_utils.o 00:02:52.089 CC lib/json/json_util.o 00:02:52.089 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:52.089 CC lib/idxd/idxd_user.o 00:02:52.089 CC lib/json/json_write.o 00:02:52.089 CC lib/idxd/idxd_kernel.o 00:02:52.089 CC lib/vmd/vmd.o 00:02:52.089 CC lib/vmd/led.o 00:02:52.089 CC lib/env_dpdk/env.o 00:02:52.089 CC lib/env_dpdk/memory.o 00:02:52.089 CC lib/env_dpdk/pci.o 00:02:52.089 CC lib/env_dpdk/init.o 00:02:52.089 CC lib/env_dpdk/threads.o 00:02:52.089 CC lib/env_dpdk/pci_ioat.o 00:02:52.089 CC lib/env_dpdk/pci_virtio.o 00:02:52.089 CC lib/env_dpdk/pci_vmd.o 00:02:52.089 CC lib/env_dpdk/pci_idxd.o 00:02:52.089 CC lib/env_dpdk/pci_event.o 00:02:52.089 CC lib/env_dpdk/sigbus_handler.o 00:02:52.089 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:52.089 CC lib/env_dpdk/pci_dpdk.o 00:02:52.089 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:52.089 SYMLINK libspdk_trace_parser.so 00:02:52.347 LIB libspdk_rdma_provider.a 00:02:52.347 SO libspdk_rdma_provider.so.6.0 00:02:52.347 LIB libspdk_conf.a 00:02:52.347 SO libspdk_conf.so.6.0 00:02:52.347 LIB libspdk_rdma_utils.a 00:02:52.347 SYMLINK libspdk_rdma_provider.so 00:02:52.347 SO libspdk_rdma_utils.so.1.0 00:02:52.347 LIB libspdk_json.a 00:02:52.347 SYMLINK libspdk_conf.so 00:02:52.347 SO libspdk_json.so.6.0 00:02:52.347 SYMLINK libspdk_rdma_utils.so 00:02:52.605 SYMLINK libspdk_json.so 00:02:52.605 LIB libspdk_idxd.a 00:02:52.605 SO libspdk_idxd.so.12.0 00:02:52.605 CC lib/jsonrpc/jsonrpc_server.o 00:02:52.605 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:52.605 CC lib/jsonrpc/jsonrpc_client.o 00:02:52.605 CC 
lib/jsonrpc/jsonrpc_client_tcp.o 00:02:52.864 LIB libspdk_vmd.a 00:02:52.864 SYMLINK libspdk_idxd.so 00:02:52.864 SO libspdk_vmd.so.6.0 00:02:52.864 SYMLINK libspdk_vmd.so 00:02:52.864 LIB libspdk_jsonrpc.a 00:02:53.123 SO libspdk_jsonrpc.so.6.0 00:02:53.123 SYMLINK libspdk_jsonrpc.so 00:02:53.381 CC lib/rpc/rpc.o 00:02:53.640 LIB libspdk_rpc.a 00:02:53.898 SO libspdk_rpc.so.6.0 00:02:53.898 SYMLINK libspdk_rpc.so 00:02:54.156 CC lib/notify/notify.o 00:02:54.156 CC lib/notify/notify_rpc.o 00:02:54.156 CC lib/trace/trace.o 00:02:54.156 CC lib/keyring/keyring.o 00:02:54.156 CC lib/keyring/keyring_rpc.o 00:02:54.156 CC lib/trace/trace_flags.o 00:02:54.156 CC lib/trace/trace_rpc.o 00:02:54.156 LIB libspdk_env_dpdk.a 00:02:54.156 SO libspdk_env_dpdk.so.15.0 00:02:54.156 LIB libspdk_notify.a 00:02:54.415 SO libspdk_notify.so.6.0 00:02:54.415 SYMLINK libspdk_env_dpdk.so 00:02:54.415 SYMLINK libspdk_notify.so 00:02:54.415 LIB libspdk_trace.a 00:02:54.415 LIB libspdk_keyring.a 00:02:54.415 SO libspdk_keyring.so.1.0 00:02:54.415 SO libspdk_trace.so.10.0 00:02:54.415 SYMLINK libspdk_keyring.so 00:02:54.415 SYMLINK libspdk_trace.so 00:02:54.674 CC lib/sock/sock.o 00:02:54.674 CC lib/sock/sock_rpc.o 00:02:54.674 CC lib/thread/thread.o 00:02:54.674 CC lib/thread/iobuf.o 00:02:55.240 LIB libspdk_sock.a 00:02:55.240 SO libspdk_sock.so.10.0 00:02:55.499 SYMLINK libspdk_sock.so 00:02:55.499 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:55.499 CC lib/nvme/nvme_ctrlr.o 00:02:55.499 CC lib/nvme/nvme_ns_cmd.o 00:02:55.499 CC lib/nvme/nvme_fabric.o 00:02:55.499 CC lib/nvme/nvme_ns.o 00:02:55.499 CC lib/nvme/nvme_pcie_common.o 00:02:55.499 CC lib/nvme/nvme_pcie.o 00:02:55.499 CC lib/nvme/nvme_qpair.o 00:02:55.499 CC lib/nvme/nvme.o 00:02:55.499 CC lib/nvme/nvme_quirks.o 00:02:55.499 CC lib/nvme/nvme_transport.o 00:02:55.499 CC lib/nvme/nvme_discovery.o 00:02:55.499 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:55.499 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:55.499 CC lib/nvme/nvme_tcp.o 00:02:55.499 CC lib/nvme/nvme_opal.o 00:02:55.499 CC lib/nvme/nvme_io_msg.o 00:02:55.499 CC lib/nvme/nvme_poll_group.o 00:02:55.499 CC lib/nvme/nvme_zns.o 00:02:55.499 CC lib/nvme/nvme_stubs.o 00:02:55.499 CC lib/nvme/nvme_auth.o 00:02:55.499 CC lib/nvme/nvme_cuse.o 00:02:55.499 CC lib/nvme/nvme_vfio_user.o 00:02:55.499 CC lib/nvme/nvme_rdma.o 00:02:56.875 LIB libspdk_thread.a 00:02:56.875 SO libspdk_thread.so.10.1 00:02:56.875 SYMLINK libspdk_thread.so 00:02:57.132 CC lib/blob/blobstore.o 00:02:57.132 CC lib/blob/request.o 00:02:57.132 CC lib/blob/zeroes.o 00:02:57.132 CC lib/init/json_config.o 00:02:57.132 CC lib/blob/blob_bs_dev.o 00:02:57.132 CC lib/init/subsystem.o 00:02:57.132 CC lib/vfu_tgt/tgt_endpoint.o 00:02:57.132 CC lib/init/subsystem_rpc.o 00:02:57.132 CC lib/init/rpc.o 00:02:57.132 CC lib/vfu_tgt/tgt_rpc.o 00:02:57.132 CC lib/accel/accel.o 00:02:57.132 CC lib/accel/accel_rpc.o 00:02:57.132 CC lib/accel/accel_sw.o 00:02:57.132 CC lib/virtio/virtio.o 00:02:57.132 CC lib/virtio/virtio_vhost_user.o 00:02:57.132 CC lib/virtio/virtio_vfio_user.o 00:02:57.132 CC lib/virtio/virtio_pci.o 00:02:57.391 LIB libspdk_init.a 00:02:57.391 SO libspdk_init.so.5.0 00:02:57.391 LIB libspdk_vfu_tgt.a 00:02:57.391 LIB libspdk_virtio.a 00:02:57.391 SO libspdk_vfu_tgt.so.3.0 00:02:57.391 SYMLINK libspdk_init.so 00:02:57.391 SO libspdk_virtio.so.7.0 00:02:57.648 SYMLINK libspdk_vfu_tgt.so 00:02:57.648 SYMLINK libspdk_virtio.so 00:02:57.648 CC lib/event/reactor.o 00:02:57.648 CC lib/event/app.o 00:02:57.648 CC lib/event/app_rpc.o 00:02:57.648 CC 
lib/event/log_rpc.o 00:02:57.648 CC lib/event/scheduler_static.o 00:02:58.214 LIB libspdk_accel.a 00:02:58.214 LIB libspdk_nvme.a 00:02:58.214 SO libspdk_accel.so.16.0 00:02:58.473 LIB libspdk_event.a 00:02:58.473 SYMLINK libspdk_accel.so 00:02:58.473 SO libspdk_event.so.14.0 00:02:58.473 SO libspdk_nvme.so.13.1 00:02:58.473 SYMLINK libspdk_event.so 00:02:58.730 CC lib/bdev/bdev_rpc.o 00:02:58.730 CC lib/bdev/bdev.o 00:02:58.730 CC lib/bdev/bdev_zone.o 00:02:58.730 CC lib/bdev/part.o 00:02:58.730 CC lib/bdev/scsi_nvme.o 00:02:58.730 SYMLINK libspdk_nvme.so 00:03:00.659 LIB libspdk_blob.a 00:03:00.659 SO libspdk_blob.so.11.0 00:03:00.659 SYMLINK libspdk_blob.so 00:03:00.917 CC lib/blobfs/blobfs.o 00:03:00.917 CC lib/blobfs/tree.o 00:03:00.917 CC lib/lvol/lvol.o 00:03:01.174 LIB libspdk_bdev.a 00:03:01.174 SO libspdk_bdev.so.16.0 00:03:01.174 SYMLINK libspdk_bdev.so 00:03:01.436 CC lib/ublk/ublk.o 00:03:01.436 CC lib/nvmf/ctrlr.o 00:03:01.436 CC lib/ublk/ublk_rpc.o 00:03:01.436 CC lib/nvmf/ctrlr_discovery.o 00:03:01.436 CC lib/scsi/dev.o 00:03:01.436 CC lib/nvmf/ctrlr_bdev.o 00:03:01.436 CC lib/nvmf/subsystem.o 00:03:01.436 CC lib/scsi/lun.o 00:03:01.436 CC lib/nvmf/nvmf.o 00:03:01.436 CC lib/nvmf/nvmf_rpc.o 00:03:01.436 CC lib/scsi/port.o 00:03:01.436 CC lib/nbd/nbd.o 00:03:01.436 CC lib/nvmf/transport.o 00:03:01.436 CC lib/scsi/scsi.o 00:03:01.436 CC lib/nvmf/tcp.o 00:03:01.436 CC lib/nbd/nbd_rpc.o 00:03:01.436 CC lib/nvmf/stubs.o 00:03:01.436 CC lib/scsi/scsi_bdev.o 00:03:01.436 CC lib/nvmf/mdns_server.o 00:03:01.436 CC lib/scsi/scsi_pr.o 00:03:01.436 CC lib/nvmf/vfio_user.o 00:03:01.436 CC lib/scsi/scsi_rpc.o 00:03:01.436 CC lib/nvmf/rdma.o 00:03:01.436 CC lib/nvmf/auth.o 00:03:01.436 CC lib/scsi/task.o 00:03:01.436 CC lib/ftl/ftl_core.o 00:03:01.436 CC lib/ftl/ftl_init.o 00:03:01.436 CC lib/ftl/ftl_layout.o 00:03:01.436 CC lib/ftl/ftl_debug.o 00:03:01.436 CC lib/ftl/ftl_io.o 00:03:01.436 CC lib/ftl/ftl_sb.o 00:03:01.436 CC lib/ftl/ftl_l2p.o 00:03:01.436 CC lib/ftl/ftl_l2p_flat.o 00:03:01.436 CC lib/ftl/ftl_nv_cache.o 00:03:01.436 CC lib/ftl/ftl_band.o 00:03:01.436 CC lib/ftl/ftl_band_ops.o 00:03:01.436 CC lib/ftl/ftl_writer.o 00:03:01.436 CC lib/ftl/ftl_reloc.o 00:03:01.436 CC lib/ftl/ftl_rq.o 00:03:01.436 CC lib/ftl/ftl_l2p_cache.o 00:03:01.436 CC lib/ftl/ftl_p2l.o 00:03:01.436 CC lib/ftl/mngt/ftl_mngt.o 00:03:01.436 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:01.436 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:01.436 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:01.436 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:01.699 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:01.699 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:01.699 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:01.962 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:01.962 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:01.962 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:01.962 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:01.962 CC lib/ftl/utils/ftl_conf.o 00:03:01.962 CC lib/ftl/utils/ftl_md.o 00:03:01.962 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:01.962 CC lib/ftl/utils/ftl_mempool.o 00:03:01.962 CC lib/ftl/utils/ftl_bitmap.o 00:03:01.962 CC lib/ftl/utils/ftl_property.o 00:03:01.962 LIB libspdk_blobfs.a 00:03:01.962 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:01.962 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:01.962 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:01.962 SO libspdk_blobfs.so.10.0 00:03:01.962 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:01.962 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:01.962 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:01.962 SYMLINK libspdk_blobfs.so 00:03:01.962 
CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:02.222 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:02.222 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:02.222 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:02.222 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:02.222 CC lib/ftl/base/ftl_base_dev.o 00:03:02.222 CC lib/ftl/base/ftl_base_bdev.o 00:03:02.222 CC lib/ftl/ftl_trace.o 00:03:02.222 LIB libspdk_lvol.a 00:03:02.222 SO libspdk_lvol.so.10.0 00:03:02.223 LIB libspdk_nbd.a 00:03:02.223 SO libspdk_nbd.so.7.0 00:03:02.223 SYMLINK libspdk_lvol.so 00:03:02.481 SYMLINK libspdk_nbd.so 00:03:02.481 LIB libspdk_scsi.a 00:03:02.481 SO libspdk_scsi.so.9.0 00:03:02.481 LIB libspdk_ublk.a 00:03:02.481 SO libspdk_ublk.so.3.0 00:03:02.481 SYMLINK libspdk_scsi.so 00:03:02.481 SYMLINK libspdk_ublk.so 00:03:02.740 CC lib/vhost/vhost.o 00:03:02.740 CC lib/iscsi/conn.o 00:03:02.740 CC lib/iscsi/init_grp.o 00:03:02.740 CC lib/vhost/vhost_rpc.o 00:03:02.740 CC lib/iscsi/iscsi.o 00:03:02.740 CC lib/vhost/vhost_scsi.o 00:03:02.740 CC lib/iscsi/md5.o 00:03:02.740 CC lib/vhost/vhost_blk.o 00:03:02.740 CC lib/vhost/rte_vhost_user.o 00:03:02.740 CC lib/iscsi/param.o 00:03:02.740 CC lib/iscsi/portal_grp.o 00:03:02.740 CC lib/iscsi/tgt_node.o 00:03:02.740 CC lib/iscsi/iscsi_subsystem.o 00:03:02.740 CC lib/iscsi/iscsi_rpc.o 00:03:02.740 CC lib/iscsi/task.o 00:03:02.998 LIB libspdk_ftl.a 00:03:03.256 SO libspdk_ftl.so.9.0 00:03:03.514 SYMLINK libspdk_ftl.so 00:03:04.080 LIB libspdk_vhost.a 00:03:04.080 SO libspdk_vhost.so.8.0 00:03:04.338 SYMLINK libspdk_vhost.so 00:03:04.338 LIB libspdk_iscsi.a 00:03:04.338 LIB libspdk_nvmf.a 00:03:04.338 SO libspdk_iscsi.so.8.0 00:03:04.596 SO libspdk_nvmf.so.19.0 00:03:04.596 SYMLINK libspdk_iscsi.so 00:03:04.854 SYMLINK libspdk_nvmf.so 00:03:05.113 CC module/env_dpdk/env_dpdk_rpc.o 00:03:05.113 CC module/vfu_device/vfu_virtio.o 00:03:05.113 CC module/vfu_device/vfu_virtio_blk.o 00:03:05.113 CC module/vfu_device/vfu_virtio_scsi.o 00:03:05.113 CC module/vfu_device/vfu_virtio_rpc.o 00:03:05.113 CC module/sock/posix/posix.o 00:03:05.113 CC module/blob/bdev/blob_bdev.o 00:03:05.113 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:05.113 CC module/accel/dsa/accel_dsa.o 00:03:05.113 CC module/accel/dsa/accel_dsa_rpc.o 00:03:05.113 CC module/scheduler/gscheduler/gscheduler.o 00:03:05.113 CC module/accel/ioat/accel_ioat.o 00:03:05.113 CC module/keyring/file/keyring.o 00:03:05.113 CC module/accel/ioat/accel_ioat_rpc.o 00:03:05.113 CC module/keyring/file/keyring_rpc.o 00:03:05.113 CC module/accel/error/accel_error.o 00:03:05.113 CC module/accel/error/accel_error_rpc.o 00:03:05.113 CC module/accel/iaa/accel_iaa.o 00:03:05.113 CC module/keyring/linux/keyring.o 00:03:05.113 CC module/keyring/linux/keyring_rpc.o 00:03:05.113 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:05.113 CC module/accel/iaa/accel_iaa_rpc.o 00:03:05.113 LIB libspdk_env_dpdk_rpc.a 00:03:05.113 SO libspdk_env_dpdk_rpc.so.6.0 00:03:05.371 SYMLINK libspdk_env_dpdk_rpc.so 00:03:05.371 LIB libspdk_keyring_file.a 00:03:05.371 LIB libspdk_keyring_linux.a 00:03:05.371 LIB libspdk_scheduler_dpdk_governor.a 00:03:05.371 LIB libspdk_scheduler_gscheduler.a 00:03:05.371 SO libspdk_keyring_file.so.1.0 00:03:05.371 LIB libspdk_accel_error.a 00:03:05.371 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:05.371 SO libspdk_keyring_linux.so.1.0 00:03:05.371 SO libspdk_scheduler_gscheduler.so.4.0 00:03:05.372 LIB libspdk_accel_ioat.a 00:03:05.372 SO libspdk_accel_error.so.2.0 00:03:05.372 LIB libspdk_accel_iaa.a 00:03:05.372 LIB libspdk_scheduler_dynamic.a 00:03:05.372 
SYMLINK libspdk_keyring_file.so 00:03:05.372 SO libspdk_accel_ioat.so.6.0 00:03:05.372 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:05.372 SYMLINK libspdk_scheduler_gscheduler.so 00:03:05.372 SO libspdk_scheduler_dynamic.so.4.0 00:03:05.372 SYMLINK libspdk_keyring_linux.so 00:03:05.372 SO libspdk_accel_iaa.so.3.0 00:03:05.372 SYMLINK libspdk_accel_error.so 00:03:05.372 SYMLINK libspdk_accel_ioat.so 00:03:05.372 SYMLINK libspdk_scheduler_dynamic.so 00:03:05.372 LIB libspdk_accel_dsa.a 00:03:05.372 SYMLINK libspdk_accel_iaa.so 00:03:05.372 LIB libspdk_blob_bdev.a 00:03:05.630 SO libspdk_accel_dsa.so.5.0 00:03:05.630 SO libspdk_blob_bdev.so.11.0 00:03:05.630 SYMLINK libspdk_blob_bdev.so 00:03:05.630 SYMLINK libspdk_accel_dsa.so 00:03:05.889 LIB libspdk_vfu_device.a 00:03:05.889 SO libspdk_vfu_device.so.3.0 00:03:05.889 CC module/blobfs/bdev/blobfs_bdev.o 00:03:05.889 CC module/bdev/nvme/bdev_nvme.o 00:03:05.889 CC module/bdev/malloc/bdev_malloc.o 00:03:05.889 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:05.889 CC module/bdev/error/vbdev_error.o 00:03:05.889 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:05.889 CC module/bdev/error/vbdev_error_rpc.o 00:03:05.889 CC module/bdev/nvme/nvme_rpc.o 00:03:05.889 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:05.889 CC module/bdev/nvme/bdev_mdns_client.o 00:03:05.889 CC module/bdev/delay/vbdev_delay.o 00:03:05.889 CC module/bdev/nvme/vbdev_opal.o 00:03:05.889 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:05.889 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:05.889 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:05.889 CC module/bdev/raid/bdev_raid.o 00:03:05.889 CC module/bdev/raid/bdev_raid_rpc.o 00:03:05.889 CC module/bdev/gpt/gpt.o 00:03:05.889 CC module/bdev/split/vbdev_split.o 00:03:05.889 CC module/bdev/raid/bdev_raid_sb.o 00:03:05.889 CC module/bdev/gpt/vbdev_gpt.o 00:03:05.889 CC module/bdev/raid/raid0.o 00:03:05.889 CC module/bdev/split/vbdev_split_rpc.o 00:03:05.889 CC module/bdev/aio/bdev_aio.o 00:03:05.889 CC module/bdev/null/bdev_null.o 00:03:05.889 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:05.889 CC module/bdev/raid/raid1.o 00:03:05.889 CC module/bdev/ftl/bdev_ftl.o 00:03:05.889 CC module/bdev/raid/concat.o 00:03:05.889 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:05.889 CC module/bdev/null/bdev_null_rpc.o 00:03:05.889 CC module/bdev/passthru/vbdev_passthru.o 00:03:05.889 CC module/bdev/aio/bdev_aio_rpc.o 00:03:05.889 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:05.889 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:05.889 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:05.889 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:05.889 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:05.889 CC module/bdev/lvol/vbdev_lvol.o 00:03:05.889 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:05.889 CC module/bdev/iscsi/bdev_iscsi.o 00:03:05.889 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:06.147 SYMLINK libspdk_vfu_device.so 00:03:06.147 LIB libspdk_sock_posix.a 00:03:06.147 SO libspdk_sock_posix.so.6.0 00:03:06.147 SYMLINK libspdk_sock_posix.so 00:03:06.405 LIB libspdk_bdev_error.a 00:03:06.405 LIB libspdk_blobfs_bdev.a 00:03:06.405 SO libspdk_bdev_error.so.6.0 00:03:06.405 SO libspdk_blobfs_bdev.so.6.0 00:03:06.405 LIB libspdk_bdev_split.a 00:03:06.405 SYMLINK libspdk_bdev_error.so 00:03:06.405 SO libspdk_bdev_split.so.6.0 00:03:06.405 SYMLINK libspdk_blobfs_bdev.so 00:03:06.405 SYMLINK libspdk_bdev_split.so 00:03:06.405 LIB libspdk_bdev_passthru.a 00:03:06.405 LIB libspdk_bdev_null.a 00:03:06.405 LIB libspdk_bdev_gpt.a 00:03:06.405 SO 
libspdk_bdev_null.so.6.0 00:03:06.405 SO libspdk_bdev_passthru.so.6.0 00:03:06.405 LIB libspdk_bdev_ftl.a 00:03:06.405 SO libspdk_bdev_gpt.so.6.0 00:03:06.405 LIB libspdk_bdev_zone_block.a 00:03:06.405 SO libspdk_bdev_ftl.so.6.0 00:03:06.405 LIB libspdk_bdev_aio.a 00:03:06.405 SO libspdk_bdev_zone_block.so.6.0 00:03:06.663 SYMLINK libspdk_bdev_null.so 00:03:06.663 SYMLINK libspdk_bdev_passthru.so 00:03:06.663 SO libspdk_bdev_aio.so.6.0 00:03:06.663 SYMLINK libspdk_bdev_gpt.so 00:03:06.663 LIB libspdk_bdev_delay.a 00:03:06.663 LIB libspdk_bdev_malloc.a 00:03:06.663 SYMLINK libspdk_bdev_ftl.so 00:03:06.663 SYMLINK libspdk_bdev_zone_block.so 00:03:06.663 LIB libspdk_bdev_iscsi.a 00:03:06.663 SO libspdk_bdev_delay.so.6.0 00:03:06.663 SO libspdk_bdev_malloc.so.6.0 00:03:06.663 SO libspdk_bdev_iscsi.so.6.0 00:03:06.663 SYMLINK libspdk_bdev_aio.so 00:03:06.663 SYMLINK libspdk_bdev_malloc.so 00:03:06.663 SYMLINK libspdk_bdev_delay.so 00:03:06.663 SYMLINK libspdk_bdev_iscsi.so 00:03:06.663 LIB libspdk_bdev_lvol.a 00:03:06.663 LIB libspdk_bdev_virtio.a 00:03:06.663 SO libspdk_bdev_lvol.so.6.0 00:03:06.922 SO libspdk_bdev_virtio.so.6.0 00:03:06.922 SYMLINK libspdk_bdev_lvol.so 00:03:06.922 SYMLINK libspdk_bdev_virtio.so 00:03:07.855 LIB libspdk_bdev_raid.a 00:03:07.855 SO libspdk_bdev_raid.so.6.0 00:03:07.855 SYMLINK libspdk_bdev_raid.so 00:03:08.791 LIB libspdk_bdev_nvme.a 00:03:08.791 SO libspdk_bdev_nvme.so.7.0 00:03:09.049 SYMLINK libspdk_bdev_nvme.so 00:03:09.344 CC module/event/subsystems/iobuf/iobuf.o 00:03:09.344 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:09.344 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:03:09.344 CC module/event/subsystems/scheduler/scheduler.o 00:03:09.344 CC module/event/subsystems/keyring/keyring.o 00:03:09.344 CC module/event/subsystems/vmd/vmd.o 00:03:09.344 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:09.344 CC module/event/subsystems/sock/sock.o 00:03:09.344 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:09.602 LIB libspdk_event_vhost_blk.a 00:03:09.602 LIB libspdk_event_vmd.a 00:03:09.602 LIB libspdk_event_keyring.a 00:03:09.602 LIB libspdk_event_scheduler.a 00:03:09.602 LIB libspdk_event_vfu_tgt.a 00:03:09.602 LIB libspdk_event_sock.a 00:03:09.602 SO libspdk_event_vhost_blk.so.3.0 00:03:09.602 SO libspdk_event_scheduler.so.4.0 00:03:09.602 SO libspdk_event_keyring.so.1.0 00:03:09.602 SO libspdk_event_vmd.so.6.0 00:03:09.602 LIB libspdk_event_iobuf.a 00:03:09.602 SO libspdk_event_vfu_tgt.so.3.0 00:03:09.602 SO libspdk_event_sock.so.5.0 00:03:09.602 SO libspdk_event_iobuf.so.3.0 00:03:09.602 SYMLINK libspdk_event_vhost_blk.so 00:03:09.602 SYMLINK libspdk_event_scheduler.so 00:03:09.602 SYMLINK libspdk_event_keyring.so 00:03:09.602 SYMLINK libspdk_event_vfu_tgt.so 00:03:09.602 SYMLINK libspdk_event_vmd.so 00:03:09.602 SYMLINK libspdk_event_sock.so 00:03:09.602 SYMLINK libspdk_event_iobuf.so 00:03:09.862 CC module/event/subsystems/accel/accel.o 00:03:10.120 LIB libspdk_event_accel.a 00:03:10.120 SO libspdk_event_accel.so.6.0 00:03:10.120 SYMLINK libspdk_event_accel.so 00:03:10.378 CC module/event/subsystems/bdev/bdev.o 00:03:10.944 LIB libspdk_event_bdev.a 00:03:10.944 SO libspdk_event_bdev.so.6.0 00:03:10.944 SYMLINK libspdk_event_bdev.so 00:03:11.204 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:11.204 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:11.204 CC module/event/subsystems/ublk/ublk.o 00:03:11.204 CC module/event/subsystems/scsi/scsi.o 00:03:11.204 CC module/event/subsystems/nbd/nbd.o 00:03:11.204 LIB libspdk_event_ublk.a 
00:03:11.204 LIB libspdk_event_scsi.a 00:03:11.204 SO libspdk_event_ublk.so.3.0 00:03:11.204 SO libspdk_event_scsi.so.6.0 00:03:11.463 SYMLINK libspdk_event_ublk.so 00:03:11.463 LIB libspdk_event_nbd.a 00:03:11.463 SYMLINK libspdk_event_scsi.so 00:03:11.463 SO libspdk_event_nbd.so.6.0 00:03:11.463 SYMLINK libspdk_event_nbd.so 00:03:11.463 LIB libspdk_event_nvmf.a 00:03:11.463 SO libspdk_event_nvmf.so.6.0 00:03:11.463 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:11.463 CC module/event/subsystems/iscsi/iscsi.o 00:03:11.722 SYMLINK libspdk_event_nvmf.so 00:03:11.980 LIB libspdk_event_vhost_scsi.a 00:03:11.980 LIB libspdk_event_iscsi.a 00:03:11.980 SO libspdk_event_iscsi.so.6.0 00:03:11.980 SO libspdk_event_vhost_scsi.so.3.0 00:03:11.980 SYMLINK libspdk_event_iscsi.so 00:03:11.980 SYMLINK libspdk_event_vhost_scsi.so 00:03:12.238 SO libspdk.so.6.0 00:03:12.238 SYMLINK libspdk.so 00:03:12.238 CC app/spdk_top/spdk_top.o 00:03:12.238 CXX app/trace/trace.o 00:03:12.238 CC app/trace_record/trace_record.o 00:03:12.238 CC app/spdk_nvme_identify/identify.o 00:03:12.238 CC app/spdk_lspci/spdk_lspci.o 00:03:12.238 CC app/spdk_nvme_discover/discovery_aer.o 00:03:12.238 CC app/spdk_nvme_perf/perf.o 00:03:12.238 TEST_HEADER include/spdk/accel.h 00:03:12.238 TEST_HEADER include/spdk/accel_module.h 00:03:12.238 TEST_HEADER include/spdk/barrier.h 00:03:12.238 TEST_HEADER include/spdk/assert.h 00:03:12.501 TEST_HEADER include/spdk/base64.h 00:03:12.501 TEST_HEADER include/spdk/bdev.h 00:03:12.501 CC test/rpc_client/rpc_client_test.o 00:03:12.501 TEST_HEADER include/spdk/bdev_module.h 00:03:12.501 TEST_HEADER include/spdk/bdev_zone.h 00:03:12.501 TEST_HEADER include/spdk/bit_array.h 00:03:12.501 TEST_HEADER include/spdk/bit_pool.h 00:03:12.501 TEST_HEADER include/spdk/blob_bdev.h 00:03:12.501 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:12.501 TEST_HEADER include/spdk/blobfs.h 00:03:12.501 TEST_HEADER include/spdk/blob.h 00:03:12.501 TEST_HEADER include/spdk/conf.h 00:03:12.501 TEST_HEADER include/spdk/config.h 00:03:12.501 TEST_HEADER include/spdk/cpuset.h 00:03:12.501 TEST_HEADER include/spdk/crc16.h 00:03:12.501 TEST_HEADER include/spdk/crc32.h 00:03:12.501 TEST_HEADER include/spdk/crc64.h 00:03:12.501 TEST_HEADER include/spdk/dif.h 00:03:12.501 TEST_HEADER include/spdk/dma.h 00:03:12.501 TEST_HEADER include/spdk/endian.h 00:03:12.501 TEST_HEADER include/spdk/env_dpdk.h 00:03:12.501 TEST_HEADER include/spdk/env.h 00:03:12.501 TEST_HEADER include/spdk/event.h 00:03:12.501 TEST_HEADER include/spdk/fd_group.h 00:03:12.501 TEST_HEADER include/spdk/fd.h 00:03:12.501 TEST_HEADER include/spdk/file.h 00:03:12.501 TEST_HEADER include/spdk/ftl.h 00:03:12.501 TEST_HEADER include/spdk/gpt_spec.h 00:03:12.501 TEST_HEADER include/spdk/hexlify.h 00:03:12.501 TEST_HEADER include/spdk/histogram_data.h 00:03:12.501 TEST_HEADER include/spdk/idxd.h 00:03:12.501 TEST_HEADER include/spdk/idxd_spec.h 00:03:12.501 TEST_HEADER include/spdk/init.h 00:03:12.501 TEST_HEADER include/spdk/ioat.h 00:03:12.501 TEST_HEADER include/spdk/ioat_spec.h 00:03:12.501 TEST_HEADER include/spdk/iscsi_spec.h 00:03:12.501 TEST_HEADER include/spdk/jsonrpc.h 00:03:12.501 TEST_HEADER include/spdk/json.h 00:03:12.501 TEST_HEADER include/spdk/keyring.h 00:03:12.501 TEST_HEADER include/spdk/keyring_module.h 00:03:12.501 TEST_HEADER include/spdk/likely.h 00:03:12.501 TEST_HEADER include/spdk/log.h 00:03:12.501 TEST_HEADER include/spdk/lvol.h 00:03:12.501 TEST_HEADER include/spdk/memory.h 00:03:12.501 TEST_HEADER include/spdk/nbd.h 00:03:12.501 
TEST_HEADER include/spdk/mmio.h 00:03:12.501 TEST_HEADER include/spdk/net.h 00:03:12.501 TEST_HEADER include/spdk/notify.h 00:03:12.501 TEST_HEADER include/spdk/nvme.h 00:03:12.501 TEST_HEADER include/spdk/nvme_intel.h 00:03:12.501 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:12.501 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:12.501 TEST_HEADER include/spdk/nvme_spec.h 00:03:12.501 TEST_HEADER include/spdk/nvme_zns.h 00:03:12.501 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:12.501 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:12.501 TEST_HEADER include/spdk/nvmf.h 00:03:12.501 TEST_HEADER include/spdk/nvmf_spec.h 00:03:12.501 TEST_HEADER include/spdk/nvmf_transport.h 00:03:12.501 TEST_HEADER include/spdk/opal.h 00:03:12.501 TEST_HEADER include/spdk/opal_spec.h 00:03:12.501 TEST_HEADER include/spdk/pci_ids.h 00:03:12.501 TEST_HEADER include/spdk/pipe.h 00:03:12.501 TEST_HEADER include/spdk/queue.h 00:03:12.501 TEST_HEADER include/spdk/reduce.h 00:03:12.501 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:12.501 TEST_HEADER include/spdk/rpc.h 00:03:12.501 TEST_HEADER include/spdk/scheduler.h 00:03:12.501 TEST_HEADER include/spdk/scsi.h 00:03:12.501 TEST_HEADER include/spdk/scsi_spec.h 00:03:12.501 TEST_HEADER include/spdk/sock.h 00:03:12.501 TEST_HEADER include/spdk/stdinc.h 00:03:12.501 TEST_HEADER include/spdk/string.h 00:03:12.501 TEST_HEADER include/spdk/thread.h 00:03:12.501 TEST_HEADER include/spdk/trace_parser.h 00:03:12.501 TEST_HEADER include/spdk/trace.h 00:03:12.501 TEST_HEADER include/spdk/tree.h 00:03:12.501 TEST_HEADER include/spdk/ublk.h 00:03:12.501 TEST_HEADER include/spdk/util.h 00:03:12.501 TEST_HEADER include/spdk/uuid.h 00:03:12.501 TEST_HEADER include/spdk/version.h 00:03:12.501 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:12.501 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:12.501 TEST_HEADER include/spdk/vhost.h 00:03:12.501 TEST_HEADER include/spdk/xor.h 00:03:12.501 TEST_HEADER include/spdk/vmd.h 00:03:12.501 TEST_HEADER include/spdk/zipf.h 00:03:12.501 CXX test/cpp_headers/accel.o 00:03:12.501 CXX test/cpp_headers/accel_module.o 00:03:12.501 CXX test/cpp_headers/assert.o 00:03:12.501 CC app/spdk_dd/spdk_dd.o 00:03:12.502 CXX test/cpp_headers/barrier.o 00:03:12.502 CXX test/cpp_headers/base64.o 00:03:12.502 CXX test/cpp_headers/bdev.o 00:03:12.502 CXX test/cpp_headers/bdev_module.o 00:03:12.502 CXX test/cpp_headers/bdev_zone.o 00:03:12.502 CXX test/cpp_headers/bit_array.o 00:03:12.502 CXX test/cpp_headers/bit_pool.o 00:03:12.502 CXX test/cpp_headers/blob_bdev.o 00:03:12.502 CC app/nvmf_tgt/nvmf_main.o 00:03:12.502 CXX test/cpp_headers/blobfs_bdev.o 00:03:12.502 CXX test/cpp_headers/blobfs.o 00:03:12.502 CXX test/cpp_headers/blob.o 00:03:12.502 CXX test/cpp_headers/conf.o 00:03:12.502 CXX test/cpp_headers/config.o 00:03:12.502 CXX test/cpp_headers/cpuset.o 00:03:12.502 CXX test/cpp_headers/crc16.o 00:03:12.502 CC app/iscsi_tgt/iscsi_tgt.o 00:03:12.502 CC examples/ioat/perf/perf.o 00:03:12.502 CC examples/util/zipf/zipf.o 00:03:12.502 CXX test/cpp_headers/crc32.o 00:03:12.502 CC examples/ioat/verify/verify.o 00:03:12.502 CC app/fio/nvme/fio_plugin.o 00:03:12.502 CC test/app/histogram_perf/histogram_perf.o 00:03:12.502 CC app/spdk_tgt/spdk_tgt.o 00:03:12.502 CC test/thread/poller_perf/poller_perf.o 00:03:12.502 CC test/app/stub/stub.o 00:03:12.502 CC test/env/vtophys/vtophys.o 00:03:12.502 CC test/env/pci/pci_ut.o 00:03:12.502 CC test/app/jsoncat/jsoncat.o 00:03:12.502 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:12.502 CC test/env/memory/memory_ut.o 
00:03:12.502 CC app/fio/bdev/fio_plugin.o 00:03:12.502 CC test/dma/test_dma/test_dma.o 00:03:12.502 CC test/app/bdev_svc/bdev_svc.o 00:03:12.770 LINK spdk_lspci 00:03:12.770 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:12.770 CC test/env/mem_callbacks/mem_callbacks.o 00:03:12.770 LINK spdk_nvme_discover 00:03:12.770 LINK rpc_client_test 00:03:12.770 LINK jsoncat 00:03:12.770 LINK interrupt_tgt 00:03:12.770 LINK zipf 00:03:12.770 LINK vtophys 00:03:12.770 CXX test/cpp_headers/crc64.o 00:03:12.770 LINK nvmf_tgt 00:03:12.770 LINK poller_perf 00:03:12.770 LINK histogram_perf 00:03:12.770 CXX test/cpp_headers/dif.o 00:03:12.770 CXX test/cpp_headers/dma.o 00:03:13.057 CXX test/cpp_headers/endian.o 00:03:13.057 LINK env_dpdk_post_init 00:03:13.057 CXX test/cpp_headers/env_dpdk.o 00:03:13.057 CXX test/cpp_headers/env.o 00:03:13.057 CXX test/cpp_headers/event.o 00:03:13.057 CXX test/cpp_headers/fd_group.o 00:03:13.057 CXX test/cpp_headers/fd.o 00:03:13.057 LINK spdk_trace_record 00:03:13.057 CXX test/cpp_headers/file.o 00:03:13.057 LINK stub 00:03:13.057 CXX test/cpp_headers/ftl.o 00:03:13.057 CXX test/cpp_headers/gpt_spec.o 00:03:13.057 LINK iscsi_tgt 00:03:13.057 CXX test/cpp_headers/hexlify.o 00:03:13.057 LINK verify 00:03:13.057 LINK ioat_perf 00:03:13.057 CXX test/cpp_headers/histogram_data.o 00:03:13.057 CXX test/cpp_headers/idxd.o 00:03:13.057 CXX test/cpp_headers/idxd_spec.o 00:03:13.057 LINK spdk_tgt 00:03:13.057 LINK bdev_svc 00:03:13.057 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:13.057 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:13.057 CXX test/cpp_headers/init.o 00:03:13.057 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:13.057 CXX test/cpp_headers/ioat.o 00:03:13.319 CXX test/cpp_headers/ioat_spec.o 00:03:13.319 LINK spdk_trace 00:03:13.319 LINK spdk_dd 00:03:13.319 CXX test/cpp_headers/iscsi_spec.o 00:03:13.319 CXX test/cpp_headers/json.o 00:03:13.319 CXX test/cpp_headers/jsonrpc.o 00:03:13.319 CXX test/cpp_headers/keyring.o 00:03:13.319 CXX test/cpp_headers/keyring_module.o 00:03:13.319 CXX test/cpp_headers/likely.o 00:03:13.319 CXX test/cpp_headers/log.o 00:03:13.319 CXX test/cpp_headers/lvol.o 00:03:13.319 CXX test/cpp_headers/memory.o 00:03:13.319 CXX test/cpp_headers/mmio.o 00:03:13.319 CXX test/cpp_headers/nbd.o 00:03:13.319 CXX test/cpp_headers/net.o 00:03:13.319 CXX test/cpp_headers/notify.o 00:03:13.319 CXX test/cpp_headers/nvme.o 00:03:13.319 LINK test_dma 00:03:13.319 CXX test/cpp_headers/nvme_intel.o 00:03:13.319 CXX test/cpp_headers/nvme_ocssd.o 00:03:13.319 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:13.319 CXX test/cpp_headers/nvme_spec.o 00:03:13.319 LINK pci_ut 00:03:13.319 CXX test/cpp_headers/nvme_zns.o 00:03:13.319 CXX test/cpp_headers/nvmf_cmd.o 00:03:13.319 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:13.319 CXX test/cpp_headers/nvmf.o 00:03:13.586 CXX test/cpp_headers/nvmf_spec.o 00:03:13.586 CC test/event/reactor/reactor.o 00:03:13.586 LINK nvme_fuzz 00:03:13.586 CXX test/cpp_headers/nvmf_transport.o 00:03:13.586 CXX test/cpp_headers/opal.o 00:03:13.586 CXX test/cpp_headers/opal_spec.o 00:03:13.586 CC test/event/event_perf/event_perf.o 00:03:13.586 CC test/event/reactor_perf/reactor_perf.o 00:03:13.586 LINK spdk_bdev 00:03:13.586 LINK spdk_nvme 00:03:13.586 CC examples/sock/hello_world/hello_sock.o 00:03:13.586 CC examples/vmd/lsvmd/lsvmd.o 00:03:13.586 CXX test/cpp_headers/pci_ids.o 00:03:13.586 CC examples/vmd/led/led.o 00:03:13.586 CXX test/cpp_headers/pipe.o 00:03:13.586 CC test/event/app_repeat/app_repeat.o 00:03:13.586 CC 
examples/idxd/perf/perf.o 00:03:13.586 CC examples/thread/thread/thread_ex.o 00:03:13.586 CXX test/cpp_headers/queue.o 00:03:13.846 CXX test/cpp_headers/reduce.o 00:03:13.846 CXX test/cpp_headers/rpc.o 00:03:13.846 CXX test/cpp_headers/scheduler.o 00:03:13.846 CXX test/cpp_headers/scsi.o 00:03:13.846 CXX test/cpp_headers/scsi_spec.o 00:03:13.846 CXX test/cpp_headers/sock.o 00:03:13.846 CXX test/cpp_headers/stdinc.o 00:03:13.846 CXX test/cpp_headers/string.o 00:03:13.846 CXX test/cpp_headers/thread.o 00:03:13.846 CC test/event/scheduler/scheduler.o 00:03:13.846 CXX test/cpp_headers/trace.o 00:03:13.846 CXX test/cpp_headers/trace_parser.o 00:03:13.846 CXX test/cpp_headers/tree.o 00:03:13.846 CXX test/cpp_headers/ublk.o 00:03:13.846 CXX test/cpp_headers/util.o 00:03:13.846 CXX test/cpp_headers/uuid.o 00:03:13.846 CXX test/cpp_headers/version.o 00:03:13.846 CXX test/cpp_headers/vfio_user_pci.o 00:03:13.846 CXX test/cpp_headers/vfio_user_spec.o 00:03:13.846 CXX test/cpp_headers/vhost.o 00:03:13.846 CXX test/cpp_headers/vmd.o 00:03:13.846 CXX test/cpp_headers/xor.o 00:03:13.846 LINK reactor 00:03:13.846 CXX test/cpp_headers/zipf.o 00:03:13.846 LINK event_perf 00:03:13.846 LINK spdk_nvme_perf 00:03:13.846 LINK mem_callbacks 00:03:13.846 CC app/vhost/vhost.o 00:03:13.846 LINK reactor_perf 00:03:13.846 LINK lsvmd 00:03:13.846 LINK vhost_fuzz 00:03:13.846 LINK led 00:03:14.109 LINK spdk_nvme_identify 00:03:14.109 LINK app_repeat 00:03:14.109 LINK spdk_top 00:03:14.109 LINK hello_sock 00:03:14.109 CC test/nvme/reset/reset.o 00:03:14.109 CC test/nvme/sgl/sgl.o 00:03:14.109 CC test/nvme/aer/aer.o 00:03:14.109 LINK thread 00:03:14.109 CC test/nvme/e2edp/nvme_dp.o 00:03:14.109 CC test/nvme/overhead/overhead.o 00:03:14.109 CC test/nvme/err_injection/err_injection.o 00:03:14.109 CC test/nvme/startup/startup.o 00:03:14.367 CC test/blobfs/mkfs/mkfs.o 00:03:14.367 CC test/accel/dif/dif.o 00:03:14.367 CC test/nvme/reserve/reserve.o 00:03:14.367 CC test/nvme/simple_copy/simple_copy.o 00:03:14.367 LINK scheduler 00:03:14.367 CC test/nvme/connect_stress/connect_stress.o 00:03:14.367 CC test/nvme/compliance/nvme_compliance.o 00:03:14.367 CC test/nvme/boot_partition/boot_partition.o 00:03:14.367 LINK vhost 00:03:14.367 CC test/nvme/fdp/fdp.o 00:03:14.367 CC test/nvme/fused_ordering/fused_ordering.o 00:03:14.367 CC test/lvol/esnap/esnap.o 00:03:14.367 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:14.367 CC test/nvme/cuse/cuse.o 00:03:14.367 LINK idxd_perf 00:03:14.626 LINK err_injection 00:03:14.626 LINK connect_stress 00:03:14.626 LINK startup 00:03:14.626 LINK mkfs 00:03:14.626 LINK doorbell_aers 00:03:14.626 LINK reserve 00:03:14.626 LINK simple_copy 00:03:14.626 LINK reset 00:03:14.626 LINK fused_ordering 00:03:14.626 LINK aer 00:03:14.626 LINK boot_partition 00:03:14.626 LINK overhead 00:03:14.626 CC examples/nvme/reconnect/reconnect.o 00:03:14.626 CC examples/nvme/abort/abort.o 00:03:14.626 CC examples/nvme/hotplug/hotplug.o 00:03:14.626 CC examples/nvme/hello_world/hello_world.o 00:03:14.626 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:14.626 CC examples/nvme/arbitration/arbitration.o 00:03:14.626 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:14.626 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:14.626 LINK sgl 00:03:14.626 LINK memory_ut 00:03:14.626 LINK nvme_dp 00:03:14.626 LINK nvme_compliance 00:03:14.883 CC examples/accel/perf/accel_perf.o 00:03:14.883 LINK fdp 00:03:14.883 CC examples/blob/hello_world/hello_blob.o 00:03:14.883 CC examples/blob/cli/blobcli.o 00:03:14.883 LINK 
pmr_persistence 00:03:14.883 LINK cmb_copy 00:03:14.883 LINK hotplug 00:03:14.883 LINK hello_world 00:03:15.141 LINK dif 00:03:15.141 LINK arbitration 00:03:15.141 LINK reconnect 00:03:15.141 LINK hello_blob 00:03:15.141 LINK abort 00:03:15.141 LINK nvme_manage 00:03:15.398 LINK accel_perf 00:03:15.398 LINK blobcli 00:03:15.398 CC test/bdev/bdevio/bdevio.o 00:03:15.655 LINK iscsi_fuzz 00:03:15.655 CC examples/bdev/hello_world/hello_bdev.o 00:03:15.655 CC examples/bdev/bdevperf/bdevperf.o 00:03:15.911 LINK cuse 00:03:15.911 LINK bdevio 00:03:15.911 LINK hello_bdev 00:03:16.842 LINK bdevperf 00:03:17.406 CC examples/nvmf/nvmf/nvmf.o 00:03:17.971 LINK nvmf 00:03:24.532 LINK esnap 00:03:24.532 00:03:24.532 real 1m1.424s 00:03:24.532 user 10m39.185s 00:03:24.532 sys 2m37.283s 00:03:24.532 09:52:08 make -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:03:24.532 09:52:08 make -- common/autotest_common.sh@10 -- $ set +x 00:03:24.532 ************************************ 00:03:24.532 END TEST make 00:03:24.532 ************************************ 00:03:24.532 09:52:08 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:24.532 09:52:08 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:24.532 09:52:08 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:24.532 09:52:08 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:24.532 09:52:08 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:03:24.532 09:52:08 -- pm/common@44 -- $ pid=219991 00:03:24.532 09:52:08 -- pm/common@50 -- $ kill -TERM 219991 00:03:24.532 09:52:08 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:24.532 09:52:08 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:03:24.532 09:52:08 -- pm/common@44 -- $ pid=219993 00:03:24.532 09:52:08 -- pm/common@50 -- $ kill -TERM 219993 00:03:24.532 09:52:08 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:24.532 09:52:08 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:03:24.532 09:52:08 -- pm/common@44 -- $ pid=219995 00:03:24.532 09:52:08 -- pm/common@50 -- $ kill -TERM 219995 00:03:24.532 09:52:08 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:24.532 09:52:08 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:03:24.532 09:52:08 -- pm/common@44 -- $ pid=220023 00:03:24.532 09:52:08 -- pm/common@50 -- $ sudo -E kill -TERM 220023 00:03:24.532 09:52:09 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:03:24.532 09:52:09 -- nvmf/common.sh@7 -- # uname -s 00:03:24.532 09:52:09 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:24.532 09:52:09 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:24.532 09:52:09 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:24.532 09:52:09 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:24.532 09:52:09 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:24.532 09:52:09 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:24.532 09:52:09 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:24.532 09:52:09 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:24.532 09:52:09 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:24.532 09:52:09 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:24.532 09:52:09 
-- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:03:24.532 09:52:09 -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:03:24.532 09:52:09 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:24.532 09:52:09 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:24.532 09:52:09 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:03:24.532 09:52:09 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:24.532 09:52:09 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:03:24.532 09:52:09 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:24.532 09:52:09 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:24.532 09:52:09 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:24.532 09:52:09 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:24.532 09:52:09 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:24.532 09:52:09 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:24.532 09:52:09 -- paths/export.sh@5 -- # export PATH 00:03:24.532 09:52:09 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:24.532 09:52:09 -- nvmf/common.sh@47 -- # : 0 00:03:24.532 09:52:09 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:03:24.532 09:52:09 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:03:24.532 09:52:09 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:24.532 09:52:09 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:24.532 09:52:09 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:24.532 09:52:09 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:03:24.532 09:52:09 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:03:24.532 09:52:09 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:03:24.532 09:52:09 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:24.532 09:52:09 -- spdk/autotest.sh@32 -- # uname -s 00:03:24.532 09:52:09 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:24.532 09:52:09 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:24.532 09:52:09 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:03:24.532 09:52:09 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:03:24.532 09:52:09 -- spdk/autotest.sh@40 -- # echo 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:03:24.532 09:52:09 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:24.532 09:52:09 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:24.532 09:52:09 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:24.532 09:52:09 -- spdk/autotest.sh@48 -- # udevadm_pid=278286 00:03:24.532 09:52:09 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:24.532 09:52:09 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:24.532 09:52:09 -- pm/common@17 -- # local monitor 00:03:24.532 09:52:09 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:24.532 09:52:09 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:24.532 09:52:09 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:24.532 09:52:09 -- pm/common@21 -- # date +%s 00:03:24.532 09:52:09 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:24.532 09:52:09 -- pm/common@21 -- # date +%s 00:03:24.532 09:52:09 -- pm/common@25 -- # sleep 1 00:03:24.532 09:52:09 -- pm/common@21 -- # date +%s 00:03:24.532 09:52:09 -- pm/common@21 -- # date +%s 00:03:24.532 09:52:09 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721893929 00:03:24.532 09:52:09 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721893929 00:03:24.532 09:52:09 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721893929 00:03:24.532 09:52:09 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721893929 00:03:24.532 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721893929_collect-vmstat.pm.log 00:03:24.532 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721893929_collect-cpu-load.pm.log 00:03:24.532 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721893929_collect-cpu-temp.pm.log 00:03:24.532 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721893929_collect-bmc-pm.bmc.pm.log 00:03:25.099 09:52:10 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:25.099 09:52:10 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:25.099 09:52:10 -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:25.099 09:52:10 -- common/autotest_common.sh@10 -- # set +x 00:03:25.099 09:52:10 -- spdk/autotest.sh@59 -- # create_test_list 00:03:25.099 09:52:10 -- common/autotest_common.sh@748 -- # xtrace_disable 00:03:25.099 09:52:10 -- common/autotest_common.sh@10 -- # set +x 00:03:25.099 09:52:10 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:03:25.099 09:52:10 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:25.099 09:52:10 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 
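The monitor start-up above and the earlier stop_monitor_resources teardown share one convention: each collect-* helper records its PID in a file under the power output directory, and teardown signals whatever those files name. A minimal sketch of that pattern in bash (the directory, monitor names, and the stand-in worker loop are illustrative; only the PID-file-plus-kill-TERM shape comes from the log):

    #!/usr/bin/env bash
    # Sketch of the PID-file convention the pm/common helpers above appear to use.
    # OUT and the worker loop are stand-ins; the real collectors live under
    # spdk/scripts/perf/pm/ and take the -d/-l/-p arguments shown in the log.
    OUT=/tmp/power
    mkdir -p "$OUT"

    start_monitor() {
        local name=$1
        # Stand-in worker; the real helper would be e.g. collect-cpu-load -d "$OUT" -l -p <pidfile>
        ( while :; do date >> "$OUT/$name.log"; sleep 5; done ) &
        echo $! > "$OUT/collect-$name.pid"
    }

    stop_monitors() {
        local pidfile
        for pidfile in "$OUT"/collect-*.pid; do
            [[ -e $pidfile ]] || continue
            kill -TERM "$(cat "$pidfile")" 2>/dev/null || true
        done
    }

    start_monitor cpu-load
    start_monitor vmstat
    sleep 10
    stop_monitors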
00:03:25.099 09:52:10 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:03:25.099 09:52:10 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:25.099 09:52:10 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:25.099 09:52:10 -- common/autotest_common.sh@1455 -- # uname 00:03:25.099 09:52:10 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:03:25.099 09:52:10 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:25.099 09:52:10 -- common/autotest_common.sh@1475 -- # uname 00:03:25.099 09:52:10 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:03:25.099 09:52:10 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:03:25.099 09:52:10 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:03:25.099 09:52:10 -- spdk/autotest.sh@72 -- # hash lcov 00:03:25.099 09:52:10 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:03:25.099 09:52:10 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:03:25.099 --rc lcov_branch_coverage=1 00:03:25.099 --rc lcov_function_coverage=1 00:03:25.099 --rc genhtml_branch_coverage=1 00:03:25.099 --rc genhtml_function_coverage=1 00:03:25.099 --rc genhtml_legend=1 00:03:25.099 --rc geninfo_all_blocks=1 00:03:25.099 ' 00:03:25.099 09:52:10 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:03:25.099 --rc lcov_branch_coverage=1 00:03:25.099 --rc lcov_function_coverage=1 00:03:25.099 --rc genhtml_branch_coverage=1 00:03:25.099 --rc genhtml_function_coverage=1 00:03:25.099 --rc genhtml_legend=1 00:03:25.099 --rc geninfo_all_blocks=1 00:03:25.099 ' 00:03:25.099 09:52:10 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:03:25.099 --rc lcov_branch_coverage=1 00:03:25.099 --rc lcov_function_coverage=1 00:03:25.099 --rc genhtml_branch_coverage=1 00:03:25.099 --rc genhtml_function_coverage=1 00:03:25.099 --rc genhtml_legend=1 00:03:25.099 --rc geninfo_all_blocks=1 00:03:25.099 --no-external' 00:03:25.099 09:52:10 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:03:25.099 --rc lcov_branch_coverage=1 00:03:25.099 --rc lcov_function_coverage=1 00:03:25.099 --rc genhtml_branch_coverage=1 00:03:25.099 --rc genhtml_function_coverage=1 00:03:25.099 --rc genhtml_legend=1 00:03:25.099 --rc geninfo_all_blocks=1 00:03:25.099 --no-external' 00:03:25.099 09:52:10 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:03:25.099 lcov: LCOV version 1.14 00:03:25.099 09:52:10 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:03:51.646 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:51.646 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:04:09.749 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:04:09.749 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:04:09.749 
[the same "path: no functions found" / geninfo WARNING pair repeats for every remaining header object under test/cpp_headers, from accel through version; the run is abridged here. A sketch of the capture step follows, after which the log resumes mid-run with the last few warnings.]
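For context, these warnings come from the coverage baseline step visible above: lcov is run with -c -i so it reads only the .gcno note files, before any test has executed, and header-only compile checks contain no instrumented functions to report. A reduced sketch of that invocation, assuming a gcc --coverage build tree (SRC and OUT are placeholders; the --rc options mirror the LCOV_OPTS exported above):

    #!/usr/bin/env bash
    # Capture an all-zero coverage baseline the way autotest.sh does above.
    # SRC/OUT are assumptions for this sketch; only the lcov flags come from the log.
    SRC=/path/to/coverage-instrumented/spdk    # assumed built with CFLAGS+=' --coverage'
    OUT=$SRC/../output
    LCOV_OPTS=(--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --no-external)
    mkdir -p "$OUT"
    # -c -i means "capture initial": only .gcno files are read, so objects without
    # instrumented functions (header-only compile checks) warn and contribute zero counts.
    lcov "${LCOV_OPTS[@]}" -q -c -i -t Baseline -d "$SRC" -o "$OUT/cov_base.info"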
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno 00:04:09.750 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:04:09.750 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno 00:04:09.750 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found 00:04:09.750 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno 00:04:09.750 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found 00:04:09.750 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno 00:04:09.750 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found 00:04:09.750 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno 00:04:09.750 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found 00:04:09.750 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno 00:04:15.014 09:52:59 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:04:15.014 09:52:59 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:15.014 09:52:59 -- common/autotest_common.sh@10 -- # set +x 00:04:15.014 09:52:59 -- spdk/autotest.sh@91 -- # rm -f 00:04:15.014 09:52:59 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:16.389 0000:82:00.0 (8086 0a54): Already using the nvme driver 00:04:16.389 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:04:16.389 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:04:16.389 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:04:16.389 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:04:16.389 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:04:16.389 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:04:16.389 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:04:16.389 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:04:16.389 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:04:16.389 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:04:16.389 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:04:16.389 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:04:16.389 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:04:16.389 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:04:16.389 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:04:16.389 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:04:16.647 09:53:01 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:04:16.647 09:53:01 -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:04:16.647 09:53:01 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:04:16.647 09:53:01 -- common/autotest_common.sh@1670 -- # local nvme bdf 00:04:16.647 09:53:01 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:16.647 09:53:01 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:04:16.647 09:53:01 -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:04:16.647 09:53:01 -- 
common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:16.647 09:53:01 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:16.647 09:53:01 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:04:16.647 09:53:01 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:04:16.647 09:53:01 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:04:16.647 09:53:01 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:04:16.647 09:53:01 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:04:16.647 09:53:01 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:16.647 No valid GPT data, bailing 00:04:16.647 09:53:01 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:16.647 09:53:01 -- scripts/common.sh@391 -- # pt= 00:04:16.647 09:53:01 -- scripts/common.sh@392 -- # return 1 00:04:16.647 09:53:01 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:16.647 1+0 records in 00:04:16.648 1+0 records out 00:04:16.648 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00309536 s, 339 MB/s 00:04:16.648 09:53:01 -- spdk/autotest.sh@118 -- # sync 00:04:16.648 09:53:01 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:16.648 09:53:01 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:16.648 09:53:01 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:19.180 09:53:04 -- spdk/autotest.sh@124 -- # uname -s 00:04:19.180 09:53:04 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:04:19.180 09:53:04 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:04:19.180 09:53:04 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:19.180 09:53:04 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:19.180 09:53:04 -- common/autotest_common.sh@10 -- # set +x 00:04:19.180 ************************************ 00:04:19.180 START TEST setup.sh 00:04:19.180 ************************************ 00:04:19.180 09:53:04 setup.sh -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:04:19.180 * Looking for test storage... 00:04:19.180 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:19.180 09:53:04 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:04:19.180 09:53:04 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:04:19.180 09:53:04 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:04:19.180 09:53:04 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:19.180 09:53:04 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:19.180 09:53:04 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:19.180 ************************************ 00:04:19.180 START TEST acl 00:04:19.180 ************************************ 00:04:19.180 09:53:04 setup.sh.acl -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:04:19.180 * Looking for test storage... 
00:04:19.180 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:19.180 09:53:04 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:04:19.180 09:53:04 setup.sh.acl -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:04:19.180 09:53:04 setup.sh.acl -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:04:19.180 09:53:04 setup.sh.acl -- common/autotest_common.sh@1670 -- # local nvme bdf 00:04:19.180 09:53:04 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:19.180 09:53:04 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:04:19.180 09:53:04 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:04:19.180 09:53:04 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:19.180 09:53:04 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:19.180 09:53:04 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:04:19.180 09:53:04 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:04:19.180 09:53:04 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:04:19.180 09:53:04 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:04:19.180 09:53:04 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:04:19.181 09:53:04 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:19.181 09:53:04 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:21.084 09:53:06 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:04:21.084 09:53:06 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:04:21.084 09:53:06 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:21.084 09:53:06 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:04:21.084 09:53:06 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:04:21.084 09:53:06 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:04:22.461 Hugepages 00:04:22.461 node hugesize free / total 00:04:22.461 09:53:07 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:04:22.461 09:53:07 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:22.461 09:53:07 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:22.461 09:53:07 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:04:22.461 09:53:07 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:22.461 09:53:07 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:22.461 09:53:07 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:04:22.461 09:53:07 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:22.461 09:53:07 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:22.461 00:04:22.461 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:22.461 09:53:07 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:04:22.461 09:53:07 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:04:22.461 09:53:07 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:22.461 09:53:07 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:04:22.461 09:53:07 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:22.461 09:53:07 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:22.461 09:53:07 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:22.461 09:53:07 setup.sh.acl -- setup/acl.sh@19 
-- # [[ 0000:00:04.1 == *:*:*.* ]] 00:04:22.461 09:53:07 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:22.461 09:53:07 setup.sh.acl -- setup/acl.sh@20 -- # continue [the status loop repeats the same xtrace triplet for each remaining ioatdma channel, 0000:00:04.2 through 0000:80:04.3; every non-NVMe device just hits 'continue'] 00:04:22.462 09:53:07
setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]] 00:04:22.461 09:53:07 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:22.461 09:53:07 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:22.461 09:53:07 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:22.461 09:53:07 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]] 00:04:22.461 09:53:07 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:22.461 09:53:07 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:22.461 09:53:07 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:22.461 09:53:07 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:04:22.461 09:53:07 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:22.461 09:53:07 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:22.462 09:53:07 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:22.462 09:53:07 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:04:22.462 09:53:07 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:04:22.462 09:53:07 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:04:22.462 09:53:07 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:22.462 09:53:07 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:82:00.0 == *:*:*.* ]] 00:04:22.462 09:53:07 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:04:22.462 09:53:07 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\8\2\:\0\0\.\0* ]] 00:04:22.462 09:53:07 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:04:22.462 09:53:07 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:04:22.462 09:53:07 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:22.462 09:53:07 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:04:22.462 09:53:07 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:04:22.462 09:53:07 setup.sh.acl -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:22.462 09:53:07 setup.sh.acl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:22.462 09:53:07 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:22.462 ************************************ 00:04:22.462 START TEST denied 00:04:22.462 ************************************ 00:04:22.462 09:53:07 setup.sh.acl.denied -- common/autotest_common.sh@1125 -- # denied 00:04:22.462 09:53:07 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:82:00.0' 00:04:22.462 09:53:07 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:04:22.462 09:53:07 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:82:00.0' 00:04:22.462 09:53:07 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:04:22.462 09:53:07 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:24.361 0000:82:00.0 (8086 0a54): Skipping denied controller at 0000:82:00.0 00:04:24.361 09:53:09 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:82:00.0 00:04:24.361 09:53:09 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:04:24.361 09:53:09 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:04:24.361 09:53:09 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:82:00.0 ]] 00:04:24.361 09:53:09 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:82:00.0/driver 00:04:24.361 09:53:09 setup.sh.acl.denied -- 
setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:04:24.361 09:53:09 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:04:24.361 09:53:09 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:04:24.361 09:53:09 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:24.361 09:53:09 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:26.919 00:04:26.919 real 0m4.212s 00:04:26.919 user 0m1.298s 00:04:26.919 sys 0m2.113s 00:04:26.919 09:53:11 setup.sh.acl.denied -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:26.919 09:53:11 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:04:26.919 ************************************ 00:04:26.919 END TEST denied 00:04:26.919 ************************************ 00:04:26.919 09:53:11 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:04:26.919 09:53:11 setup.sh.acl -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:26.919 09:53:11 setup.sh.acl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:26.919 09:53:11 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:26.919 ************************************ 00:04:26.919 START TEST allowed 00:04:26.919 ************************************ 00:04:26.919 09:53:11 setup.sh.acl.allowed -- common/autotest_common.sh@1125 -- # allowed 00:04:26.919 09:53:11 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:82:00.0 00:04:26.919 09:53:11 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:04:26.919 09:53:11 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:82:00.0 .*: nvme -> .*' 00:04:26.919 09:53:11 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:04:26.919 09:53:11 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:29.448 0000:82:00.0 (8086 0a54): nvme -> vfio-pci 00:04:29.448 09:53:14 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:04:29.448 09:53:14 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:04:29.448 09:53:14 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:04:29.448 09:53:14 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:29.448 09:53:14 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:31.351 00:04:31.351 real 0m4.209s 00:04:31.351 user 0m1.073s 00:04:31.351 sys 0m1.992s 00:04:31.351 09:53:16 setup.sh.acl.allowed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:31.351 09:53:16 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:04:31.351 ************************************ 00:04:31.351 END TEST allowed 00:04:31.351 ************************************ 00:04:31.351 00:04:31.351 real 0m11.961s 00:04:31.351 user 0m3.701s 00:04:31.351 sys 0m6.414s 00:04:31.351 09:53:16 setup.sh.acl -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:31.351 09:53:16 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:31.351 ************************************ 00:04:31.351 END TEST acl 00:04:31.351 ************************************ 00:04:31.351 09:53:16 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:04:31.351 09:53:16 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:31.351 09:53:16 setup.sh -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:04:31.351 09:53:16 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:31.351 ************************************ 00:04:31.351 START TEST hugepages 00:04:31.351 ************************************ 00:04:31.351 09:53:16 setup.sh.hugepages -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:04:31.351 * Looking for test storage... 00:04:31.351 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:31.351 09:53:16 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:04:31.351 09:53:16 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:04:31.351 09:53:16 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:04:31.351 09:53:16 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:04:31.351 09:53:16 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:04:31.351 09:53:16 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:04:31.351 09:53:16 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:04:31.351 09:53:16 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:04:31.351 09:53:16 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:04:31.351 09:53:16 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:04:31.351 09:53:16 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:31.351 09:53:16 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:31.351 09:53:16 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:31.351 09:53:16 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:04:31.351 09:53:16 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:31.351 09:53:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:31.351 09:53:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:31.351 09:53:16 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026656 kB' 'MemFree: 27145440 kB' 'MemAvailable: 30727068 kB' 'Buffers: 2704 kB' 'Cached: 10198788 kB' 'SwapCached: 0 kB' 'Active: 7205992 kB' 'Inactive: 3506828 kB' 'Active(anon): 6811168 kB' 'Inactive(anon): 0 kB' 'Active(file): 394824 kB' 'Inactive(file): 3506828 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 514600 kB' 'Mapped: 181096 kB' 'Shmem: 6299840 kB' 'KReclaimable: 184028 kB' 'Slab: 539132 kB' 'SReclaimable: 184028 kB' 'SUnreclaim: 355104 kB' 'KernelStack: 12576 kB' 'PageTables: 8072 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 28304780 kB' 'Committed_AS: 7931396 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195648 kB' 'VmallocChunk: 0 kB' 'Percpu: 34944 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 1752668 kB' 'DirectMap2M: 12847104 kB' 'DirectMap1G: 37748736 kB' 00:04:31.351 09:53:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 
00:04:31.351 09:53:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue [get_meminfo walks every remaining /proc/meminfo field the same way (MemFree, MemAvailable, Buffers, Cached, and so on), matching only the Hugepagesize line; the identical IFS/read/continue triplets are abridged]
00:04:31.352 09:53:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:31.352 09:53:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:31.352 09:53:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:31.352 09:53:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:31.352 09:53:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:31.352 09:53:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:31.352 09:53:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:31.352 09:53:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:31.352 09:53:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:31.352 09:53:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:31.352 09:53:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:31.352 09:53:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:31.352 09:53:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:31.352 09:53:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:31.352 09:53:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:31.352 09:53:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:31.352 09:53:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:31.352 09:53:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:31.352 09:53:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:31.352 09:53:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:31.352 09:53:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:31.352 09:53:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:31.352 09:53:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:31.353 09:53:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:31.353 09:53:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:31.353 09:53:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:31.353 09:53:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:31.353 09:53:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:31.353 09:53:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:31.353 09:53:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:31.353 09:53:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:31.353 09:53:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:31.353 09:53:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:31.353 09:53:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:31.353 09:53:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:31.353 09:53:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:31.353 09:53:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:31.353 09:53:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:31.353 09:53:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:31.353 09:53:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue 
00:04:31.353 09:53:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:31.353 09:53:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:31.353 09:53:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:31.353 09:53:16 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:04:31.353 09:53:16 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:31.353 09:53:16 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:31.353 09:53:16 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:04:31.353 09:53:16 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:04:31.353 09:53:16 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:04:31.353 09:53:16 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:04:31.353 09:53:16 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:04:31.353 09:53:16 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:04:31.353 09:53:16 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:04:31.353 09:53:16 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:04:31.353 09:53:16 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:04:31.353 09:53:16 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:04:31.353 09:53:16 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:04:31.353 09:53:16 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:04:31.353 09:53:16 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:31.353 09:53:16 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:04:31.353 09:53:16 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:31.353 09:53:16 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:31.353 09:53:16 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:31.353 09:53:16 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:31.353 09:53:16 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:04:31.353 09:53:16 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:04:31.353 09:53:16 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:31.353 09:53:16 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:31.353 09:53:16 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:31.353 09:53:16 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:31.353 09:53:16 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:31.353 09:53:16 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:31.353 09:53:16 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:31.353 09:53:16 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:31.353 09:53:16 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:31.353 09:53:16 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:31.353 09:53:16 
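Note: the trace above is setup/common.sh's get_meminfo pattern: mapfile the meminfo source (a per-node file when a node is given, stripping the "Node N " prefix), then scan it with IFS=': ' read -r var val _ until the requested field matches, and echo its value. A minimal self-contained sketch of that pattern, under our own names rather than the verbatim SPDK helper:

  #!/usr/bin/env bash
  shopt -s extglob # needed for the +([0-9]) pattern that strips per-node prefixes

  get_meminfo() {
      # Usage: get_meminfo <field> [numa-node]; prints the field's value, returns 0 on a match.
      local get=$1 node=${2:-} var val _ line
      local mem_f=/proc/meminfo
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      local -a mem
      mapfile -t mem <"$mem_f"
      mem=("${mem[@]#Node +([0-9]) }") # per-node files prefix every line with "Node N "
      for line in "${mem[@]}"; do
          IFS=': ' read -r var val _ <<<"$line"
          # val is in kB for most fields; HugePages_* are bare page counts
          [[ $var == "$get" ]] && { echo "$val"; return 0; }
      done
      return 1
  }

  get_meminfo Hugepagesize # prints 2048 on this builder, matching the '# echo 2048' above
  # clear_hp, traced right after, zeroes every per-node pool via the same sysfs layout:
  # for hp in /sys/devices/system/node/node*/hugepages/hugepages-*; do echo 0 > "$hp/nr_hugepages"; done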
00:04:31.353 09:53:16 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes
00:04:31.353 09:53:16 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes
00:04:31.353 09:53:16 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup
00:04:31.353 09:53:16 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:04:31.353 09:53:16 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable
00:04:31.353 09:53:16 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:04:31.353 ************************************
00:04:31.353 START TEST default_setup
00:04:31.353 ************************************
00:04:31.353 09:53:16 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1125 -- # default_setup
00:04:31.353 09:53:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0
00:04:31.353 09:53:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152
00:04:31.353 09:53:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:04:31.353 09:53:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift
00:04:31.353 09:53:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0')
00:04:31.353 09:53:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids
00:04:31.353 09:53:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:31.353 09:53:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:04:31.353 09:53:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:04:31.353 09:53:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:04:31.353 09:53:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes
00:04:31.353 09:53:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:04:31.353 09:53:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:04:31.353 09:53:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:31.353 09:53:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:31.353 09:53:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:04:31.353 09:53:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:04:31.353 09:53:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:04:31.353 09:53:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0
00:04:31.353 09:53:16 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output
00:04:31.353 09:53:16 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]]
00:04:31.353 09:53:16 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:04:32.731 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci
00:04:32.731 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci
00:04:32.731 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci
00:04:32.731 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci
00:04:32.731 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci
00:04:32.731 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci
00:04:32.731 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci
00:04:32.731 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci
00:04:32.731 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci
00:04:32.731 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci
00:04:32.731 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci
00:04:32.731 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci
00:04:32.731 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci
00:04:32.731 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci
00:04:32.731 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci
00:04:32.990 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci
00:04:33.934 0000:82:00.0 (8086 0a54): nvme -> vfio-pci
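Note: the get_test_nr_hugepages trace above is plain arithmetic: 2097152 kB of requested hugepage memory divided by the 2048 kB default page size gives the nr_hugepages=1024 seen in the trace, all assigned to the single requested node. A hedged sketch of that computation (our own condensation, not the verbatim SPDK code; variable names follow the trace, and the kB units are inferred from Hugepagesize):

  default_hugepages=2048 # kB, the Hugepagesize value read back from /proc/meminfo
  size=2097152           # kB requested by default_setup
  user_nodes=(0)         # the one NUMA node passed in

  nr_hugepages=$((size / default_hugepages)) # 2097152 / 2048 = 1024 pages
  declare -a nodes_test
  for node in "${user_nodes[@]}"; do
      nodes_test[node]=$nr_hugepages # node 0 carries all 1024 pages
  done
  echo "${nodes_test[0]}" # -> 1024, matching 'nodes_test[_no_nodes]=1024' above

The verify_nr_hugepages pass that follows snapshots /proc/meminfo and picks out AnonHugePages, HugePages_Surp, and HugePages_Rsvd; the snapshots already show HugePages_Total: 1024 and HugePages_Free: 1024, i.e. the pool landed exactly as sized.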
195936 kB' 'VmallocChunk: 0 kB' 'Percpu: 34944 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1752668 kB' 'DirectMap2M: 12847104 kB' 'DirectMap1G: 37748736 kB' 00:04:33.934 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.934 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.934 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.934 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.934 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.934 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.934 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.934 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.934 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.934 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.934 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.934 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.934 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.934 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.934 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.934 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.934 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.934 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.934 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.934 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.934 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.934 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.934 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.934 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.934 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.934 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.934 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.934 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.935 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.935 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.935 
09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.935 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.935 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.935 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.935 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.935 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.935 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.935 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.935 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.935 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.935 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.935 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.935 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.935 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.935 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.935 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.935 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.935 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.935 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.935 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.935 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.935 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.935 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.935 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.935 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.935 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.935 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.935 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.935 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.935 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.935 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.935 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.935 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.935 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.935 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # 
[[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.935 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.935 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.935 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.935 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.935 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.935 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.935 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.935 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.935 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.935 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.935 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.935 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.935 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.935 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.935 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.935 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.935 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.935 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.935 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.935 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.935 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.935 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.935 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.935 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.935 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.935 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.935 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.935 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.935 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.935 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.935 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.935 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.935 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.935 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.935 09:53:18 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:04:33.935 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.935 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.935 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.935 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.935 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.935 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.935 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.935 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.935 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.935 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.935 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.935 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.935 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.935 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.935 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.935 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.935 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.935 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.935 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.935 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.935 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.935 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.935 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.935 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.935 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.935 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.935 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.935 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.935 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.935 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.935 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.935 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.935 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.935 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # 
continue 00:04:33.935 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.935 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.935 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.935 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.935 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.935 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.935 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.935 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.935 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.935 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.935 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.935 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.935 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.935 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.935 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.935 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.936 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.936 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.936 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.936 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.936 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.936 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.936 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.936 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.936 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.936 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.936 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:33.936 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:33.936 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:33.936 09:53:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:04:33.936 09:53:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:33.936 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:33.936 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:33.936 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:33.936 09:53:18 setup.sh.hugepages.default_setup -- 
setup/common.sh@20 -- # local mem_f mem 00:04:33.936 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:33.936 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:33.936 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:33.936 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:33.936 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:33.936 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.936 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.936 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026656 kB' 'MemFree: 29164284 kB' 'MemAvailable: 32745904 kB' 'Buffers: 2704 kB' 'Cached: 10198884 kB' 'SwapCached: 0 kB' 'Active: 7225228 kB' 'Inactive: 3506828 kB' 'Active(anon): 6830404 kB' 'Inactive(anon): 0 kB' 'Active(file): 394824 kB' 'Inactive(file): 3506828 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 532912 kB' 'Mapped: 181248 kB' 'Shmem: 6299936 kB' 'KReclaimable: 184012 kB' 'Slab: 538684 kB' 'SReclaimable: 184012 kB' 'SUnreclaim: 354672 kB' 'KernelStack: 12848 kB' 'PageTables: 9196 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353356 kB' 'Committed_AS: 7952492 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195920 kB' 'VmallocChunk: 0 kB' 'Percpu: 34944 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1752668 kB' 'DirectMap2M: 12847104 kB' 'DirectMap1G: 37748736 kB' 00:04:33.936 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.936 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.936 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.936 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.936 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.936 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.936 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.936 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.936 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.936 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.936 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.936 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.936 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.936 09:53:18 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # continue 00:04:33.936 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.936 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.936 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.936 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.936 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.936 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.936 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.936 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.936 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.936 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.936 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.936 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.936 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.936 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.936 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.936 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.936 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.936 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.936 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.936 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.936 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.936 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.936 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.936 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.936 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.936 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.936 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.936 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.936 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.936 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.936 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.936 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.936 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.936 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.936 09:53:18 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.936 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.936 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.936 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.936 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.936 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.936 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.936 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.936 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.936 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.936 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.936 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.936 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.936 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.936 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.936 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.936 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.936 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.936 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.936 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.936 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.936 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.936 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.936 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.936 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.936 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.936 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.936 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.936 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.936 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.936 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.936 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.936 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.937 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.937 09:53:18 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:04:33.937 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.937 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.937 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.937 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.937 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.937 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.937 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.937 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.937 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.937 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.937 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.937 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.937 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.937 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.937 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.937 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.937 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.937 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.937 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.937 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.937 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.937 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.937 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.937 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.937 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.937 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.937 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.937 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.937 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.937 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.937 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.937 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.937 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.937 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.937 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue (xtrace condensed: one [[ <key> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] test and continue pair per remaining /proc/meminfo key, NFS_Unstable through HugePages_Rsvd)
00:04:33.938 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:33.938 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:04:33.938 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:04:33.938 09:53:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0
00:04:33.938 09:53:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:33.938 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:33.938 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:04:33.938 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:04:33.938 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:04:33.938 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:33.938 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:33.938 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:33.938 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:04:33.938 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:33.938 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:04:33.938 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
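For readers of this trace: get_meminfo in setup/common.sh snapshots /proc/meminfo, or the per-node copy under /sys/devices/system/node, then walks it one "key: value" pair at a time until the requested key matches, which is what produces the long [[ ... ]] / continue runs above. A minimal standalone sketch of that flow, reconstructed from the trace rather than copied from the SPDK source, so the function name and exact layout are illustrative:

    #!/usr/bin/env bash
    # Sketch of a get_meminfo-style lookup; assumptions noted inline.
    shopt -s extglob   # needed for the +([0-9]) pattern below

    get_meminfo_sketch() {
        local get=$1 node=${2:-}
        local var val _ mem_f line
        local -a mem
        mem_f=/proc/meminfo
        # Per-node queries read the sysfs copy instead (see the node=0 call below).
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")   # per-node lines start with "Node N "
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue   # the condensed test/continue runs in the trace
            echo "$val"
            return 0
        done
        return 1
    }

    get_meminfo_sketch HugePages_Total     # prints 1024 on the machine in this log
    get_meminfo_sketch HugePages_Surp 0    # per-node variant, node 0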
00:04:33.938 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026656 kB' 'MemFree: 29166348 kB' 'MemAvailable: 32747968 kB' 'Buffers: 2704 kB' 'Cached: 10198888 kB' 'SwapCached: 0 kB' 'Active: 7224576 kB' 'Inactive: 3506828 kB' 'Active(anon): 6829752 kB' 'Inactive(anon): 0 kB' 'Active(file): 394824 kB' 'Inactive(file): 3506828 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 533064 kB' 'Mapped: 181156 kB' 'Shmem: 6299940 kB' 'KReclaimable: 184012 kB' 'Slab: 538768 kB' 'SReclaimable: 184012 kB' 'SUnreclaim: 354756 kB' 'KernelStack: 12752 kB' 'PageTables: 8836 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353356 kB' 'Committed_AS: 7951516 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195744 kB' 'VmallocChunk: 0 kB' 'Percpu: 34944 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1752668 kB' 'DirectMap2M: 12847104 kB' 'DirectMap1G: 37748736 kB'
00:04:33.938 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ <key> == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] / continue (xtrace condensed: one test-and-continue pair per meminfo key, MemTotal through HugePages_Free)
00:04:33.940 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:33.940 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:04:33.940 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:04:33.940 09:53:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0
00:04:33.940 09:53:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:04:33.940 nr_hugepages=1024
00:04:33.940 09:53:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:33.940 resv_hugepages=0
00:04:33.940 09:53:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:33.940 surplus_hugepages=0
00:04:33.940 09:53:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:33.940 anon_hugepages=0
00:04:33.940 09:53:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:33.940 09:53:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:04:33.940 09:53:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:33.940 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:33.940 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:04:33.940 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:04:33.940 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:04:33.940 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:33.940 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:33.940 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:33.940 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:04:33.940 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:33.940 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:04:33.940 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:04:33.940 09:53:18 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026656 kB' 'MemFree: 29167140 kB' 'MemAvailable: 32748760 kB' 'Buffers: 2704 kB' 'Cached: 10198924 kB' 'SwapCached: 0 kB' 'Active: 7224156 kB' 'Inactive: 3506828 kB' 'Active(anon): 6829332 kB' 'Inactive(anon): 0 kB' 'Active(file): 394824 kB' 'Inactive(file): 3506828 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 532592 kB' 'Mapped: 181156 kB' 'Shmem: 6299976 kB' 'KReclaimable: 184012 kB' 'Slab: 538768 kB' 'SReclaimable: 184012 kB' 'SUnreclaim: 354756 kB' 'KernelStack: 12512 kB' 'PageTables: 7996 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353356 kB' 'Committed_AS: 7951540 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195744 kB' 'VmallocChunk: 0 kB' 'Percpu: 34944 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1752668 kB' 'DirectMap2M: 12847104 kB' 'DirectMap1G: 37748736 kB'
00:04:33.941 09:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ <key> == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] / continue (xtrace condensed: one test-and-continue pair per meminfo key, MemTotal through Unaccepted)
00:04:33.942 09:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:33.942 09:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024
00:04:33.942 09:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
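The hugepages.sh steps above combine three such lookups into one sanity check: the kernel's HugePages_Total must equal the requested page count plus reserved and surplus pages. A hedged sketch of that arithmetic, reusing the illustrative get_meminfo_sketch helper from the earlier sketch (variable names mirror the trace):

    nr_hugepages=1024
    surp=$(get_meminfo_sketch HugePages_Surp)     # 0 in this log
    resv=$(get_meminfo_sketch HugePages_Rsvd)     # 0 in this log
    total=$(get_meminfo_sketch HugePages_Total)   # 1024 in this log
    if (( total == nr_hugepages + surp + resv )); then
        echo "hugepage accounting consistent: $total pages"
    else
        echo "hugepage mismatch: total=$total, expected=$((nr_hugepages + surp + resv))" >&2
    fi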
00:04:33.942 09:53:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:33.942 09:53:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes
00:04:33.942 09:53:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node
00:04:33.942 09:53:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:33.942 09:53:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:04:33.942 09:53:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:33.942 09:53:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:04:33.942 09:53:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=2
00:04:33.942 09:53:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:33.942 09:53:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:33.942 09:53:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:33.942 09:53:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:33.942 09:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:33.942 09:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0
00:04:33.942 09:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:04:33.942 09:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:04:33.942 09:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:33.942 09:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:33.942 09:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:33.942 09:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:04:33.942 09:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:33.942 09:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:04:33.942 09:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:04:33.942 09:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 24619412 kB' 'MemFree: 12372988 kB' 'MemUsed: 12246424 kB' 'SwapCached: 0 kB' 'Active: 5845584 kB' 'Inactive: 3329964 kB' 'Active(anon): 5586696 kB' 'Inactive(anon): 0 kB' 'Active(file): 258888 kB' 'Inactive(file): 3329964 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8872008 kB' 'Mapped: 80244 kB' 'AnonPages: 306756 kB' 'Shmem: 5283156 kB' 'KernelStack: 6872 kB' 'PageTables: 4180 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 118432 kB' 'Slab: 297032 kB' 'SReclaimable: 118432 kB' 'SUnreclaim: 178600 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:04:33.942 09:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ <key> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] / continue (xtrace condensed: one test-and-continue pair per node0 meminfo key; the raw trace resumes below)
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.943 09:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.943 09:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.943 09:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:33.943 09:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:33.943 09:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:33.943 09:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:33.943 09:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:33.943 09:53:19 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:33.943 09:53:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:33.943 09:53:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:33.943 09:53:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:33.943 09:53:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:33.943 09:53:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:33.943 node0=1024 expecting 1024 00:04:33.943 09:53:19 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:33.943 00:04:33.943 real 0m2.724s 00:04:33.943 user 0m0.817s 00:04:33.943 sys 0m1.069s 00:04:33.943 09:53:19 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:33.943 09:53:19 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:04:33.943 ************************************ 00:04:33.943 END TEST default_setup 00:04:33.943 ************************************ 00:04:33.943 09:53:19 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:04:33.943 09:53:19 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:33.943 09:53:19 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:33.943 09:53:19 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:34.203 ************************************ 00:04:34.203 START TEST per_node_1G_alloc 00:04:34.203 ************************************ 00:04:34.203 09:53:19 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1125 -- # per_node_1G_alloc 00:04:34.203 09:53:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:04:34.203 09:53:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:04:34.203 09:53:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:04:34.203 09:53:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 3 > 1 )) 00:04:34.203 09:53:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:04:34.203 09:53:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1') 00:04:34.203 09:53:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:04:34.203 09:53:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( 
size >= default_hugepages )) 00:04:34.203 09:53:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:34.203 09:53:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1 00:04:34.203 09:53:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0' '1') 00:04:34.203 09:53:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:34.203 09:53:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:34.203 09:53:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:34.203 09:53:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:34.203 09:53:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:34.203 09:53:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 2 > 0 )) 00:04:34.203 09:53:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:34.203 09:53:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:04:34.203 09:53:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:34.203 09:53:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:04:34.203 09:53:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:04:34.203 09:53:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:04:34.203 09:53:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1 00:04:34.203 09:53:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:04:34.203 09:53:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:34.203 09:53:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:35.588 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:35.588 0000:82:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:35.588 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:35.588 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:35.588 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:35.588 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:35.588 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:35.588 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:35.588 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:35.588 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:35.588 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:35.588 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:35.588 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:35.588 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:35.588 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:35.588 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:35.588 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:35.588 09:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=1024 00:04:35.588 09:53:20 
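
[editor's note: the get_test_nr_hugepages records above amount to simple arithmetic. A minimal bash sketch, assuming the division implied by '(( size >= default_hugepages ))' and the 2048 kB Hugepagesize reported in the meminfo snapshots below; every name other than size, nr_hugepages, node_ids, nodes_test, NRHUGE and HUGENODE is illustrative:]

# Sketch: how 'get_test_nr_hugepages 1048576 0 1' yields 512 pages on each of 2 nodes.
size=1048576              # requested test size in kB (1 GiB)
default_hugepages=2048    # assumed: default hugepage size in kB (Hugepagesize: 2048 kB)
(( size >= default_hugepages )) || exit 1
nr_hugepages=$(( size / default_hugepages ))    # 1048576 / 2048 = 512
node_ids=(0 1)            # the two node arguments that follow the size
declare -a nodes_test
for node in "${node_ids[@]}"; do
  nodes_test[node]=$nr_hugepages                # 512 hugepages requested per node
done
echo "NRHUGE=$nr_hugepages HUGENODE=$(IFS=,; echo "${node_ids[*]}")"   # NRHUGE=512 HUGENODE=0,1
# the test then invokes scripts/setup.sh with NRHUGE/HUGENODE set, as traced above

[editor's note continued: with two nodes that is 1024 pages in total, which is why the trace resets nr_hugepages=1024 just before verify_nr_hugepages]
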
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:04:35.588 09:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:04:35.588 09:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:35.588 09:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:35.588 09:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:35.588 09:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:35.588 09:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:35.588 09:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:35.588 09:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:35.588 09:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:35.588 09:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:35.588 09:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:35.588 09:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:35.588 09:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:35.588 09:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:35.588 09:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:35.588 09:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:35.588 09:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:35.588 09:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.588 09:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.588 09:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026656 kB' 'MemFree: 29198372 kB' 'MemAvailable: 32780024 kB' 'Buffers: 2704 kB' 'Cached: 10198996 kB' 'SwapCached: 0 kB' 'Active: 7226164 kB' 'Inactive: 3506828 kB' 'Active(anon): 6831340 kB' 'Inactive(anon): 0 kB' 'Active(file): 394824 kB' 'Inactive(file): 3506828 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 534496 kB' 'Mapped: 181604 kB' 'Shmem: 6300048 kB' 'KReclaimable: 184076 kB' 'Slab: 539208 kB' 'SReclaimable: 184076 kB' 'SUnreclaim: 355132 kB' 'KernelStack: 12512 kB' 'PageTables: 8316 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353356 kB' 'Committed_AS: 7954260 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195904 kB' 'VmallocChunk: 0 kB' 'Percpu: 34944 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1752668 kB' 'DirectMap2M: 12847104 kB' 'DirectMap1G: 37748736 kB' 00:04:35.588 09:53:20 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
[editor's note: repetitive per-key trace elided -- the same IFS=': ' / read / compare / continue cycle repeats for every /proc/meminfo key from MemTotal through HardwareCorrupted while get_meminfo looks for AnonHugePages]
00:04:35.590 09:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:35.590 09:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:04:35.590 09:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:04:35.590 09:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0
00:04:35.590 09:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:35.590 09:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:35.590 09:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
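
[editor's note: every elided scan in this log is one pass of the get_meminfo helper in setup/common.sh. Reconstructed from the trace records (common.sh@17-@33: mem_f=/proc/meminfo, the node meminfo existence check, mapfile, the 'Node +([0-9]) ' prefix strip, the printf feeding IFS=': ' read, and the echo/return on a key match), a sketch of the mechanism -- not the script verbatim, and the per-node selection logic is inferred:]

shopt -s extglob                          # for the 'Node +([0-9]) ' pattern strip
get_meminfo() {                           # usage: get_meminfo <key> [<node>]
  local get=$1 node=$2
  local var val _
  local mem_f mem
  mem_f=/proc/meminfo                     # system-wide snapshot by default
  if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
    mem_f=/sys/devices/system/node/node$node/meminfo   # per-node snapshot
  fi
  mapfile -t mem < "$mem_f"
  mem=("${mem[@]#Node +([0-9]) }")        # per-node lines start with 'Node N '; drop it
  while IFS=': ' read -r var val _; do    # the per-key scan that fills the trace
    [[ $var == "$get" ]] || continue
    echo "$val"                           # value only, e.g. '0' for HugePages_Surp
    return 0
  done < <(printf '%s\n' "${mem[@]}")
  return 1                                # key not found
}

[editor's note continued: each 'continue' record in the trace is one non-matching key; get_meminfo HugePages_Surp walks the whole file and prints 0 here]
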
00:04:35.590 09:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:35.590 09:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:35.590 09:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:35.590 09:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:35.590 09:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:35.590 09:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:35.590 09:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:35.590 09:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.590 09:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.590 09:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026656 kB' 'MemFree: 29199040 kB' 'MemAvailable: 32780676 kB' 'Buffers: 2704 kB' 'Cached: 10199000 kB' 'SwapCached: 0 kB' 'Active: 7228992 kB' 'Inactive: 3506828 kB' 'Active(anon): 6834168 kB' 'Inactive(anon): 0 kB' 'Active(file): 394824 kB' 'Inactive(file): 3506828 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 537376 kB' 'Mapped: 181604 kB' 'Shmem: 6300052 kB' 'KReclaimable: 184044 kB' 'Slab: 539164 kB' 'SReclaimable: 184044 kB' 'SUnreclaim: 355120 kB' 'KernelStack: 12528 kB' 'PageTables: 8340 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353356 kB' 'Committed_AS: 7956788 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195856 kB' 'VmallocChunk: 0 kB' 'Percpu: 34944 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1752668 kB' 'DirectMap2M: 12847104 kB' 'DirectMap1G: 37748736 kB' 00:04:35.590 09:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.590 09:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.590 09:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.590 09:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.590 09:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.590 09:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.590 09:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.590 09:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.590 09:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.590 09:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.590 09:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.590 09:53:20 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
[editor's note: repetitive per-key trace elided -- get_meminfo again walks /proc/meminfo from Buffers through HugePages_Rsvd looking for HugePages_Surp, one IFS=': ' / read / compare / continue cycle per key]
00:04:35.592 09:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:35.592 09:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:04:35.592 09:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:04:35.592 09:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0
00:04:35.592 09:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:35.592 09:53:20 setup.sh.hugepages.per_node_1G_alloc
00:04:35.592 09:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:35.592 09:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:04:35.592 09:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:04:35.592 09:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:35.592 09:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:35.592 09:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:35.592 09:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:35.592 09:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:35.592 09:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:35.592 09:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:35.592 09:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:35.592 09:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026656 kB' 'MemFree: 29204480 kB' 'MemAvailable: 32786116 kB' 'Buffers: 2704 kB' 'Cached: 10199016 kB' 'SwapCached: 0 kB' 'Active: 7224996 kB' 'Inactive: 3506828 kB' 'Active(anon): 6830172 kB' 'Inactive(anon): 0 kB' 'Active(file): 394824 kB' 'Inactive(file): 3506828 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 533424 kB' 'Mapped: 182004 kB' 'Shmem: 6300068 kB' 'KReclaimable: 184044 kB' 'Slab: 539228 kB' 'SReclaimable: 184044 kB' 'SUnreclaim: 355184 kB' 'KernelStack: 12544 kB' 'PageTables: 8436 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353356 kB' 'Committed_AS: 7952848 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195856 kB' 'VmallocChunk: 0 kB' 'Percpu: 34944 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1752668 kB' 'DirectMap2M: 12847104 kB' 'DirectMap1G: 37748736 kB'
[xtrace condensed: every key from MemTotal through HugePages_Free is tested against \H\u\g\e\P\a\g\e\s\_\R\s\v\d and skipped via "setup/common.sh@32 -- # continue"]
00:04:35.594 09:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:35.594 09:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:04:35.594 09:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:04:35.594 09:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0
00:04:35.594 09:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:04:35.594 nr_hugepages=1024
00:04:35.594 09:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:35.594 resv_hugepages=0
00:04:35.594 09:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:35.594 surplus_hugepages=0
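The lookup traced above walks /proc/meminfo (or, with a node argument, the per-node sysfs copy) one "key: value" pair at a time until the requested key matches, then echoes the value. A minimal, hedged sketch of that mechanism follows; the function name get_meminfo_sketch and its exact structure are illustrative, not the verbatim setup/common.sh helper:

shopt -s extglob                        # the traced script relies on extglob for +([0-9])

# Sketch of a get_meminfo-style lookup (illustrative, not the verbatim helper).
get_meminfo_sketch() {
    local get=$1 node=${2:-}            # key to look up, optional NUMA node
    local mem_f=/proc/meminfo line var val _
    # With a node argument, read the per-node counters from sysfs instead,
    # as the trace later does for /sys/devices/system/node/node0/meminfo.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    while IFS= read -r line; do
        line=${line#Node +([0-9]) }     # per-node lines carry a "Node N " prefix
        IFS=': ' read -r var val _ <<<"$line"
        # A non-matching key is simply skipped -- the "continue" lines above.
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done <"$mem_f"
    return 1
}

get_meminfo_sketch HugePages_Rsvd       # prints 0 on this box, per the snapshot above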
00:04:35.594 09:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:35.594 anon_hugepages=0
00:04:35.594 09:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:35.594 09:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:04:35.594 09:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:35.594 09:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:35.594 09:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:04:35.594 09:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:04:35.594 09:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:35.594 09:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:35.595 09:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:35.595 09:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:35.595 09:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:35.595 09:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:35.595 09:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:35.595 09:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:35.595 09:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026656 kB' 'MemFree: 29203904 kB' 'MemAvailable: 32785540 kB' 'Buffers: 2704 kB' 'Cached: 10199040 kB' 'SwapCached: 0 kB' 'Active: 7230148 kB' 'Inactive: 3506828 kB' 'Active(anon): 6835324 kB' 'Inactive(anon): 0 kB' 'Active(file): 394824 kB' 'Inactive(file): 3506828 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 538592 kB' 'Mapped: 181956 kB' 'Shmem: 6300092 kB' 'KReclaimable: 184044 kB' 'Slab: 539220 kB' 'SReclaimable: 184044 kB' 'SUnreclaim: 355176 kB' 'KernelStack: 12528 kB' 'PageTables: 8384 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353356 kB' 'Committed_AS: 7957900 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195844 kB' 'VmallocChunk: 0 kB' 'Percpu: 34944 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1752668 kB' 'DirectMap2M: 12847104 kB' 'DirectMap1G: 37748736 kB'
[xtrace condensed: every key from MemTotal through Unaccepted is tested against \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l and skipped via "setup/common.sh@32 -- # continue"]
00:04:35.597 09:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:35.597 09:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 1024
00:04:35.597 09:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:04:35.597 09:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:35.597 09:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:04:35.597 09:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node
00:04:35.597 09:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:35.597 09:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:04:35.597 09:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:35.597 09:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:04:35.597 09:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:04:35.597 09:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
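At this point the trace has performed two assertions: the global accounting test at setup/hugepages.sh@107-110 (HugePages_Total must equal the requested count plus surplus and reserved pages; 1024 == 1024 + 0 + 0 in this run) and the per-node split recorded by get_nodes (512 pages expected on each of the two NUMA nodes). A self-contained, hedged re-creation of both checks, with the values hard-coded from this run; the script below is illustrative, not the harness code:

#!/usr/bin/env bash
shopt -s extglob
nr_hugepages=1024 surp=0 resv=0

# Global check: the kernel total must equal requested + surplus + reserved.
total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
if (( total == nr_hugepages + surp + resv )); then
    echo "hugepage accounting consistent: HugePages_Total=$total"
else
    echo "mismatch: HugePages_Total=$total, expected $((nr_hugepages + surp + resv))" >&2
fi

# Per-node check: each node's meminfo reads "Node N HugePages_Total: X",
# so the value is awk field 4.
expected=512
for node_dir in /sys/devices/system/node/node+([0-9]); do
    got=$(awk '/HugePages_Total:/ {print $4}' "$node_dir/meminfo")
    printf '%s: HugePages_Total=%s (expected %s)\n' "${node_dir##*/}" "$got" "$expected"
done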
00:04:35.597 09:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:35.597 09:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:35.597 09:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:35.597 09:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:35.597 09:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0
00:04:35.597 09:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:04:35.597 09:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:35.597 09:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:35.597 09:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:35.597 09:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:35.597 09:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:35.597 09:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:35.597 09:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:35.597 09:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:35.597 09:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 24619412 kB' 'MemFree: 13443172 kB' 'MemUsed: 11176240 kB' 'SwapCached: 0 kB' 'Active: 5845396 kB' 'Inactive: 3329964 kB' 'Active(anon): 5586508 kB' 'Inactive(anon): 0 kB' 'Active(file): 258888 kB' 'Inactive(file): 3329964 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8872104 kB' 'Mapped: 80244 kB' 'AnonPages: 306412 kB' 'Shmem: 5283252 kB' 'KernelStack: 6840 kB' 'PageTables: 4396 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 118432 kB' 'Slab: 297224 kB' 'SReclaimable: 118432 kB' 'SUnreclaim: 178792 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[xtrace condensed: the per-key scan of node0's meminfo against \H\u\g\e\P\a\g\e\s\_\S\u\r\p begins here, each non-matching key logging "setup/common.sh@32 -- # continue"]
00:04:35.598 09:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read
-r var val _ 00:04:35.598 09:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.598 09:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:35.598 09:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.598 09:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.598 09:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.598 09:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:35.599 09:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:35.599 09:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:35.599 09:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:35.599 09:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:35.599 09:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:35.599 09:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:35.599 09:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=1 00:04:35.599 09:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:35.599 09:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:35.599 09:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:35.599 09:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:35.599 09:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:35.599 09:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:35.599 09:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:35.599 09:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:35.599 09:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:35.599 09:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 19407244 kB' 'MemFree: 15760480 kB' 'MemUsed: 3646764 kB' 'SwapCached: 0 kB' 'Active: 1379128 kB' 'Inactive: 176864 kB' 'Active(anon): 1243192 kB' 'Inactive(anon): 0 kB' 'Active(file): 135936 kB' 'Inactive(file): 176864 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1329664 kB' 'Mapped: 100924 kB' 'AnonPages: 226456 kB' 'Shmem: 1016864 kB' 'KernelStack: 5656 kB' 'PageTables: 3864 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 65612 kB' 'Slab: 241996 kB' 'SReclaimable: 65612 kB' 'SUnreclaim: 176384 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:35.599 09:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.599 09:53:20 setup.sh.hugepages.per_node_1G_alloc -- 
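What this wall of xtrace shows is setup/common.sh's get_meminfo: snapshot a meminfo file into an array, strip the "Node N " prefix, then read "key: value" pairs until the requested field matches and echo its value. A minimal sketch of that pattern, assuming the loop shape the trace implies (variable names mirror the script; the function body is a reconstruction, not the verbatim source):

#!/usr/bin/env bash
shopt -s extglob   # needed for the +([0-9]) pattern below

# Reconstruction (assumption) of the lookup pattern the trace shows.
get_meminfo() {
    local get=$1 node=$2
    local mem_f=/proc/meminfo var val _
    # with a node argument, read that node's snapshot instead of /proc/meminfo
    [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    local -a mem
    mapfile -t mem <"$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")   # strip the "Node N " prefix, as in the trace
    local line
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<<"$line"
        [[ $var == "$get" ]] || continue   # the compare/continue cycle above
        echo "$val"
        return 0
    done
    return 1
}

get_meminfo HugePages_Surp 1   # against the node1 snapshot above -> 0

Run against the node1 snapshot above, that call prints 0, which is exactly the echo 0 / return 0 pair the trace records.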
00:04:35.599 09:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.599 09:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue
[... same field-by-field scan over the node1 snapshot, MemFree through HugePages_Free, each non-matching key answered with continue ...]
00:04:35.859 09:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:35.859 09:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:04:35.859 09:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:04:35.859 09:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:35.859 09:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:35.859 09:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:35.859 09:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:35.859 09:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
node0=512 expecting 512
00:04:35.859 09:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:35.859 09:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:35.859 09:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:35.859 09:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512'
node1=512 expecting 512
00:04:35.859 09:53:20 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:35.859
00:04:35.859 real 0m1.647s
00:04:35.859 user 0m0.669s
00:04:35.859 sys 0m0.953s
00:04:35.859 09:53:20 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:35.859 09:53:20 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x
00:04:35.859 ************************************
00:04:35.859 END TEST per_node_1G_alloc
00:04:35.859 ************************************
00:04:35.859 09:53:20 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:04:35.859 09:53:20 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:35.859 09:53:20 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:35.859 09:53:20 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:04:35.859 ************************************
00:04:35.859 START TEST even_2G_alloc
00:04:35.859 ************************************ 00:04:35.859 09:53:20 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1125 -- # even_2G_alloc 00:04:35.859 09:53:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:04:35.859 09:53:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:04:35.859 09:53:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:35.859 09:53:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:35.859 09:53:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:35.859 09:53:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:35.860 09:53:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:35.860 09:53:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:35.860 09:53:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:35.860 09:53:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:35.860 09:53:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:35.860 09:53:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:35.860 09:53:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:35.860 09:53:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:35.860 09:53:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:35.860 09:53:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:35.860 09:53:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 512 00:04:35.860 09:53:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 1 00:04:35.860 09:53:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:35.860 09:53:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:35.860 09:53:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:35.860 09:53:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:35.860 09:53:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:35.860 09:53:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:04:35.860 09:53:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:04:35.860 09:53:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:04:35.860 09:53:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:35.860 09:53:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:37.241 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:37.241 0000:82:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:37.241 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:37.241 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:37.241 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:37.241 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:37.241 0000:00:04.2 (8086 
0e22): Already using the vfio-pci driver 00:04:37.241 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:37.241 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:37.241 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:37.241 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:37.241 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:37.241 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:37.241 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:37.241 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:37.241 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:37.241 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:37.241 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:04:37.241 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:04:37.241 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:37.241 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:37.241 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:37.241 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:37.241 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:37.241 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:37.241 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:37.241 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:37.241 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:37.241 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:37.241 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:37.241 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:37.241 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:37.241 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:37.241 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:37.241 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:37.241 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.241 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.241 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026656 kB' 'MemFree: 29209688 kB' 'MemAvailable: 32791324 kB' 'Buffers: 2704 kB' 'Cached: 10199136 kB' 'SwapCached: 0 kB' 'Active: 7224896 kB' 'Inactive: 3506828 kB' 'Active(anon): 6830072 kB' 'Inactive(anon): 0 kB' 'Active(file): 394824 kB' 'Inactive(file): 3506828 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 533072 kB' 'Mapped: 181280 kB' 'Shmem: 6300188 kB' 'KReclaimable: 184044 kB' 'Slab: 539312 kB' 'SReclaimable: 184044 kB' 'SUnreclaim: 355268 kB' 'KernelStack: 12512 kB' 'PageTables: 
7988 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353356 kB' 'Committed_AS: 7951988 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195888 kB' 'VmallocChunk: 0 kB' 'Percpu: 34944 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1752668 kB' 'DirectMap2M: 12847104 kB' 'DirectMap1G: 37748736 kB'
[... xtrace walks this snapshot key by key (MemTotal, MemFree, MemAvailable, ... HardwareCorrupted), answering every key that is not AnonHugePages with continue ...]
00:04:37.243 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:37.243 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:37.243 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:04:37.243 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:37.243 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:37.243 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:37.243 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:37.243 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:37.243 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:37.243 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:37.243 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:37.243 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:37.243 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:37.243 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:37.243 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.243 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:37.243 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026656 kB' 'MemFree: 29215628 kB' 'MemAvailable: 32797264 kB' 'Buffers: 2704 kB' 'Cached: 10199140 kB' 'SwapCached: 0 kB' 'Active: 7224496 kB' 'Inactive: 3506828 kB' 'Active(anon): 6829672 kB' 'Inactive(anon): 0 kB' 'Active(file): 394824 kB' 'Inactive(file): 3506828 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 532708 kB' 'Mapped: 181188 kB' 'Shmem: 6300192 kB' 'KReclaimable: 184044 kB' 'Slab: 539292 kB' 'SReclaimable: 184044 kB' 'SUnreclaim: 355248 kB' 'KernelStack: 12480 kB' 'PageTables: 7880 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353356 kB' 'Committed_AS: 7952008 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195872 kB' 'VmallocChunk: 0 kB' 'Percpu: 34944 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1752668 kB' 'DirectMap2M: 12847104 kB' 'DirectMap1G: 37748736 kB'
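For even_2G_alloc the harness asked for 2097152 kB, i.e. nr_hugepages=1024 two-megabyte pages, and the get_test_nr_hugepages_per_node trace further up assigned 512 to each of the two nodes. A minimal sketch of that even split, assuming the divide-and-carry shape the repeated nodes_test[_no_nodes - 1]=512 assignments suggest (a reconstruction, not the verbatim script):

#!/usr/bin/env bash
declare -a nodes_test

# Assumption: spread the page count across the nodes, filling from the
# last node index down, so any remainder lands on the lower nodes.
get_test_nr_hugepages_per_node() {
    local _nr_hugepages=$1 _no_nodes=$2
    while (( _no_nodes > 0 )); do
        nodes_test[_no_nodes - 1]=$(( _nr_hugepages / _no_nodes ))   # 1024/2 -> 512, then 512/1 -> 512
        (( _nr_hugepages -= nodes_test[_no_nodes - 1], _no_nodes-- ))
    done
}

get_test_nr_hugepages_per_node 1024 2
echo "node0=${nodes_test[0]} expecting 512"   # same shape as the harness output
echo "node1=${nodes_test[1]} expecting 512"

With HUGE_EVEN_ALLOC=yes this is exactly the 512/512 layout the verify step later checks against.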
setup/common.sh@31 -- # read -r var val _ 00:04:37.243 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.243 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.243 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.243 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.243 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.243 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.243 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.243 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.243 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.243 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.243 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.243 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.243 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.243 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.243 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.243 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.243 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.243 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.243 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.243 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.243 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.243 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.243 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.243 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.243 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.243 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.243 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.243 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.243 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.243 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.243 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.243 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.243 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.243 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 
-- # continue 00:04:37.243 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.243 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.243 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.243 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.243 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.243 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.243 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.243 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.243 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.244 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.244 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.244 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.244 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.244 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.244 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.244 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.244 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.244 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.244 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.244 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.244 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.244 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.244 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.244 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.244 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.244 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.244 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.244 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.244 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.244 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.244 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.244 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.244 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.244 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.244 09:53:22 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.244 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.244 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.244 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.244 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.244 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.244 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.244 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.244 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.244 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.244 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.244 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.244 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.244 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.244 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.244 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.244 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.244 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.244 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.244 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.244 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.244 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.244 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.244 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.244 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.244 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.244 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.244 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.244 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.244 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.244 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.244 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.244 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.244 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.244 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
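The backslash runs such as \H\u\g\e\P\a\g\e\s\_\S\u\r\p in the comparisons above are bash xtrace behaviour, not log corruption: the right-hand side of [[ ... == ... ]] is a glob pattern, the script quotes the key it is searching for to force a literal comparison, and xtrace re-prints that quoted word with every character escaped. A minimal sketch reproducing the effect (names assumed, not quoted from setup/common.sh):

  v=MemTotal get=HugePages_Surp
  set -x
  [[ $v == "$get" ]]   # quoted RHS: literal match; xtrace prints == \H\u\g\e...
  [[ $v == $get ]]     # unquoted RHS: $get would be treated as a glob pattern
  set +x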
00:04:37.244 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.244 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.244 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.244 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.244 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.244 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.244 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.244 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.244 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.244 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.244 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.244 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.244 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.244 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.244 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.244 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.244 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.244 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.244 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.244 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.244 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.244 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.244 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.244 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.244 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.244 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.244 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.244 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.244 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.244 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.244 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.244 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.244 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.244 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
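Piecing together the common.sh steps this scan repeats (@28 mapfile, @29 prefix strip, @31 IFS/read, @32 compare-and-continue, plus the @33 echo/return that lands a few wrapped lines further down), get_meminfo amounts to: slurp the chosen meminfo file into an array, normalise away any "Node N " prefix, then walk the entries until the requested key matches and emit its value on stdout. A hedged reconstruction, assuming extglob (which the trace's +([0-9]) patterns require) and filling in loop plumbing the trace does not show verbatim:

  shopt -s extglob   # needed for the +([0-9]) pattern below

  get_meminfo() {
      local get=$1 node=${2:-}
      local var val
      local mem_f=/proc/meminfo mem
      # A node id switches the source to the per-node sysfs file.
      [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
          mem_f=/sys/devices/system/node/node$node/meminfo
      mapfile -t mem < "$mem_f"
      # sysfs lines read "Node 0 MemTotal: ..."; drop that prefix.
      mem=("${mem[@]#Node +([0-9]) }")
      while IFS=': ' read -r var val _; do
          [[ $var == "$get" ]] || continue
          echo "$val"          # value captured by the caller
          return 0
      done < <(printf '%s\n' "${mem[@]}")
      return 1                 # key not present
  }

The [[ -n '' ]] step at @25 is a further branch that stays inert in this run, so the sketch omits it.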
00:04:37.244 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.244 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.244 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.244 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.244 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.244 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.244 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.244 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.244 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.244 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.244 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.244 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.244 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.244 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.244 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.244 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.244 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.244 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.244 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.244 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.245 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.245 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.245 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.245 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.245 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.245 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.245 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.245 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.245 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.245 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.245 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.245 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.245 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.245 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.245 09:53:22 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.245 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.245 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.245 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.245 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.245 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.245 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.245 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.245 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.245 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.245 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.245 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.245 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.245 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.245 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.245 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.245 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.245 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.245 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.245 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.245 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.245 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.245 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:37.245 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:37.245 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:37.245 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:37.245 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:37.245 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:37.245 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:37.245 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:37.245 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:37.245 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:37.245 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:37.245 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:37.245 09:53:22 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:37.245 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.245 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.245 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026656 kB' 'MemFree: 29215152 kB' 'MemAvailable: 32796788 kB' 'Buffers: 2704 kB' 'Cached: 10199156 kB' 'SwapCached: 0 kB' 'Active: 7224624 kB' 'Inactive: 3506828 kB' 'Active(anon): 6829800 kB' 'Inactive(anon): 0 kB' 'Active(file): 394824 kB' 'Inactive(file): 3506828 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 532828 kB' 'Mapped: 181188 kB' 'Shmem: 6300208 kB' 'KReclaimable: 184044 kB' 'Slab: 539364 kB' 'SReclaimable: 184044 kB' 'SUnreclaim: 355320 kB' 'KernelStack: 12576 kB' 'PageTables: 7852 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353356 kB' 'Committed_AS: 7953396 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195920 kB' 'VmallocChunk: 0 kB' 'Percpu: 34944 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1752668 kB' 'DirectMap2M: 12847104 kB' 'DirectMap1G: 37748736 kB' 00:04:37.245 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.245 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.245 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.245 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.245 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.245 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.245 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.245 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.245 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.245 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.245 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.245 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.245 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.245 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.245 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.245 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.245 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.245 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.245 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:04:37.245 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.245 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.245 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.245 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.245 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.245 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.245 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.245 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.245 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.245 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.245 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.245 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.245 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.245 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.245 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.245 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.245 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.245 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.245 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.245 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.245 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.245 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.245 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.245 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.245 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.245 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.245 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.246 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.246 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.246 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.246 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.246 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.246 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.246 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
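The echo 0 / return 0 pair that closed the first scan is how get_meminfo hands its result back: the caller captures stdout with command substitution, which is why the trace shows setup/hugepages.sh@99 assigning surp=0 and then immediately starting this second scan for HugePages_Rsvd. The call sites are plausibly of this shape (an inference from the @99/@100 entries, not quoted from hugepages.sh):

  surp=$(get_meminfo HugePages_Surp)   # hugepages.sh@99  -> surp=0 in this run
  resv=$(get_meminfo HugePages_Rsvd)   # hugepages.sh@100 -> resv=0 in this run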
00:04:37.246 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.246 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.246 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.246 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.246 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.246 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.246 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.246 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.246 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.246 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.246 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.246 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.246 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.246 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.246 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.246 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.246 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.246 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.246 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.246 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.246 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.246 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.246 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.246 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.246 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.246 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.246 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.246 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.246 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.246 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.246 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.246 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.246 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.246 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.246 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:04:37.246 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.246 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.246 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.246 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.246 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.246 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.246 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.246 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.246 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.246 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.246 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.246 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.246 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.246 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.246 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.246 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.246 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.246 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.246 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.246 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.246 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.246 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.246 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.246 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.246 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.246 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.246 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.246 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.246 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.246 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.246 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.246 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.246 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.246 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.246 
09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.246 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.246 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.246 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.246 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.246 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.246 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.246 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.246 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.246 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.246 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.246 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.246 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.246 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.246 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.246 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.246 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.246 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.246 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.246 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.247 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.247 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.247 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.247 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.247 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.247 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.247 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.247 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.247 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.247 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.247 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.247 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.247 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.247 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.247 09:53:22 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.247 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.247 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.247 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.247 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.247 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.247 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.247 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.247 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.247 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.247 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.247 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.247 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.247 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.247 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.247 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.247 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.247 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.247 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.247 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.247 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.247 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.247 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.247 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.247 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.247 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.247 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.247 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.247 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.247 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.247 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.247 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.247 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.247 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.247 09:53:22 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:37.247 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.247 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.247 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.247 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.247 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.247 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.247 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.247 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.247 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.247 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:37.247 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:37.247 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:37.247 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:37.247 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:37.247 nr_hugepages=1024 00:04:37.247 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:37.247 resv_hugepages=0 00:04:37.247 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:37.247 surplus_hugepages=0 00:04:37.247 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:37.247 anon_hugepages=0 00:04:37.247 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:37.247 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:37.247 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:37.247 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:37.247 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:37.247 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:37.247 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:37.247 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:37.247 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:37.247 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:37.247 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:37.247 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:37.247 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.247 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.247 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026656 kB' 'MemFree: 29216252 
kB' 'MemAvailable: 32797888 kB' 'Buffers: 2704 kB' 'Cached: 10199180 kB' 'SwapCached: 0 kB' 'Active: 7225300 kB' 'Inactive: 3506828 kB' 'Active(anon): 6830476 kB' 'Inactive(anon): 0 kB' 'Active(file): 394824 kB' 'Inactive(file): 3506828 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 533512 kB' 'Mapped: 181188 kB' 'Shmem: 6300232 kB' 'KReclaimable: 184044 kB' 'Slab: 539364 kB' 'SReclaimable: 184044 kB' 'SUnreclaim: 355320 kB' 'KernelStack: 12720 kB' 'PageTables: 8504 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353356 kB' 'Committed_AS: 7954412 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196000 kB' 'VmallocChunk: 0 kB' 'Percpu: 34944 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1752668 kB' 'DirectMap2M: 12847104 kB' 'DirectMap1G: 37748736 kB' 00:04:37.247 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.247 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.247 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.247 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.247 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.247 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.247 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.247 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.247 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.247 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.247 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.247 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.247 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.247 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.247 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.247 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.247 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.247 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.247 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.248 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.248 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.248 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.248 09:53:22 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.248 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.248 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.248 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.248 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.248 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.248 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.248 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.248 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.248 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.248 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.248 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.248 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.248 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.248 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.248 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.248 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.248 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.248 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.248 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.248 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.248 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.248 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.248 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.248 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.248 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.248 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.248 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.248 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.248 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.248 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.248 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.248 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.248 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.248 09:53:22 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.248 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.248 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.248 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.248 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.248 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.248 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.248 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.248 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.248 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.248 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.248 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.248 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.248 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.248 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.248 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.248 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.248 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.248 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.248 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.248 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.248 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.248 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.248 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.248 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.248 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.248 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.248 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.248 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.248 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.248 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.248 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.248 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.248 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.248 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:04:37.248 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.248 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.248 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.248 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.248 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.248 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.248 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.248 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.248 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.248 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.248 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.248 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.248 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.248 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.248 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.248 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.248 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.248 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.248 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.248 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.248 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.248 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.248 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.248 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.248 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.248 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.248 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.248 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.248 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.248 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.248 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.248 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.248 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.248 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
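The arithmetic guards bracketing this third scan are plain bookkeeping: (( 1024 == nr_hugepages + surp + resv )) at hugepages.sh@107 demands that the 1024 pages the test configured equal what the kernel reports once surplus and reserved pages are folded in, and the meminfo dump above is internally consistent the same way, since 1024 pages x 2048 kB Hugepagesize = 2097152 kB, exactly the Hugetlb figure. The same checks spelled out, reusing the get_meminfo sketch above (variable names assumed):

  nr_hugepages=1024
  surp=$(get_meminfo HugePages_Surp)     # 0 in this run
  resv=$(get_meminfo HugePages_Rsvd)     # 0 in this run
  total=$(get_meminfo HugePages_Total)   # 1024 in this run
  pagesz=$(get_meminfo Hugepagesize)     # 2048 (kB)

  (( total == nr_hugepages + surp + resv )) || echo 'hugepage accounting is off' >&2
  (( total * pagesz == 2097152 ))          # matches the Hugetlb: 2097152 kB line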
00:04:37.248 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.248 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.248 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.248 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.248 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.248 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.248 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.248 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.248 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.248 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.248 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.248 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.249 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.249 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.249 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.249 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.249 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.249 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.249 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.249 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.249 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.249 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.249 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.249 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.249 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.249 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.249 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.249 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.249 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.249 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.249 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.249 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.249 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.249 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.249 09:53:22 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.249 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.249 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.249 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.249 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.249 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.249 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.249 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.249 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.249 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.249 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.249 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.249 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.249 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.249 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.249 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.249 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.249 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.249 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.249 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.249 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.249 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.249 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.249 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.249 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.249 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.249 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.249 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.249 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.249 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:37.249 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.249 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.249 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.249 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
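The HugePages_Total key has just matched, so the helper echoes 1024 and returns 0 (the first entries of the next trace line), and hugepages.sh@110 then verifies the kernel-wide count against what the test configured. With this run's values the check is trivially satisfied; roughly (nr_hugepages, surp and resv come from earlier get_meminfo calls, and this snippet is a sketch of that single check, not the full script):

    # hugepages.sh@110, with this run's values: 1024 == 1024 + 0 + 0
    nr_hugepages=1024 surp=0 resv=0
    (( $(get_meminfo HugePages_Total) == nr_hugepages + surp + resv )) \
        && echo "global hugepage count consistent"

get_nodes then enumerates /sys/devices/system/node/node+([0-9]), finds the two NUMA nodes (no_nodes=2), records the expected even split of 512 pages per node, and re-runs the same key-by-key scan against each node's meminfo for HugePages_Surp, which is what the remainder of this test's trace shows.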
00:04:37.249 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:04:37.249 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:37.249 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:37.249 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:37.249 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:04:37.249 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:37.249 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:37.249 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:37.249 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:37.249 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:37.249 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:37.249 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:37.249 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:37.249 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:37.249 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:37.249 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:04:37.249 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:37.249 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:37.249 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:37.249 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:37.249 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:37.249 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:37.249 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:37.249 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.249 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.249 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 24619412 kB' 'MemFree: 13456352 kB' 'MemUsed: 11163060 kB' 'SwapCached: 0 kB' 'Active: 5845308 kB' 'Inactive: 3329964 kB' 'Active(anon): 5586420 kB' 'Inactive(anon): 0 kB' 'Active(file): 258888 kB' 'Inactive(file): 3329964 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8872228 kB' 'Mapped: 80244 kB' 'AnonPages: 306220 kB' 'Shmem: 5283376 kB' 'KernelStack: 6808 kB' 'PageTables: 4032 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 118432 kB' 'Slab: 297292 kB' 'SReclaimable: 118432 kB' 'SUnreclaim: 178860 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 
'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:37.249 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.249 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.249 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.249 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.249 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.249 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.249 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.249 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.249 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.249 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.249 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.249 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.249 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.249 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.249 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.249 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.249 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.249 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.249 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.249 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.249 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.249 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.249 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.249 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.250 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.250 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.250 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.250 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.250 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.250 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.250 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.250 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.250 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.250 09:53:22 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # continue 00:04:37.250 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.250 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.250 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.250 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.250 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.250 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.250 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.250 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.250 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.250 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.250 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.250 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.250 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.250 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.250 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.250 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.250 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.250 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.250 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.250 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.250 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.250 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.250 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.250 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.250 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.250 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.250 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.250 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.250 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.250 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.250 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.250 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.250 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.250 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.250 09:53:22 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.250 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.250 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.250 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.250 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.250 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.250 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.250 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.250 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.250 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.250 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.250 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.250 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.250 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.250 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.250 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.250 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.250 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.250 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.250 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.250 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.250 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.250 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.250 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.250 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.250 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.250 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.250 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.250 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.250 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.250 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.250 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.250 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.250 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.250 09:53:22 setup.sh.hugepages.even_2G_alloc 
-- setup/common.sh@31 -- # IFS=': ' 00:04:37.250 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.250 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.250 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.250 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.250 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.250 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.250 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.250 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.250 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.250 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.250 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.250 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.250 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.250 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.250 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.250 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.250 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.250 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.250 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.250 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.250 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.250 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.250 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.250 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.250 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.250 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.250 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.250 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.250 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.250 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.250 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.250 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.250 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.250 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ 
HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.250 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.250 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.250 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.250 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.250 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.250 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.250 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.251 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.251 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:37.251 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:37.251 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:37.251 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:37.251 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:37.251 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:37.251 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:37.251 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1 00:04:37.251 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:37.251 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:37.251 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:37.251 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:37.251 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:37.251 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:37.251 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:37.251 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.251 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.251 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 19407244 kB' 'MemFree: 15761220 kB' 'MemUsed: 3646024 kB' 'SwapCached: 0 kB' 'Active: 1380704 kB' 'Inactive: 176864 kB' 'Active(anon): 1244768 kB' 'Inactive(anon): 0 kB' 'Active(file): 135936 kB' 'Inactive(file): 176864 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1329676 kB' 'Mapped: 100944 kB' 'AnonPages: 227952 kB' 'Shmem: 1016876 kB' 'KernelStack: 5976 kB' 'PageTables: 5016 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 65612 kB' 'Slab: 242072 kB' 'SReclaimable: 65612 kB' 'SUnreclaim: 176460 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 
'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:37.251 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.251 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.251 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.251 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.251 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.251 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.251 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.251 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.251 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.251 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.251 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.251 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.251 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.251 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.251 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.251 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.251 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.251 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.251 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.251 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.251 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.251 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.251 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.251 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.251 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.251 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.251 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.251 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.251 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.251 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.251 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.251 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.251 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.251 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # 
continue 00:04:37.251 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.251 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.251 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.251 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.251 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.251 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.251 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.251 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.251 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.251 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.251 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.251 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.251 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.251 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.251 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.251 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.251 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.251 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.251 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.251 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.251 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.251 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.251 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.251 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.251 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.251 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.252 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.252 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.252 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.252 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.252 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.252 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.252 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.252 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.252 09:53:22 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.252 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.252 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.252 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.252 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.252 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.252 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.252 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.252 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.252 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.252 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.252 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.252 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.252 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.252 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.252 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.252 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.252 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.252 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.252 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.252 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.252 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.252 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.252 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.252 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.252 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.252 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.252 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.252 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.252 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.252 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.252 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.252 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.252 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.252 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
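For reference, the node0 and node1 dumps feeding these two scans (the printf '%s\n' blocks above) reduce to the following hugepage-relevant values:

    node   MemTotal     MemFree      MemUsed      HugePages_Total  HugePages_Free  HugePages_Surp
    0      24619412 kB  13456352 kB  11163060 kB  512              512             0
    1      19407244 kB  15761220 kB   3646024 kB  512              512             0

With zero surplus on either node, nodes_test stays at 512 per node, which is exactly what the "node0=512 expecting 512" / "node1=512 expecting 512" messages at the end of this test confirm before even_2G_alloc passes and odd_alloc starts.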
00:04:37.252 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.252 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.252 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.252 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.252 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.252 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.252 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.252 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.252 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.252 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.252 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.252 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.252 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.252 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.252 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.252 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.252 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.252 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.252 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.252 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.252 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.252 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.252 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.252 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.252 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.252 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.252 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.252 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.252 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.252 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.512 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.512 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.512 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.512 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.512 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.512 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.512 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.512 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.512 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:37.512 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:37.512 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:37.512 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:37.512 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:37.512 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:37.512 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:37.512 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:37.512 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:37.512 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:37.512 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:37.512 node0=512 expecting 512 00:04:37.512 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:37.512 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:37.512 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:37.512 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:04:37.512 node1=512 expecting 512 00:04:37.512 09:53:22 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:37.512 00:04:37.512 real 0m1.586s 00:04:37.512 user 0m0.620s 00:04:37.512 sys 0m0.934s 00:04:37.512 09:53:22 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:37.512 09:53:22 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:37.512 ************************************ 00:04:37.512 END TEST even_2G_alloc 00:04:37.512 ************************************ 00:04:37.512 09:53:22 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:04:37.512 09:53:22 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:37.512 09:53:22 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:37.512 09:53:22 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:37.512 ************************************ 00:04:37.512 START TEST odd_alloc 00:04:37.512 ************************************ 00:04:37.512 09:53:22 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1125 -- # odd_alloc 00:04:37.512 09:53:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:04:37.512 09:53:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:04:37.512 09:53:22 
setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:37.512 09:53:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:37.512 09:53:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:04:37.512 09:53:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:37.512 09:53:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:37.512 09:53:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:37.512 09:53:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:04:37.512 09:53:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:37.512 09:53:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:37.512 09:53:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:37.512 09:53:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:37.512 09:53:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:37.512 09:53:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:37.512 09:53:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:04:37.512 09:53:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 513 00:04:37.512 09:53:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 1 00:04:37.512 09:53:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:37.512 09:53:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513 00:04:37.512 09:53:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:37.512 09:53:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:37.512 09:53:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:37.512 09:53:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:04:37.512 09:53:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:04:37.512 09:53:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:04:37.512 09:53:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:37.512 09:53:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:38.895 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:38.895 0000:82:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:38.895 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:38.895 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:38.895 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:38.895 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:38.895 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:38.895 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:38.895 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:38.895 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:38.895 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:38.895 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:38.895 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:38.895 0000:80:04.3 
(8086 0e23): Already using the vfio-pci driver 00:04:38.895 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:38.895 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:38.895 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:38.895 09:53:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:04:38.895 09:53:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:04:38.895 09:53:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:38.895 09:53:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:38.895 09:53:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:38.895 09:53:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:38.895 09:53:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:38.895 09:53:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:38.895 09:53:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:38.895 09:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:38.895 09:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:38.895 09:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:38.895 09:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:38.895 09:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:38.895 09:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:38.895 09:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:38.895 09:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:38.895 09:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:38.895 09:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.895 09:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.895 09:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026656 kB' 'MemFree: 29202564 kB' 'MemAvailable: 32784200 kB' 'Buffers: 2704 kB' 'Cached: 10199272 kB' 'SwapCached: 0 kB' 'Active: 7222240 kB' 'Inactive: 3506828 kB' 'Active(anon): 6827416 kB' 'Inactive(anon): 0 kB' 'Active(file): 394824 kB' 'Inactive(file): 3506828 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 530384 kB' 'Mapped: 180260 kB' 'Shmem: 6300324 kB' 'KReclaimable: 184044 kB' 'Slab: 539292 kB' 'SReclaimable: 184044 kB' 'SUnreclaim: 355248 kB' 'KernelStack: 12480 kB' 'PageTables: 7780 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29352332 kB' 'Committed_AS: 7940752 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195776 kB' 'VmallocChunk: 0 kB' 'Percpu: 34944 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1752668 kB' 
'DirectMap2M: 12847104 kB' 'DirectMap1G: 37748736 kB' 00:04:38.895 09:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.895 09:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.895 09:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.895 09:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.895 09:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.895 09:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.895 09:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.895 09:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.895 09:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.895 09:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.895 09:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.895 09:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.895 09:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.895 09:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.895 09:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.895 09:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.895 09:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.895 09:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.895 09:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.895 09:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.895 09:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.895 09:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.895 09:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.895 09:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.895 09:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.895 09:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.895 09:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.895 09:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.895 09:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.895 09:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.895 09:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.895 09:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:38.895 09:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:38.895 09:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:38.895 09:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:38.895 09:53:23 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _
00:04:38.895 09:53:23 setup.sh.hugepages.odd_alloc -- [the xtrace repeats the setup/common.sh@31/@32 triplet -- IFS=': '; read -r var val _; [[ <key> == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]; continue -- for every non-matching /proc/meminfo key from Inactive(anon) through HardwareCorrupted]
00:04:38.896 09:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:38.896 09:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:04:38.896 09:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:04:38.896 09:53:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0
00:04:38.896 09:53:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:38.896 09:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:38.896 09:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:04:38.896 09:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:04:38.896 09:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:38.896 09:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:38.896 09:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:38.896 09:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:38.896 09:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:38.896 09:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:38.896 09:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:38.896 09:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:38.897 09:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026656 kB' 'MemFree: 29203220 kB' 'MemAvailable: 32784856 kB' 'Buffers: 2704 kB' 'Cached: 10199280 kB' 'SwapCached: 0 kB' 'Active: 7221680 kB' 'Inactive: 3506828 kB' 'Active(anon): 6826856 kB' 'Inactive(anon): 0 kB' 'Active(file): 394824 kB' 'Inactive(file): 3506828 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 529820 kB' 'Mapped: 180168 kB' 'Shmem: 6300332 kB' 'KReclaimable: 184044 kB' 'Slab: 539268 kB' 'SReclaimable: 184044 kB' 'SUnreclaim: 355224 kB' 'KernelStack: 12448 kB' 'PageTables: 7652 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29352332 kB' 'Committed_AS: 7940772 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195728 kB' 'VmallocChunk: 0 kB' 'Percpu: 34944 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1752668 kB' 'DirectMap2M: 12847104 kB' 'DirectMap1G: 37748736 kB'
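For readers decoding this xtrace: get_meminfo in setup/common.sh walks the meminfo snapshot with IFS=': ' and read -r var val _ and prints the value column of the first key matching its argument, which is what produces one comparison/continue pair per key above. A minimal sketch of that pattern, reconstructed from the trace rather than copied from setup/common.sh (the per-node "Node <id> " prefix strip, done above via mem=("${mem[@]#Node +([0-9]) }"), is omitted here):

  get_meminfo() {
      # usage: get_meminfo <key> [numa-node]; prints the value column, e.g. "0"
      local get=$1 node=$2
      local mem_f=/proc/meminfo var val _
      # with a node id, read the per-NUMA-node copy under sysfs instead
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      while IFS=': ' read -r var val _; do
          [[ $var == "$get" ]] && { echo "$val"; return 0; }
      done <"$mem_f"
      return 1
  }

  anon=$(get_meminfo AnonHugePages)   # -> 0 in the run logged here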
00:04:38.897 09:53:23 setup.sh.hugepages.odd_alloc -- [the xtrace repeats the setup/common.sh@31/@32 triplet -- IFS=': '; read -r var val _; [[ <key> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]; continue -- for every non-matching key from MemTotal through HugePages_Rsvd]
00:04:38.898 09:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:38.898 09:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:04:38.898 09:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:04:38.898 09:53:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0
00:04:38.898 09:53:23 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:38.898 09:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:38.898 09:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:04:38.898 09:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:04:38.898 09:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:38.898 09:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:38.898 09:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:38.898 09:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:38.898 09:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:38.898 09:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:38.898 09:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:38.898 09:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:38.898 09:53:23 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026656 kB' 'MemFree: 29203956 kB' 'MemAvailable: 32785592 kB' 'Buffers: 2704 kB' 'Cached: 10199296 kB' 'SwapCached: 0 kB' 'Active: 7221724 kB' 'Inactive: 3506828 kB' 'Active(anon): 6826900 kB' 'Inactive(anon): 0 kB' 'Active(file): 394824 kB' 'Inactive(file): 3506828 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 529824 kB' 'Mapped: 180168 kB' 'Shmem: 6300348 kB' 'KReclaimable: 184044 kB' 'Slab: 539268 kB' 'SReclaimable: 184044 kB' 'SUnreclaim: 355224 kB' 'KernelStack: 12448 kB' 'PageTables: 7652 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29352332 kB' 'Committed_AS: 7940792 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195728 kB' 'VmallocChunk: 0 kB' 'Percpu: 34944 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1752668 kB' 'DirectMap2M: 12847104 kB' 'DirectMap1G: 37748736 kB'
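Each get_meminfo call above rescans the snapshot from MemTotal down under xtrace, which is why a single odd_alloc verification emits thousands of trace lines. A hypothetical single-pass variant (not what setup/common.sh does) would collect all the HugePages_* counters in one read:

  declare -A hp
  while IFS=': ' read -r var val _; do
      # keep only the hugepage counters; they are bare counts, not kB
      [[ $var == HugePages_* ]] && hp[$var]=$val
  done </proc/meminfo
  printf 'total=%s free=%s rsvd=%s surp=%s\n' \
      "${hp[HugePages_Total]}" "${hp[HugePages_Free]}" \
      "${hp[HugePages_Rsvd]}" "${hp[HugePages_Surp]}"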
00:04:38.899 09:53:23 setup.sh.hugepages.odd_alloc -- [the xtrace repeats the setup/common.sh@31/@32 triplet -- IFS=': '; read -r var val _; [[ <key> == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]; continue -- for every non-matching key from MemTotal through HugePages_Free; the wall-clock second rolls over from 09:53:23 to 09:53:24 during this scan]
00:04:38.900 09:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:38.900 09:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:04:38.900 09:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:04:38.900 09:53:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0
00:04:38.900 09:53:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025
00:04:38.900 nr_hugepages=1025
00:04:38.900 09:53:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:38.900 resv_hugepages=0
00:04:38.900 09:53:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:38.900 surplus_hugepages=0
00:04:38.900 09:53:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:38.900 anon_hugepages=0
00:04:38.900 09:53:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv ))
00:04:38.900 09:53:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages ))
00:04:38.900 09:53:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:38.900 09:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:38.900 09:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:04:38.900 09:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:04:38.900 09:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:38.900 09:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:38.900 09:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:38.900 09:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
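Stripped of trace noise, the check setup/hugepages.sh runs at @97-@110 amounts to the sketch below (using the get_meminfo sketch above). The left-hand 1025 in the (( ... )) lines arrives already expanded in the trace and its source is not visible in this excerpt, so the sketch simply uses this run's values:

  nr_hugepages=1025                     # odd page count requested by the odd_alloc test
  anon=$(get_meminfo AnonHugePages)     # 0
  surp=$(get_meminfo HugePages_Surp)    # 0
  resv=$(get_meminfo HugePages_Rsvd)    # 0
  echo "nr_hugepages=$nr_hugepages"
  echo "resv_hugepages=$resv"
  echo "surplus_hugepages=$surp"
  echo "anon_hugepages=$anon"
  (( 1025 == nr_hugepages + surp + resv ))   # 1025 == 1025 + 0 + 0
  (( 1025 == nr_hugepages ))                 # both hold: the allocation is fully accounted for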
00:04:38.900 09:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:38.900 09:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:38.900 09:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:38.901 09:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:38.901 09:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026656 kB' 'MemFree: 29205880 kB' 'MemAvailable: 32787516 kB' 'Buffers: 2704 kB' 'Cached: 10199316 kB' 'SwapCached: 0 kB' 'Active: 7221736 kB' 'Inactive: 3506828 kB' 'Active(anon): 6826912 kB' 'Inactive(anon): 0 kB' 'Active(file): 394824 kB' 'Inactive(file): 3506828 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 529824 kB' 'Mapped: 180168 kB' 'Shmem: 6300368 kB' 'KReclaimable: 184044 kB' 'Slab: 539268 kB' 'SReclaimable: 184044 kB' 'SUnreclaim: 355224 kB' 'KernelStack: 12448 kB' 'PageTables: 7652 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29352332 kB' 'Committed_AS: 7940812 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195728 kB' 'VmallocChunk: 0 kB' 'Percpu: 34944 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1752668 kB' 'DirectMap2M: 12847104 kB' 'DirectMap1G: 37748736 kB'
00:04:38.901 09:53:24 setup.sh.hugepages.odd_alloc -- [the xtrace repeats the setup/common.sh@31/@32 triplet -- IFS=': '; read -r var val _; [[ <key> == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]; continue -- for each non-matching key from MemTotal onward; the log clock steps from 00:04:38.901 to 00:04:39.164 during this scan, and this excerpt ends mid-scan at the Bounce comparison]
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.164 09:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.164 09:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.164 09:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.164 09:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.164 09:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.164 09:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.164 09:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.164 09:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.164 09:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.164 09:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.164 09:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.164 09:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.164 09:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.164 09:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.164 09:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.164 09:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.164 09:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.164 09:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.164 09:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.164 09:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.164 09:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.164 09:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.164 09:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.164 09:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.164 09:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.164 09:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.164 09:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.164 09:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.164 09:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.164 09:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.164 09:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.164 09:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.164 09:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.164 09:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.164 09:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.164 09:53:24 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.164 09:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.164 09:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.164 09:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.164 09:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.164 09:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.164 09:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.164 09:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.164 09:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.164 09:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.164 09:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.164 09:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.164 09:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.164 09:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.165 09:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.165 09:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.165 09:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.165 09:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.165 09:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.165 09:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.165 09:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.165 09:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.165 09:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.165 09:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.165 09:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.165 09:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.165 09:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.165 09:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.165 09:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.165 09:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:39.165 09:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:39.165 09:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:39.165 09:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:39.165 09:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:04:39.165 09:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:39.165 09:53:24 setup.sh.hugepages.odd_alloc -- 
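The run above is setup/common.sh's get_meminfo walking the whole of /proc/meminfo one key at a time until HugePages_Total matches, then echoing its value. A minimal sketch of that lookup pattern, reconstructed from the xtrace (names follow the trace; the real SPDK helper may differ in detail):

  #!/usr/bin/env bash
  # Look up one key (e.g. HugePages_Total) the way the traced helper does:
  # slurp the file, strip any per-node prefix, then match key by key.
  shopt -s extglob
  get_meminfo() {
      local get=$1 node=${2:-}
      local var val
      local mem_f=/proc/meminfo mem line
      # Per-node counters live under /sys; fall back to the global file.
      if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      mapfile -t mem < "$mem_f"
      mem=("${mem[@]#Node +([0-9]) }")   # drop the "Node N " prefix, if any
      for line in "${mem[@]}"; do
          IFS=': ' read -r var val _ <<< "$line"
          [[ $var == "$get" ]] && { echo "$val"; return 0; }
      done
      return 1
  }
  get_meminfo HugePages_Total   # -> 1025 on the box traced above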
00:04:39.165 09:53:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv ))
00:04:39.165 09:53:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:04:39.165 09:53:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node
00:04:39.165 09:53:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:39.165 09:53:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:04:39.165 09:53:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:39.165 09:53:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513
00:04:39.165 09:53:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:04:39.165 09:53:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:39.165 09:53:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:39.165 09:53:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:39.165 09:53:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:39.165 09:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:39.165 09:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0
00:04:39.165 09:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:04:39.165 09:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:39.165 09:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:39.165 09:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:39.165 09:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:39.165 09:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:39.165 09:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:39.165 09:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:39.165 09:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:39.165 09:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 24619412 kB' 'MemFree: 13467976 kB' 'MemUsed: 11151436 kB' 'SwapCached: 0 kB' 'Active: 5844096 kB' 'Inactive: 3329964 kB' 'Active(anon): 5585208 kB' 'Inactive(anon): 0 kB' 'Active(file): 258888 kB' 'Inactive(file): 3329964 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8872292 kB' 'Mapped: 79512 kB' 'AnonPages: 304980 kB' 'Shmem: 5283440 kB' 'KernelStack: 6792 kB' 'PageTables: 3924 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 118432 kB' 'Slab: 297272 kB' 'SReclaimable: 118432 kB' 'SUnreclaim: 178840 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:04:39.165 09:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:39.165 09:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue
[... the @31 IFS=': '/read and @32 compare/continue records repeat for every remaining node0 meminfo key ...]
00:04:39.166 09:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:39.166 09:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:04:39.166 09:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:04:39.166 09:53:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:39.166 09:53:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:39.166 09:53:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:39.166 09:53:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:04:39.166 09:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:39.166 09:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1
00:04:39.166 09:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:04:39.166 09:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:39.166 09:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:39.166 09:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:04:39.166 09:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:04:39.166 09:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:39.166 09:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:39.166 09:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:39.166 09:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:39.166 09:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 19407244 kB' 'MemFree: 15739828 kB' 'MemUsed: 3667416 kB' 'SwapCached: 0 kB' 'Active: 1377696 kB' 'Inactive: 176864 kB' 'Active(anon): 1241760 kB' 'Inactive(anon): 0 kB' 'Active(file): 135936 kB' 'Inactive(file): 176864 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1329772 kB' 'Mapped: 100656 kB' 'AnonPages: 224840 kB' 'Shmem: 1016972 kB' 'KernelStack: 5656 kB' 'PageTables: 3728 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 65612 kB' 'Slab: 241996 kB' 'SReclaimable: 65612 kB' 'SUnreclaim: 176384 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0'
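Both per-node probes above resolve mem_f to /sys/devices/system/node/nodeN/meminfo, whose lines carry a "Node N " prefix that the @29 expansion strips before parsing. For illustration only (not part of the test), the same per-node totals can be pulled with a short loop:

  # Per-node hugepage totals, read the same way (prefix and all):
  for f in /sys/devices/system/node/node*/meminfo; do
      node=${f#*/node/node}; node=${node%%/*}
      total=$(awk '/HugePages_Total/ {print $NF}' "$f")
      echo "node$node: HugePages_Total=$total"   # this run: node0 512, node1 513
  done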
00:04:39.166 09:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:39.166 09:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue
[... the @31 IFS=': '/read and @32 compare/continue records repeat for every remaining node1 meminfo key ...]
00:04:39.167 09:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:39.168 09:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:04:39.168 09:53:24 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:04:39.168 09:53:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:39.168 09:53:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:39.168 09:53:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:39.168 09:53:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
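With both surplus reads back, the @126/@127 loops fold the expected counts (nodes_test) and the kernel-reported ones (nodes_sys) into sorted_t and sorted_s, using each count as an array index so the key lists come back sorted; the @130 comparison that follows therefore only requires the multiset {512, 513} to match, not which node received the odd page. A condensed sketch of that trick, with values from this run and array names from the trace (an illustrative reconstruction, not the exact SPDK source):

  # 1025 pages across two nodes may land 512+513 or 513+512; the test
  # accepts either by comparing sorted sets of counts rather than
  # per-node positions.
  nodes_test=(512 513)    # what the test configured per node
  nodes_sys=(513 512)     # what the kernel reported (order may differ)
  declare -a sorted_t sorted_s
  for node in "${!nodes_test[@]}"; do
      sorted_t[nodes_test[node]]=1   # the count itself is the index,
      sorted_s[nodes_sys[node]]=1    # so ${!arr[*]} lists counts sorted
  done
  [[ ${!sorted_t[*]} == "${!sorted_s[*]}" ]] && echo OK   # "512 513" == "512 513"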
00:04:39.168 09:53:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513'
00:04:39.168 node0=512 expecting 513
00:04:39.168 09:53:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:39.168 09:53:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:39.168 09:53:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:39.168 09:53:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512'
00:04:39.168 node1=513 expecting 512
00:04:39.168 09:53:24 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]]
00:04:39.168
00:04:39.168 real 0m1.685s
00:04:39.168 user 0m0.693s
00:04:39.168 sys 0m0.968s
00:04:39.168 09:53:24 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable
00:04:39.168 09:53:24 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x
00:04:39.168 ************************************
00:04:39.168 END TEST odd_alloc
00:04:39.168 ************************************
00:04:39.168 09:53:24 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc
00:04:39.168 09:53:24 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:04:39.168 09:53:24 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable
00:04:39.168 09:53:24 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:04:39.168 ************************************
00:04:39.168 START TEST custom_alloc
00:04:39.168 ************************************
00:04:39.168 09:53:24 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1125 -- # custom_alloc
00:04:39.168 09:53:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=,
00:04:39.168 09:53:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node
00:04:39.168 09:53:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=()
00:04:39.168 09:53:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp
00:04:39.168 09:53:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0
00:04:39.168 09:53:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576
00:04:39.168 09:53:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576
00:04:39.168 09:53:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:04:39.168 09:53:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:39.168 09:53:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:04:39.168 09:53:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:04:39.168 09:53:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:04:39.168 09:53:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:04:39.168 09:53:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:04:39.168 09:53:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:04:39.168 09:53:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:39.168 09:53:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
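The sizing just traced is plain division: treating get_test_nr_hugepages' argument as kB (an assumption, but the one consistent with this run), size=1048576 over the 2048 kB default hugepage gives nr_hugepages=512, and the later call with size=2097152 gives 1024. Checked directly:

  default_hugepages=2048                       # kB, from 'Hugepagesize: 2048 kB'
  echo $(( 1048576 / default_hugepages ))      # 512  -> becomes nodes_hp[0]
  echo $(( 2097152 / default_hugepages ))      # 1024 -> becomes nodes_hp[1]
  echo $(( 512 + 1024 ))                       # 1536, the nr_hugepages seen later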
00:04:39.168 09:53:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:04:39.168 09:53:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:04:39.168 09:53:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:39.168 09:53:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256
00:04:39.168 09:53:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 256
00:04:39.168 09:53:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 1
00:04:39.168 09:53:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:39.168 09:53:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256
00:04:39.168 09:53:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0
00:04:39.168 09:53:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0
00:04:39.168 09:53:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:39.168 09:53:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512
00:04:39.168 09:53:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 2 > 1 ))
00:04:39.168 09:53:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152
00:04:39.168 09:53:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=2097152
00:04:39.168 09:53:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:04:39.168 09:53:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:39.168 09:53:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:04:39.168 09:53:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:04:39.168 09:53:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:04:39.168 09:53:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:04:39.168 09:53:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:04:39.168 09:53:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:04:39.168 09:53:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:39.168 09:53:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:39.168 09:53:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:04:39.168 09:53:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 ))
00:04:39.168 09:53:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}"
00:04:39.168 09:53:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512
00:04:39.168 09:53:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0
00:04:39.168 09:53:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024
00:04:39.168 09:53:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}"
00:04:39.168 09:53:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
00:04:39.168 09:53:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] ))
00:04:39.168 09:53:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}"
00:04:39.168 09:53:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
00:04:39.168 09:53:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] ))
00:04:39.168 09:53:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node
00:04:39.168 09:53:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:04:39.168 09:53:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:04:39.168 09:53:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:04:39.168 09:53:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:04:39.168 09:53:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:39.168 09:53:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:39.168 09:53:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:04:39.168 09:53:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 2 > 0 ))
00:04:39.168 09:53:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}"
00:04:39.168 09:53:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512
00:04:39.168 09:53:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}"
00:04:39.168 09:53:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024
00:04:39.168 09:53:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0
00:04:39.168 09:53:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024'
00:04:39.168 09:53:24 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output
00:04:39.169 09:53:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:04:39.169 09:53:24 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:04:40.546 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:04:40.546 0000:82:00.0 (8086 0a54): Already using the vfio-pci driver
00:04:40.546 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:04:40.546 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:04:40.546 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:04:40.546 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:04:40.546 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:04:40.546 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:04:40.546 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:04:40.546 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:04:40.546 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:04:40.546 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:04:40.546 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:04:40.546 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:04:40.546 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:04:40.546 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:04:40.546 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
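The per-node plan is handed to SPDK's scripts/setup.sh via the HUGENODE string assembled at @182/@187, and the vfio-pci lines above are setup.sh re-probing devices it already owns. An illustrative invocation under the same assumptions (not the test's exact wrapper; sudo -E is one way to keep HUGENODE in the environment, subject to the local sudoers policy):

  # Request 512 pages on node 0 and 1024 on node 1, then apply:
  HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' \
      sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
  grep HugePages_Total /proc/meminfo   # expect: HugePages_Total:    1536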
00:04:40.810 09:53:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=1536
00:04:40.810 09:53:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages
00:04:40.810 09:53:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node
00:04:40.810 09:53:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:04:40.810 09:53:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:04:40.810 09:53:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp
00:04:40.810 09:53:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv
00:04:40.810 09:53:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon
00:04:40.810 09:53:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:40.810 09:53:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:40.810 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:40.810 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:04:40.810 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:04:40.810 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:40.810 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:40.810 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:40.810 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:40.810 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:40.810 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:40.810 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:40.810 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:40.810 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026656 kB' 'MemFree: 28160708 kB' 'MemAvailable: 31742344 kB' 'Buffers: 2704 kB' 'Cached: 10199404 kB' 'SwapCached: 0 kB' 'Active: 7224080 kB' 'Inactive: 3506828 kB' 'Active(anon): 6829256 kB' 'Inactive(anon): 0 kB' 'Active(file): 394824 kB' 'Inactive(file): 3506828 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 531896 kB' 'Mapped: 180088 kB' 'Shmem: 6300456 kB' 'KReclaimable: 184044 kB' 'Slab: 539192 kB' 'SReclaimable: 184044 kB' 'SUnreclaim: 355148 kB' 'KernelStack: 13056 kB' 'PageTables: 9292 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 28829068 kB' 'Committed_AS: 7943528 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196096 kB' 'VmallocChunk: 0 kB' 'Percpu: 34944 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1752668 kB' 'DirectMap2M: 12847104 kB' 'DirectMap1G: 37748736 kB'
00:04:40.810 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:40.810 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue
[... the @31 read / @32 test / continue cycle repeats for each remaining /proc/meminfo field listed above, until AnonHugePages matches ...]
00:04:40.811 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:40.811 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:04:40.811 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:04:40.811 09:53:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0
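The get_meminfo call traced above, and twice more below, always follows the same pattern: slurp the meminfo file, strip the "Node N " prefix that per-node files carry, then scan field by field until the requested key matches. A reconstructed sketch of that parser (not the verbatim setup/common.sh; the for-loop framing and the val fallback are assumptions of this sketch):

#!/usr/bin/env bash
shopt -s extglob                          # needed for the +([0-9]) pattern below
get_meminfo() {
    local get=$1 node=${2:-}
    local var val _ line
    local mem_f=/proc/meminfo mem
    # per-node lookups read that node's own meminfo, as probed at common.sh@23
    [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")      # "Node 0 MemFree: ..." -> "MemFree: ..."
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue  # the @32 test/continue cycle in the trace
        echo "${val:-0}"
        return 0
    done
    return 1
}
# e.g. get_meminfo AnonHugePages -> 0 here; get_meminfo HugePages_Total -> 1536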
00:04:40.811 09:53:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:40.811 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:40.811 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:04:40.811 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:04:40.811 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:40.811 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:40.811 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:40.811 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:40.811 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:40.812 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:40.812 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:40.812 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:40.812 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026656 kB' 'MemFree: 28157476 kB' 'MemAvailable: 31739112 kB' 'Buffers: 2704 kB' 'Cached: 10199408 kB' 'SwapCached: 0 kB' 'Active: 7224448 kB' 'Inactive: 3506828 kB' 'Active(anon): 6829624 kB' 'Inactive(anon): 0 kB' 'Active(file): 394824 kB' 'Inactive(file): 3506828 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 532296 kB' 'Mapped: 180040 kB' 'Shmem: 6300460 kB' 'KReclaimable: 184044 kB' 'Slab: 539192 kB' 'SReclaimable: 184044 kB' 'SUnreclaim: 355148 kB' 'KernelStack: 13168 kB' 'PageTables: 10296 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 28829068 kB' 'Committed_AS: 7943548 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196176 kB' 'VmallocChunk: 0 kB' 'Percpu: 34944 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1752668 kB' 'DirectMap2M: 12847104 kB' 'DirectMap1G: 37748736 kB'
00:04:40.812 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:40.812 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue
[... the @31 read / @32 test / continue cycle repeats for each remaining /proc/meminfo field listed above, until HugePages_Surp matches ...]
00:04:40.813 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:40.813 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:04:40.813 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:04:40.813 09:53:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0
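With anon=0 and surp=0 read back, and HugePages_Rsvd about to be queried below, the arithmetic behind the verification is plain: the pool read from /proc/meminfo must match what was requested. A quick worked check using the values from the snapshots above (plain bash, not the script's own verify path):

nr_hugepages=1536                    # requested: nodes_hp[0]=512 + nodes_hp[1]=1024
total=1536 free=1536 rsvd=0 surp=0   # HugePages_* fields read back above
hugepagesize_kb=2048                 # Hugepagesize
(( total == nr_hugepages )) && echo "pool fully allocated"
(( free == total ))         && echo "no pages handed out yet"
echo "$(( total * hugepagesize_kb )) kB backed"   # 3145728 kB, matching Hugetlb above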
00:04:40.813 09:53:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:40.813 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:40.813 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:04:40.813 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:04:40.813 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:40.813 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:40.813 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:40.813 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:40.813 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:40.814 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:40.814 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:40.814 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:40.814 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026656 kB' 'MemFree: 28157700 kB' 'MemAvailable: 31739336 kB' 'Buffers: 2704 kB' 'Cached: 10199420 kB' 'SwapCached: 0 kB' 'Active: 7222276 kB' 'Inactive: 3506828 kB' 'Active(anon): 6827452 kB' 'Inactive(anon): 0 kB' 'Active(file): 394824 kB' 'Inactive(file): 3506828 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 529712 kB' 'Mapped: 180180 kB' 'Shmem: 6300472 kB' 'KReclaimable: 184044 kB' 'Slab: 538960 kB' 'SReclaimable: 184044 kB' 'SUnreclaim: 354916 kB' 'KernelStack: 12512 kB' 'PageTables: 8096 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 28829068 kB' 'Committed_AS: 7941208 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195808 kB' 'VmallocChunk: 0 kB' 'Percpu: 34944 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1752668 kB' 'DirectMap2M: 12847104 kB' 'DirectMap1G: 37748736 kB'
00:04:40.814 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:40.814 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue
[... the @31 read / @32 test / continue cycle repeats for each field after MemTotal, down through ShmemHugePages ...]
00:04:40.815 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped
== \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.815 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:40.815 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.815 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.815 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.815 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:40.815 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.815 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.815 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.815 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:40.815 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.815 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.815 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.815 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:40.815 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.815 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.815 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.815 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:40.815 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.815 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.815 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.815 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:40.815 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.815 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.815 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.815 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:40.815 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.815 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.815 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.815 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:40.815 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.815 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.815 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:40.815 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:40.816 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:40.816 09:53:25 setup.sh.hugepages.custom_alloc 
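Editor's note: the stretch above is bash xtrace from setup/common.sh's get_meminfo walking "key: value" pairs one at a time (each non-matching key produces the repeated "continue" lines) until it reaches HugePages_Rsvd, then echoing 0 and returning. A minimal sketch of that scan, reconstructed from the trace rather than copied from the SPDK sources; the helper name get_meminfo_sketch and the direct read from /proc/meminfo are assumptions:

```bash
#!/usr/bin/env bash
# Sketch of the field scan traced above: iterate "key: value" pairs,
# skip non-matching keys (the repeated "continue" lines in the log),
# and echo the value of the requested key.
get_meminfo_sketch() {           # hypothetical name; the real helper lives in setup/common.sh
    local get=$1 var val
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue
        echo "$val"              # e.g. "0" for HugePages_Rsvd, as echoed above
        return 0
    done < /proc/meminfo
    return 1
}

get_meminfo_sketch HugePages_Rsvd
```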
-- setup/hugepages.sh@100 -- # resv=0 00:04:40.816 09:53:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536 00:04:40.816 nr_hugepages=1536 00:04:40.816 09:53:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:40.816 resv_hugepages=0 00:04:40.816 09:53:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:40.816 surplus_hugepages=0 00:04:40.816 09:53:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:40.816 anon_hugepages=0 00:04:40.816 09:53:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv )) 00:04:40.816 09:53:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages )) 00:04:40.816 09:53:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:40.816 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:40.816 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:40.816 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:40.816 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:40.816 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:40.816 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:40.816 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:40.816 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:40.816 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:40.816 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.816 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.816 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026656 kB' 'MemFree: 28157688 kB' 'MemAvailable: 31739324 kB' 'Buffers: 2704 kB' 'Cached: 10199464 kB' 'SwapCached: 0 kB' 'Active: 7221744 kB' 'Inactive: 3506828 kB' 'Active(anon): 6826920 kB' 'Inactive(anon): 0 kB' 'Active(file): 394824 kB' 'Inactive(file): 3506828 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 529600 kB' 'Mapped: 180180 kB' 'Shmem: 6300516 kB' 'KReclaimable: 184044 kB' 'Slab: 538928 kB' 'SReclaimable: 184044 kB' 'SUnreclaim: 354884 kB' 'KernelStack: 12448 kB' 'PageTables: 7612 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 28829068 kB' 'Committed_AS: 7941228 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195792 kB' 'VmallocChunk: 0 kB' 'Percpu: 34944 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1752668 kB' 'DirectMap2M: 12847104 kB' 'DirectMap1G: 37748736 kB' 00:04:40.816 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:04:40.816 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:40.816 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.816 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.816 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.816 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:40.816 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.816 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.816 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.816 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:40.816 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.816 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.816 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.816 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:40.816 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.816 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.816 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.816 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:40.816 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.816 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.816 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.816 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:40.816 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.816 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.816 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.816 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:40.816 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.816 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.816 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.816 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:40.816 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.816 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.816 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.816 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:40.816 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.816 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:40.816 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.816 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:40.816 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.816 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.816 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.816 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:40.816 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.816 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.816 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.816 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:40.816 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.816 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.816 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.816 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:40.816 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.816 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.816 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.816 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:40.816 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.816 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.816 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.816 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:40.816 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.816 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.816 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.816 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:40.816 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.816 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.816 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.816 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:40.816 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.816 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.816 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.816 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:40.816 09:53:25 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.816 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.816 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.816 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:40.816 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.816 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.816 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.816 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:40.816 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.816 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.816 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.816 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:40.816 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.816 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.816 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.816 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:40.817 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.817 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.817 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.817 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:40.817 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.817 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.817 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.817 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:40.817 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.817 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.817 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.817 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:40.817 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.817 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.817 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.817 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:40.817 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.817 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.817 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.817 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:40.817 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.817 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.817 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.817 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:40.817 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.817 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.817 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.817 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:40.817 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.817 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.817 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.817 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:40.817 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.817 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.817 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.817 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:40.817 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.817 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.817 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.817 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:40.817 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.817 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.817 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.817 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:40.817 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.817 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.817 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.817 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:40.817 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.817 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.817 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.817 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:40.817 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.817 09:53:25 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.817 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.817 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:40.817 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.817 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.817 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.817 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:40.817 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.817 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.817 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.817 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:40.817 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.817 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.817 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.817 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:40.817 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.817 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.817 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.817 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:40.817 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.817 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.817 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.817 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:40.817 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.817 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.817 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.817 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:40.817 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.817 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.817 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.817 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:40.817 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.817 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.817 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.817 09:53:25 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:40.817 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.817 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.817 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.817 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:40.817 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.817 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.817 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.817 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:40.817 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.817 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.817 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.817 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:40.817 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.817 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.817 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.817 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:40.817 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:40.817 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:40.817 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:40.817 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536 00:04:40.817 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:40.817 09:53:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:04:40.817 09:53:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:40.817 09:53:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:04:40.817 09:53:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:40.817 09:53:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:40.817 09:53:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:40.817 09:53:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:40.817 09:53:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:40.817 09:53:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:40.817 09:53:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:40.817 09:53:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:41.079 09:53:25 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # 
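Editor's note: within the line above, get_nodes globs /sys/devices/system/node/node+([0-9]), records nodes_sys[0]=512 and nodes_sys[1]=1024, sets no_nodes=2, and the hugepages.sh@107/@109 gates reduce to plain arithmetic. A hedged sketch under those assumptions; reading the per-node count from the sysfs nr_hugepages file is my guess at the data source, not something the log confirms:

```bash
#!/usr/bin/env bash
shopt -s extglob    # needed for the node+([0-9]) glob used in the trace

# Sketch of the get_nodes step: enumerate NUMA nodes from sysfs and record a
# per-node 2 MiB hugepage count. nodes_sys and the 512/1024 split come from
# the log; reading nr_hugepages from sysfs is an assumed data source.
declare -A nodes_sys
for node in /sys/devices/system/node/node+([0-9]); do
    [[ -d $node ]] || continue
    nodes_sys[${node##*node}]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
done
echo "no_nodes=${#nodes_sys[@]}"    # 2 in the run above

# The @107 gate is plain arithmetic: with surp=0 and resv=0,
# 1536 == 512 + 1024 + 0 + 0 holds, so the custom allocation is accepted.
(( 1536 == 512 + 1024 + 0 + 0 )) && echo "accounting holds"
```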
get_meminfo HugePages_Surp 0 00:04:41.079 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:41.079 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:04:41.079 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:41.079 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:41.079 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:41.079 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:41.079 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:41.079 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:41.079 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:41.079 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.079 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.079 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 24619412 kB' 'MemFree: 13481116 kB' 'MemUsed: 11138296 kB' 'SwapCached: 0 kB' 'Active: 5845216 kB' 'Inactive: 3329964 kB' 'Active(anon): 5586328 kB' 'Inactive(anon): 0 kB' 'Active(file): 258888 kB' 'Inactive(file): 3329964 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8872376 kB' 'Mapped: 79512 kB' 'AnonPages: 305976 kB' 'Shmem: 5283524 kB' 'KernelStack: 6824 kB' 'PageTables: 3944 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 118432 kB' 'Slab: 297088 kB' 'SReclaimable: 118432 kB' 'SUnreclaim: 178656 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:41.079 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.079 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.079 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.079 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.079 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.079 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.079 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.079 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.079 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.079 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.079 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.079 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.079 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.079 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.079 09:53:25 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.079 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.079 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.079 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.079 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.079 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.079 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.079 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.079 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.079 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.079 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.079 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.079 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.079 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.079 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.079 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.079 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.079 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.079 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.079 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.079 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.079 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.079 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.079 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.079 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.079 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.079 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.079 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.079 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.079 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.079 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.079 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.079 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.079 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.079 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.079 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.079 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.079 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.079 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.079 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.079 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.079 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.079 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.079 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.079 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.079 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.079 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.079 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.079 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.079 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.079 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.079 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.079 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.079 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.079 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.079 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.079 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.079 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.079 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.079 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.079 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.079 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.079 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.080 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.080 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.080 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.080 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.080 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.080 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.080 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:04:41.080 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.080 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.080 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.080 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.080 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.080 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.080 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.080 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.080 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.080 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.080 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.080 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.080 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.080 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.080 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.080 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.080 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.080 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.080 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.080 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.080 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.080 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.080 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.080 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.080 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.080 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.080 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.080 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.080 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.080 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.080 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.080 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.080 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.080 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.080 09:53:25 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.080 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.080 09:53:25 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.080 09:53:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.080 09:53:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.080 09:53:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.080 09:53:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.080 09:53:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.080 09:53:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.080 09:53:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.080 09:53:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.080 09:53:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.080 09:53:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.080 09:53:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.080 09:53:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.080 09:53:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.080 09:53:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.080 09:53:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.080 09:53:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.080 09:53:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.080 09:53:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.080 09:53:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.080 09:53:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.080 09:53:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.080 09:53:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.080 09:53:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.080 09:53:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.080 09:53:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:41.080 09:53:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:41.080 09:53:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:41.080 09:53:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:41.080 09:53:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:41.080 09:53:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:41.080 09:53:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:41.080 09:53:26 
setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1 00:04:41.080 09:53:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:41.080 09:53:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:41.080 09:53:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:41.080 09:53:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:41.080 09:53:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:41.080 09:53:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:41.080 09:53:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:41.080 09:53:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.080 09:53:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.080 09:53:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 19407244 kB' 'MemFree: 14677972 kB' 'MemUsed: 4729272 kB' 'SwapCached: 0 kB' 'Active: 1376896 kB' 'Inactive: 176864 kB' 'Active(anon): 1240960 kB' 'Inactive(anon): 0 kB' 'Active(file): 135936 kB' 'Inactive(file): 176864 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1329796 kB' 'Mapped: 100668 kB' 'AnonPages: 224024 kB' 'Shmem: 1016996 kB' 'KernelStack: 5640 kB' 'PageTables: 3720 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 65612 kB' 'Slab: 241840 kB' 'SReclaimable: 65612 kB' 'SUnreclaim: 176228 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:41.080 09:53:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.080 09:53:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.080 09:53:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.080 09:53:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.080 09:53:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.080 09:53:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.080 09:53:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.080 09:53:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.080 09:53:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.080 09:53:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.080 09:53:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.080 09:53:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:41.080 09:53:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:41.080 09:53:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:41.080 09:53:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:41.080 09:53:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:41.081 [... xtrace elided: get_meminfo steps through each remaining meminfo field (Active, Inactive, Active(anon), Inactive(file), Unevictable, Mlocked, Dirty, Writeback, FilePages, Mapped, AnonPages, Shmem, KernelStack, PageTables, ..., HugePages_Total, HugePages_Free), one "continue" per non-match, until HugePages_Surp is reached ...]
00:04:41.082 09:53:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:41.082 09:53:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:04:41.082 09:53:26 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:04:41.082 09:53:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:41.082 09:53:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:41.082 09:53:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:41.082 09:53:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:41.082 09:53:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:04:41.082 node0=512 expecting 512
00:04:41.082 09:53:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
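The scan traced above is the get_meminfo helper from setup/common.sh: it reads the meminfo source line by line, skips every field that is not the one requested (HugePages_Surp here), and echoes the matching value. A minimal stand-alone sketch of that pattern, assuming plain /proc/meminfo input (the helper name is illustrative; the real SPDK function also handles per-node sysfs files):

get_meminfo_sketch() {                     # illustrative name, not the SPDK helper itself
    local get=$1 var val _
    while IFS=': ' read -r var val _; do   # "MemTotal:  44026656 kB" -> var=MemTotal val=44026656
        [[ $var == "$get" ]] || continue   # each miss is one "continue" in the xtrace
        echo "$val"
        return 0
    done </proc/meminfo
    return 1
}

get_meminfo_sketch HugePages_Surp          # prints 0 on this builder, per the meminfo dumps later in the log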
00:04:41.082 09:53:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:41.082 09:53:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:41.082 09:53:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024'
00:04:41.082 node1=1024 expecting 1024
00:04:41.082 09:53:26 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]]
00:04:41.082 
00:04:41.082 real	0m1.857s
00:04:41.082 user	0m0.797s
00:04:41.082 sys	0m1.042s
00:04:41.082 09:53:26 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable
00:04:41.082 09:53:26 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x
00:04:41.082 ************************************
00:04:41.082 END TEST custom_alloc
00:04:41.082 ************************************
00:04:41.082 09:53:26 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc
00:04:41.082 09:53:26 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:04:41.082 09:53:26 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable
00:04:41.082 09:53:26 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:04:41.082 ************************************
00:04:41.082 START TEST no_shrink_alloc
00:04:41.082 ************************************
00:04:41.082 09:53:26 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1125 -- # no_shrink_alloc
00:04:41.082 09:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0
00:04:41.082 09:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152
00:04:41.082 09:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:04:41.082 09:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift
00:04:41.082 09:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0')
00:04:41.082 09:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids
00:04:41.082 09:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:41.082 09:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:04:41.082 09:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:04:41.082 09:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:04:41.082 09:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:04:41.082 09:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:04:41.082 09:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:04:41.082 09:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:41.082 09:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:41.082 09:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:04:41.082 09:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:04:41.082 09:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:04:41.082 09:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0
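Two things worth pulling out of the trace above. The '[[ 512,1024 == \5\1\2\,\1\0\2\4 ]]' check passes because backslash-escaping every character inside [[ ]] forces a literal (non-glob) comparison of the sorted per-node tallies against the expected string. And get_test_nr_hugepages turns the requested 2097152 kB into nr_hugepages=1024 pinned to node 0; a sketch of that arithmetic, under the assumption (consistent with 'Hugepagesize: 2048 kB' in the meminfo dumps below) that default_hugepages is the system hugepage size in kB:

size=2097152                                   # requested total in kB (2 GiB), per the trace
default_hugepages=2048                         # Hugepagesize in kB (assumption: taken from meminfo)
nr_hugepages=$(( size / default_hugepages ))   # -> 1024, matching nr_hugepages=1024 above
declare -A nodes_test=()
for node in 0; do                              # user_nodes=('0') in the trace
    nodes_test[$node]=$nr_hugepages            # all 1024 pages requested on node 0
done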
00:04:41.082 09:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output
00:04:41.082 09:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:04:41.082 09:53:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:04:42.460 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:04:42.460 0000:82:00.0 (8086 0a54): Already using the vfio-pci driver
00:04:42.460 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:04:42.460 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:04:42.460 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:04:42.460 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:04:42.460 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:04:42.460 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:04:42.460 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:04:42.460 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:04:42.460 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:04:42.460 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:04:42.460 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:04:42.460 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:04:42.460 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:04:42.460 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:04:42.460 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
00:04:42.725 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages
00:04:42.725 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node
00:04:42.725 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:04:42.725 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:04:42.725 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp
00:04:42.725 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv
00:04:42.725 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon
00:04:42.725 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:42.725 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:42.725 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:42.725 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:42.725 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:42.725 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:42.725 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:42.725 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:42.725 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:42.725 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:42.726 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
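verify_nr_hugepages opens with two guards visible in the trace above: a transparent-hugepage check against /sys/kernel/mm/transparent_hugepage/enabled (the '[[ always [madvise] never != *\[\n\e\v\e\r\]* ]]' test), and a choice of meminfo source; since no node argument is given here, the '-e /sys/devices/system/node/node/meminfo' test fails and the helper falls back to /proc/meminfo. A sketch of both guards, with illustrative helper names (the sysfs paths themselves are real kernel interfaces):

thp_active() {
    # enabled reads e.g. "always [madvise] never"; the bracketed word is the current mode,
    # so THP is off only when "[never]" is the bracketed entry.
    [[ $(</sys/kernel/mm/transparent_hugepage/enabled) != *"[never]"* ]]
}

meminfo_source() {
    local node=${1:-}
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        echo "/sys/devices/system/node/node$node/meminfo"   # per-node statistics
    else
        echo /proc/meminfo                                  # system-wide fallback, taken in this trace
    fi
}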
00:04:42.726 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:42.726 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026656 kB' 'MemFree: 29223196 kB' 'MemAvailable: 32804832 kB' 'Buffers: 2704 kB' 'Cached: 10199532 kB' 'SwapCached: 0 kB' 'Active: 7222592 kB' 'Inactive: 3506828 kB' 'Active(anon): 6827768 kB' 'Inactive(anon): 0 kB' 'Active(file): 394824 kB' 'Inactive(file): 3506828 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 529980 kB' 'Mapped: 180264 kB' 'Shmem: 6300584 kB' 'KReclaimable: 184044 kB' 'Slab: 538720 kB' 'SReclaimable: 184044 kB' 'SUnreclaim: 354676 kB' 'KernelStack: 12464 kB' 'PageTables: 7664 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353356 kB' 'Committed_AS: 7941428 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195840 kB' 'VmallocChunk: 0 kB' 'Percpu: 34944 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1752668 kB' 'DirectMap2M: 12847104 kB' 'DirectMap1G: 37748736 kB'
00:04:42.726 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:42.726 [... xtrace elided: get_meminfo scans field by field from MemTotal through HardwareCorrupted, one "continue" per non-match, until AnonHugePages matches ...]
00:04:42.727 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:42.727 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:42.727 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:42.727 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0
00:04:42.727 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:42.727 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:42.727 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:42.727 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:42.727 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:42.727 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:42.727 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:42.727 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:42.727 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:42.727 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:42.727 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:42.727 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:42.727 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026656 kB' 'MemFree: 29222948 kB' 'MemAvailable: 32804584 kB' 'Buffers: 2704 kB' 'Cached: 10199536 kB' 'SwapCached: 0 kB' 'Active: 7222472 kB' 'Inactive: 3506828 kB' 'Active(anon): 6827648 kB' 'Inactive(anon): 0 kB' 'Active(file): 394824 kB' 'Inactive(file): 3506828 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 530284 kB' 'Mapped: 180640 kB' 'Shmem: 6300588 kB' 'KReclaimable: 184044 kB' 'Slab: 538732 kB' 'SReclaimable: 184044 kB' 'SUnreclaim: 354688 kB' 'KernelStack: 12464 kB' 'PageTables: 7612 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353356 kB' 'Committed_AS: 7942668 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195776 kB' 'VmallocChunk: 0 kB' 'Percpu: 34944 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1752668 kB' 'DirectMap2M: 12847104 kB' 'DirectMap1G: 37748736 kB'
00:04:42.727 [... xtrace elided: the same field-by-field scan repeats, one "continue" per non-match, until HugePages_Surp is reached ...]
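One detail of the parsing setup traced above: the helper loads the whole file with 'mapfile -t mem' and then strips any per-node prefix with the extglob expansion 'mem=("${mem[@]#Node +([0-9]) }")', so lines like 'Node 0 MemTotal: ...' from the sysfs files parse identically to /proc/meminfo lines. A stand-alone reproduction of that trick (assumes bash with a NUMA node0 present; harmless no-op against /proc/meminfo-style input):

shopt -s extglob                                   # enables the +([0-9]) pattern below
mapfile -t mem </sys/devices/system/node/node0/meminfo
mem=("${mem[@]#Node +([0-9]) }")                   # "Node 0 MemTotal: ..." -> "MemTotal: ..."
printf '%s\n' "${mem[@]}" | head -n 3              # spot-check the stripped lines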
00:04:42.729 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:42.729 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:42.729 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:42.729 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
00:04:42.729 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:42.729 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:42.729 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:42.729 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:42.729 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:42.729 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:42.729 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:42.729 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:42.729 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:42.729 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:42.729 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:42.729 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:42.729 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026656 kB' 'MemFree: 29219168 kB' 'MemAvailable: 32800804 kB' 'Buffers: 2704 kB' 'Cached: 10199556 kB' 'SwapCached: 0 kB' 'Active: 7226240 kB' 'Inactive: 3506828 kB' 'Active(anon): 6831416 kB' 'Inactive(anon): 0 kB' 'Active(file): 394824 kB' 'Inactive(file): 3506828 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 534052 kB' 'Mapped: 180640 kB' 'Shmem: 6300608 kB' 'KReclaimable: 184044 kB' 'Slab: 538732 kB' 'SReclaimable: 184044 kB' 'SUnreclaim: 354688 kB' 'KernelStack: 12480 kB' 'PageTables: 7664 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353356 kB' 'Committed_AS: 7946256 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195760 kB' 'VmallocChunk: 0 kB' 'Percpu: 34944 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1752668 kB' 'DirectMap2M: 12847104 kB' 'DirectMap1G: 37748736 kB'
00:04:42.730 [... xtrace elided: the HugePages_Rsvd scan proceeds field by field and continues past the end of this excerpt ...]
setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.730 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.730 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.730 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.730 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.730 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.730 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.730 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.730 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.730 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.730 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.730 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.730 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.730 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.730 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.730 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.730 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.730 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.730 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.730 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.730 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.730 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.730 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.730 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.730 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.730 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.730 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.730 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.730 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.730 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.730 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.730 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.730 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.730 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.730 09:53:27 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.730 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.730 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.730 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.730 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.730 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.730 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.730 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.730 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.730 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.730 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.730 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.730 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.730 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.730 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.730 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.730 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.730 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.730 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.730 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.730 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.730 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.730 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.730 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.730 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.730 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.730 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.730 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.730 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.730 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.730 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.730 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.730 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.730 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.731 09:53:27 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.731 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.731 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.731 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.731 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.731 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.731 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.731 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.731 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.731 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.731 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.731 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.731 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.731 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.731 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.731 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.731 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.731 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.731 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.731 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.731 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.731 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.731 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.731 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.731 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.731 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.731 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.731 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.731 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.731 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.731 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.731 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.731 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.731 09:53:27 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:04:42.731 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.731 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.731 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.731 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.731 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.731 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.731 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.731 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.731 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.731 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.731 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.731 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.731 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.731 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.731 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.731 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.731 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.731 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.731 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.731 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.731 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.731 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.731 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.731 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.731 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.731 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.731 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.731 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.731 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.731 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.731 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.731 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.731 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.731 09:53:27 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.731 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.731 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.731 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.731 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.731 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.731 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.731 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.731 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.731 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.731 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.731 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.731 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.731 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.731 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.731 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.731 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.731 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.731 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.731 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.731 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.731 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.731 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.731 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.731 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.731 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.731 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:42.731 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:42.731 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.731 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:42.731 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:42.731 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:42.731 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:42.731 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo 
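The trace above is easier to follow as source. Below is a minimal bash sketch of setup/common.sh's get_meminfo() reconstructed from the traced statements; it is not the verbatim SPDK source, and the for-loop wiring plus the two usage lines at the end are illustrative:

    #!/usr/bin/env bash
    shopt -s extglob   # needed for the +([0-9]) pattern below

    get_meminfo() {
        local get=$1 node=$2   # field name, optional NUMA node number
        local var val _ mem_f mem line
        mem_f=/proc/meminfo
        # With a node argument, prefer the per-node meminfo file; with node
        # empty, node$node/meminfo does not exist and /proc/meminfo is kept.
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        # Per-node files prefix every line with "Node <n> "; strip it.
        mem=("${mem[@]#Node +([0-9]) }")
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            if [[ $var == "$get" ]]; then
                echo "$val"   # kB for sizes, a bare page count for HugePages_*
                return 0
            fi
        done
        return 1
    }

    get_meminfo HugePages_Rsvd     # system-wide; returns 0 in this run
    get_meminfo HugePages_Surp 0   # NUMA node 0; returns 0 in this run

The scan is linear over the snapshot, which is why the trace shows one read/continue pair per meminfo field until the requested key is hit.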
00:04:42.731 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:04:42.731 nr_hugepages=1024
00:04:42.731 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:42.732 resv_hugepages=0
00:04:42.732 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:42.732 surplus_hugepages=0
00:04:42.732 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:42.732 anon_hugepages=0
00:04:42.732 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:42.732 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:04:42.732 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:42.732 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:42.732 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:42.732 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:42.732 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:42.732 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:42.732 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:42.732 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:42.732 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:42.732 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:42.732 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:42.732 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:42.732 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026656 kB' 'MemFree: 29215500 kB' 'MemAvailable: 32797136 kB' 'Buffers: 2704 kB' 'Cached: 10199576 kB' 'SwapCached: 0 kB' 'Active: 7227912 kB' 'Inactive: 3506828 kB' 'Active(anon): 6833088 kB' 'Inactive(anon): 0 kB' 'Active(file): 394824 kB' 'Inactive(file): 3506828 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 535692 kB' 'Mapped: 180984 kB' 'Shmem: 6300628 kB' 'KReclaimable: 184044 kB' 'Slab: 538728 kB' 'SReclaimable: 184044 kB' 'SUnreclaim: 354684 kB' 'KernelStack: 12464 kB' 'PageTables: 7640 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353356 kB' 'Committed_AS: 7947608 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195764 kB' 'VmallocChunk: 0 kB' 'Percpu: 34944 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1752668 kB' 'DirectMap2M: 12847104 kB' 'DirectMap1G: 37748736 kB'
[... the setup/common.sh@31/@32 read/continue cycle walks every field of the snapshot above in order until it reaches HugePages_Total ...]
00:04:42.733 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:42.733 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024
00:04:42.733 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:42.733 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:42.733 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:04:42.733 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node
00:04:42.734 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:42.734 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:04:42.734 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:42.734 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
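The @99-@110 statements above amount to a consistency check over those lookups. A sketch of that flow, reusing the get_meminfo sketch earlier; the function name is hypothetical, and the assumption is that the free count compared at @107 (1024 here) was fetched with get_meminfo HugePages_Free before this excerpt:

    # Verify that hugepage accounting matches the requested allocation.
    check_hugepage_accounting() {
        local nr_hugepages=$1
        local free surp resv
        free=$(get_meminfo HugePages_Free)   # assumed fetched earlier in the script
        surp=$(get_meminfo HugePages_Surp)   # -> 0 in this run
        resv=$(get_meminfo HugePages_Rsvd)   # -> 0 in this run
        echo "nr_hugepages=$nr_hugepages"
        echo "resv_hugepages=$resv"
        echo "surplus_hugepages=$surp"
        # Free pages must account for the requested, surplus and reserved ones...
        (( free == nr_hugepages + surp + resv ))
        # ...and so must the kernel-wide total.
        (( $(get_meminfo HugePages_Total) == nr_hugepages + surp + resv ))
    }

    check_hugepage_accounting 1024   # all terms are 1024/0/0 in this run

With 1024 pages allocated and neither surplus nor reserved pages present, both arithmetic checks pass, which is why the trace simply moves on to get_nodes.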
00:04:42.734 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:04:42.734 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:42.734 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:42.734 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:42.734 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:42.734 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:42.734 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0
00:04:42.734 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:42.734 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:42.734 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:42.734 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:42.734 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:42.734 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:42.734 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:42.734 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:42.734 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:42.734 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 24619412 kB' 'MemFree: 12437328 kB' 'MemUsed: 12182084 kB' 'SwapCached: 0 kB' 'Active: 5846756 kB' 'Inactive: 3329964 kB' 'Active(anon): 5587868 kB' 'Inactive(anon): 0 kB' 'Active(file): 258888 kB' 'Inactive(file): 3329964 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8872432 kB' 'Mapped: 79512 kB' 'AnonPages: 307424 kB' 'Shmem: 5283580 kB' 'KernelStack: 6760 kB' 'PageTables: 3744 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 118432 kB' 'Slab: 296924 kB' 'SReclaimable: 118432 kB' 'SUnreclaim: 178492 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
[... the setup/common.sh@31/@32 read/continue cycle walks every field of the node0 snapshot above in order until it reaches HugePages_Surp ...]
00:04:42.735 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:42.735 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:42.735 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:42.735 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:42.735 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:42.735 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:42.735 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:42.735 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:42.735 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:42.735 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:42.735 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:42.735 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:42.735 node0=1024 expecting 1024 00:04:42.735 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:42.735 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:04:42.735 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:04:42.735 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:04:42.735 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:42.735 09:53:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:44.112 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:44.112 0000:82:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:44.112 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:44.112 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:44.112 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:44.112 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:44.112 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:44.112 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:44.112 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:44.112 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:44.112 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:44.112 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:44.112 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:44.113 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:44.113 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:44.113 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:44.375 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:44.375 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:04:44.375 09:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:04:44.375 09:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:04:44.375 09:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:44.376 09:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:44.376 09:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:44.376 09:53:29 
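For anyone following the trace: the compare/continue churn above is setup/common.sh's get_meminfo() walking meminfo key/value pairs one colon-separated read at a time until the requested key matches. Below is a minimal sketch of that flow, reconstructed from the set -x output rather than copied from the script, so treat the exact details as approximate:

    #!/usr/bin/env bash
    shopt -s extglob  # needed for the +([0-9]) pattern below

    get_meminfo() {
        local get=$1 node=${2:-}
        local var val
        local mem_f mem

        mem_f=/proc/meminfo
        # Per-node queries read that node's own meminfo file instead.
        if [[ -e /sys/devices/system/node/node$node/meminfo && -n $node ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi

        mapfile -t mem < "$mem_f"
        # Node files prefix every line with "Node <n> "; strip that prefix.
        mem=("${mem[@]#Node +([0-9]) }")

        # Walk the keys until the requested one matches, then emit its value.
        while IFS=': ' read -r var val _; do
            if [[ $var == "$get" ]]; then
                echo "$val"
                return 0
            fi
        done < <(printf '%s\n' "${mem[@]}")
    }

    get_meminfo HugePages_Surp     # -> 0 on this host
    get_meminfo HugePages_Total 0  # -> per-node total, 1024 in this run

The linear scan is why the log shows one IFS/read/compare/continue quartet per meminfo key: the loop only returns once the wanted key is reached, which for HugePages_* keys is near the end of the file.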
00:04:44.375 09:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages
00:04:44.375 09:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node
00:04:44.375 09:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:04:44.376 09:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:04:44.376 09:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp
00:04:44.376 09:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv
00:04:44.376 09:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon
00:04:44.376 09:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:44.376 09:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:44.376 09:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:44.376 09:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:44.376 09:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:44.376 09:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:44.376 09:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:44.376 09:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:44.376 09:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:44.376 09:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:44.376 09:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:44.376 09:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:44.376 09:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:44.376 09:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026656 kB' 'MemFree: 29218708 kB' 'MemAvailable: 32800344 kB' 'Buffers: 2704 kB' 'Cached: 10199652 kB' 'SwapCached: 0 kB' 'Active: 7222660 kB' 'Inactive: 3506828 kB' 'Active(anon): 6827836 kB' 'Inactive(anon): 0 kB' 'Active(file): 394824 kB' 'Inactive(file): 3506828 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 530304 kB' 'Mapped: 180300 kB' 'Shmem: 6300704 kB' 'KReclaimable: 184044 kB' 'Slab: 538572 kB' 'SReclaimable: 184044 kB' 'SUnreclaim: 354528 kB' 'KernelStack: 12496 kB' 'PageTables: 7732 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353356 kB' 'Committed_AS: 7941676 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195824 kB' 'VmallocChunk: 0 kB' 'Percpu: 34944 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1752668 kB' 'DirectMap2M: 12847104 kB' 'DirectMap1G: 37748736 kB'
00:04:44.376 09:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:44.376 09:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
[... identical IFS/read/compare/continue trace elided for the remaining keys ahead of the match (MemFree through HardwareCorrupted) ...]
00:04:44.377 09:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:44.377 09:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:44.377 09:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:44.377 09:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0
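The [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] entry just before the AnonHugePages query is the transparent-hugepage gate: the bracketed token in that sysfs file is the active mode, so the test is false only when THP is hard-disabled. A hedged sketch of that gate, assuming the standard sysfs path and the get_meminfo sketch shown earlier:

    # Active THP mode is the bracketed token, e.g. "always [madvise] never"
    # means madvise is currently selected.
    thp=$(</sys/kernel/mm/transparent_hugepage/enabled)

    anon=0
    if [[ $thp != *'[never]'* ]]; then
        # THP can hand out anonymous huge pages, so count them as well;
        # on this host the dump shows AnonHugePages: 0 kB, hence anon=0.
        anon=$(get_meminfo AnonHugePages)
    fi

Skipping the query when THP is off avoids counting pages that could never have come from the transparent-hugepage pool in the first place.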
00:04:44.377 09:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:44.377 09:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:44.377 09:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:44.377 09:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:44.377 09:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:44.377 09:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:44.377 09:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:44.377 09:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:44.377 09:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:44.377 09:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:44.377 09:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:44.377 09:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:44.377 09:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026656 kB' 'MemFree: 29218708 kB' 'MemAvailable: 32800344 kB' 'Buffers: 2704 kB' 'Cached: 10199652 kB' 'SwapCached: 0 kB' 'Active: 7222516 kB' 'Inactive: 3506828 kB' 'Active(anon): 6827692 kB' 'Inactive(anon): 0 kB' 'Active(file): 394824 kB' 'Inactive(file): 3506828 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 530188 kB' 'Mapped: 180292 kB' 'Shmem: 6300704 kB' 'KReclaimable: 184044 kB' 'Slab: 538568 kB' 'SReclaimable: 184044 kB' 'SUnreclaim: 354524 kB' 'KernelStack: 12480 kB' 'PageTables: 7668 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353356 kB' 'Committed_AS: 7941692 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195792 kB' 'VmallocChunk: 0 kB' 'Percpu: 34944 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1752668 kB' 'DirectMap2M: 12847104 kB' 'DirectMap1G: 37748736 kB'
00:04:44.377 09:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:44.377 09:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
[... identical IFS/read/compare/continue trace elided for the remaining keys ahead of the match (MemFree through HugePages_Rsvd) ...]
00:04:44.379 09:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:44.379 09:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:44.379 09:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:44.379 09:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
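With anon and surp both 0, verify_nr_hugepages fetches HugePages_Rsvd next and then replays the per-node comparison whose output ("node0=1024 expecting 1024") appeared earlier in this log. A rough, self-contained sketch of that bookkeeping, with array names taken from the hugepages.sh trace lines and the surrounding arithmetic inferred rather than copied:

    #!/usr/bin/env bash
    # Sample values matching this run: one node, 1024 pages seen and expected.
    nodes_test=([0]=1024)  # observed per-node hugepage totals
    nodes_sys=([0]=1024)   # totals the test primed each node with
    sorted_t=() sorted_s=()
    surp=0                 # HugePages_Surp, per the trace above

    for node in "${!nodes_test[@]}"; do
        (( nodes_test[node] += surp ))   # hugepages.sh@117: fold surplus in
        sorted_t[nodes_test[node]]=1     # @127: bucket observed totals...
        sorted_s[nodes_sys[node]]=1      # ...and expected totals
        echo "node$node=${nodes_test[node]} expecting ${nodes_sys[node]}"
        [[ ${nodes_test[node]} == "${nodes_sys[node]}" ]]  # @130 gate
    done

Under errexit, the [[ ... ]] comparison fails the run as soon as a node's observed total drifts from the expected one, which is exactly what the 1024 == 1024 check earlier in the log is guarding.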
'Inactive: 3506828 kB' 'Active(anon): 6827820 kB' 'Inactive(anon): 0 kB' 'Active(file): 394824 kB' 'Inactive(file): 3506828 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 530256 kB' 'Mapped: 180216 kB' 'Shmem: 6300728 kB' 'KReclaimable: 184044 kB' 'Slab: 538568 kB' 'SReclaimable: 184044 kB' 'SUnreclaim: 354524 kB' 'KernelStack: 12480 kB' 'PageTables: 7672 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353356 kB' 'Committed_AS: 7941716 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195776 kB' 'VmallocChunk: 0 kB' 'Percpu: 34944 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1752668 kB' 'DirectMap2M: 12847104 kB' 'DirectMap1G: 37748736 kB' 00:04:44.380 09:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.380 09:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:44.380 09:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.380 09:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.380 09:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.380 09:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:44.380 09:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.380 09:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.380 09:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.380 09:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:44.380 09:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.380 09:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.380 09:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.380 09:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:44.380 09:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.380 09:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.380 09:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.380 09:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:44.380 09:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.380 09:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.380 09:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.380 09:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:44.380 09:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.380 09:53:29 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
[xtrace elided: the setup/common.sh@31-32 read loop repeats identically for each remaining /proc/meminfo key (Active through Unaccepted), matching none against HugePages_Rsvd and issuing 'continue']
00:04:44.382 09:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 --
# read -r var val _ 00:04:44.382 09:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.382 09:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:44.382 09:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.382 09:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.382 09:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.382 09:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:44.382 09:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.382 09:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.382 09:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.382 09:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:44.382 09:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:44.382 09:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:44.382 09:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:44.382 nr_hugepages=1024 00:04:44.382 09:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:44.382 resv_hugepages=0 00:04:44.382 09:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:44.382 surplus_hugepages=0 00:04:44.382 09:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:44.382 anon_hugepages=0 00:04:44.382 09:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:44.382 09:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:44.382 09:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:44.382 09:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:44.382 09:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:44.382 09:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:44.382 09:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:44.382 09:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:44.382 09:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:44.382 09:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:44.382 09:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:44.382 09:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:44.382 09:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.382 09:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.382 09:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 44026656 kB' 'MemFree: 29219984 kB' 'MemAvailable: 32801620 kB' 'Buffers: 2704 kB' 
'Cached: 10199716 kB' 'SwapCached: 0 kB' 'Active: 7222272 kB' 'Inactive: 3506828 kB' 'Active(anon): 6827448 kB' 'Inactive(anon): 0 kB' 'Active(file): 394824 kB' 'Inactive(file): 3506828 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 529848 kB' 'Mapped: 180216 kB' 'Shmem: 6300768 kB' 'KReclaimable: 184044 kB' 'Slab: 538568 kB' 'SReclaimable: 184044 kB' 'SUnreclaim: 354524 kB' 'KernelStack: 12464 kB' 'PageTables: 7620 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 29353356 kB' 'Committed_AS: 7941736 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195776 kB' 'VmallocChunk: 0 kB' 'Percpu: 34944 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1752668 kB' 'DirectMap2M: 12847104 kB' 'DirectMap1G: 37748736 kB' 00:04:44.382 09:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.382 09:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:44.382 09:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.382 09:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.382 09:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.382 09:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:44.382 09:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.382 09:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.382 09:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.382 09:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:44.382 09:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.382 09:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.382 09:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.382 09:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:44.382 09:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.382 09:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.382 09:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.382 09:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:44.382 09:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.382 09:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.382 09:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.382 09:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:44.382 09:53:29 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
[xtrace elided: the setup/common.sh@31-32 read loop repeats identically for each /proc/meminfo key (Active through CmaFree), matching none against HugePages_Total and issuing 'continue']
00:04:44.643 09:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.643 09:53:29 setup.sh.hugepages.no_shrink_alloc --
setup/common.sh@32 -- # continue 00:04:44.643 09:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.643 09:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.643 09:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:44.643 09:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:04:44.643 09:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:44.643 09:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:44.643 09:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:44.644 09:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:04:44.644 09:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:44.644 09:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:44.644 09:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:44.644 09:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:44.644 09:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:44.644 09:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:44.644 09:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:44.644 09:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:44.644 09:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:44.644 09:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:44.644 09:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:04:44.644 09:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:44.644 09:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:44.644 09:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:44.644 09:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:44.644 09:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:44.644 09:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:44.644 09:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:44.644 09:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.644 09:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.644 09:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 24619412 kB' 'MemFree: 12436520 kB' 'MemUsed: 12182892 kB' 'SwapCached: 0 kB' 'Active: 5845256 kB' 'Inactive: 3329964 kB' 'Active(anon): 5586368 kB' 'Inactive(anon): 0 kB' 'Active(file): 258888 kB' 'Inactive(file): 3329964 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8872436 
kB' 'Mapped: 79512 kB' 'AnonPages: 305908 kB' 'Shmem: 5283584 kB' 'KernelStack: 6808 kB' 'PageTables: 3940 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 118432 kB' 'Slab: 296884 kB' 'SReclaimable: 118432 kB' 'SUnreclaim: 178452 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:44.644 09:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.644 09:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:44.644 09:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.644 09:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.644 09:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.644 09:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:44.644 09:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.644 09:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.644 09:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.644 09:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:44.644 09:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.644 09:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.644 09:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.644 09:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:44.644 09:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.644 09:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.644 09:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.644 09:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:44.644 09:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.644 09:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.644 09:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.644 09:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:44.644 09:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.644 09:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.644 09:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.644 09:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:44.644 09:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.644 09:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.644 09:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:04:44.644 09:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
[xtrace elided: the setup/common.sh@31-32 read loop repeats identically for each node0 meminfo key (Active(file) through FileHugePages), matching none against HugePages_Surp and issuing 'continue']
00:04:44.644 09:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:44.644 09:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:44.644 09:53:29
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:44.644 09:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:44.644 09:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:44.644 09:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
00:04:44.644 09:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:44.644 09:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:44.644 09:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:44.644 09:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
00:04:44.644 09:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:44.644 09:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:44.644 09:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:44.644 09:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
00:04:44.644 09:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:44.644 09:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:44.644 09:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:44.644 09:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:44.644 09:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:44.644 09:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:44.644 09:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:44.644 09:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:44.644 09:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:44.644 09:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:04:44.644 node0=1024 expecting 1024
00:04:44.644 09:53:29 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:04:44.644 
00:04:44.644 real 0m3.480s
00:04:44.644 user 0m1.434s
00:04:44.644 sys 0m2.000s
00:04:44.644 09:53:29 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable
00:04:44.644 09:53:29 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x
00:04:44.644 ************************************
00:04:44.644 END TEST no_shrink_alloc
00:04:44.644 ************************************
00:04:44.644 09:53:29 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp
00:04:44.644 09:53:29 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp
00:04:44.644 09:53:29 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}"
00:04:44.644 09:53:29 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:04:44.644 09:53:29 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:04:44.644 09:53:29 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:04:44.644 09:53:29 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:04:44.644 09:53:29 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}"
00:04:44.644 09:53:29 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:04:44.644 09:53:29 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:04:44.644 09:53:29 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"*
00:04:44.645 09:53:29 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0
00:04:44.645 09:53:29 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes
00:04:44.645 09:53:29 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes
00:04:44.645 
00:04:44.645 real 0m13.449s
00:04:44.645 user 0m5.204s
00:04:44.645 sys 0m7.289s
00:04:44.645 09:53:29 setup.sh.hugepages -- common/autotest_common.sh@1126 -- # xtrace_disable
00:04:44.645 09:53:29 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:04:44.645 ************************************
00:04:44.645 END TEST hugepages
00:04:44.645 ************************************
00:04:44.645 09:53:29 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh
00:04:44.645 09:53:29 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:04:44.645 09:53:29 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable
00:04:44.645 09:53:29 setup.sh -- common/autotest_common.sh@10 -- # set +x
00:04:44.645 ************************************
00:04:44.645 START TEST driver
00:04:44.645 ************************************
00:04:44.645 09:53:29 setup.sh.driver -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh
00:04:44.645 * Looking for test storage...
00:04:44.645 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:44.645 09:53:29 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:04:44.645 09:53:29 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:44.645 09:53:29 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:47.175 09:53:32 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:04:47.175 09:53:32 setup.sh.driver -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:47.175 09:53:32 setup.sh.driver -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:47.175 09:53:32 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:47.175 ************************************ 00:04:47.175 START TEST guess_driver 00:04:47.175 ************************************ 00:04:47.175 09:53:32 setup.sh.driver.guess_driver -- common/autotest_common.sh@1125 -- # guess_driver 00:04:47.175 09:53:32 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:04:47.175 09:53:32 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:04:47.175 09:53:32 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:04:47.175 09:53:32 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:04:47.175 09:53:32 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:04:47.175 09:53:32 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:04:47.175 09:53:32 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:04:47.175 09:53:32 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N 00:04:47.175 09:53:32 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:04:47.175 09:53:32 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 143 > 0 )) 00:04:47.175 09:53:32 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci 00:04:47.175 09:53:32 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci 00:04:47.175 09:53:32 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci 00:04:47.175 09:53:32 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:04:47.434 09:53:32 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:04:47.434 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:47.434 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:47.434 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:47.434 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:47.434 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:04:47.434 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:04:47.434 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:04:47.434 09:53:32 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0 00:04:47.434 09:53:32 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci 00:04:47.434 09:53:32 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci 00:04:47.434 09:53:32 setup.sh.driver.guess_driver 
-- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:04:47.434 09:53:32 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:04:47.434 Looking for driver=vfio-pci 00:04:47.434 09:53:32 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:47.434 09:53:32 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:04:47.434 09:53:32 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:04:47.434 09:53:32 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:48.863 09:53:33 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:48.863 09:53:33 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:48.863 09:53:33 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:48.863 09:53:33 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:48.863 09:53:33 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:48.863 09:53:33 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:48.863 09:53:33 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:48.863 09:53:33 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:48.863 09:53:33 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:48.863 09:53:33 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:48.863 09:53:33 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:48.863 09:53:33 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:48.863 09:53:33 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:48.863 09:53:33 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:48.863 09:53:33 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:48.863 09:53:33 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:48.863 09:53:33 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:48.863 09:53:33 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:48.863 09:53:33 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:48.863 09:53:33 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:48.863 09:53:33 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:48.863 09:53:33 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:48.863 09:53:33 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:48.863 09:53:33 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:48.863 09:53:33 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:48.863 09:53:33 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:48.863 09:53:33 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:48.864 09:53:33 
setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:48.864 09:53:33 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:48.864 09:53:33 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:48.864 09:53:33 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:48.864 09:53:33 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:48.864 09:53:33 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:48.864 09:53:33 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:48.864 09:53:33 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:48.864 09:53:33 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:48.864 09:53:33 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:48.864 09:53:33 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:48.864 09:53:33 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:48.864 09:53:33 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:48.864 09:53:33 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:48.864 09:53:33 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:48.864 09:53:33 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:48.864 09:53:33 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:48.864 09:53:33 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:48.864 09:53:33 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:48.864 09:53:33 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:48.864 09:53:33 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:49.801 09:53:34 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:04:49.801 09:53:34 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:04:49.801 09:53:34 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:04:49.801 09:53:34 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:04:49.801 09:53:34 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:04:49.801 09:53:34 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:49.801 09:53:34 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:53.088 00:04:53.088 real 0m5.371s 00:04:53.088 user 0m1.322s 00:04:53.088 sys 0m2.293s 00:04:53.088 09:53:37 setup.sh.driver.guess_driver -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:53.088 09:53:37 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:04:53.088 ************************************ 00:04:53.088 END TEST guess_driver 00:04:53.088 ************************************ 00:04:53.088 00:04:53.088 real 0m8.048s 00:04:53.088 user 0m1.928s 00:04:53.088 sys 0m3.449s 00:04:53.088 09:53:37 setup.sh.driver -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:53.088 
09:53:37 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x
00:04:53.088 ************************************
00:04:53.088 END TEST driver
00:04:53.088 ************************************
00:04:53.088 09:53:37 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh
00:04:53.088 09:53:37 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:04:53.088 09:53:37 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable
00:04:53.088 09:53:37 setup.sh -- common/autotest_common.sh@10 -- # set +x
00:04:53.088 ************************************
00:04:53.088 START TEST devices
00:04:53.088 ************************************
00:04:53.088 09:53:37 setup.sh.devices -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh
00:04:53.088 * Looking for test storage...
00:04:53.088 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup
00:04:53.088 09:53:37 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT
00:04:53.088 09:53:37 setup.sh.devices -- setup/devices.sh@192 -- # setup reset
00:04:53.088 09:53:37 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]]
00:04:53.088 09:53:37 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:04:54.463 09:53:39 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs
00:04:54.463 09:53:39 setup.sh.devices -- common/autotest_common.sh@1669 -- # zoned_devs=()
00:04:54.463 09:53:39 setup.sh.devices -- common/autotest_common.sh@1669 -- # local -gA zoned_devs
00:04:54.463 09:53:39 setup.sh.devices -- common/autotest_common.sh@1670 -- # local nvme bdf
00:04:54.464 09:53:39 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme*
00:04:54.464 09:53:39 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1
00:04:54.464 09:53:39 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n1
00:04:54.464 09:53:39 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:04:54.464 09:53:39 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]]
00:04:54.464 09:53:39 setup.sh.devices -- setup/devices.sh@196 -- # blocks=()
00:04:54.464 09:53:39 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks
00:04:54.464 09:53:39 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=()
00:04:54.464 09:53:39 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci
00:04:54.464 09:53:39 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472
00:04:54.464 09:53:39 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*)
00:04:54.464 09:53:39 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1
00:04:54.464 09:53:39 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0
00:04:54.464 09:53:39 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:82:00.0
00:04:54.464 09:53:39 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\8\2\:\0\0\.\0* ]]
00:04:54.464 09:53:39 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1
00:04:54.464 09:53:39 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt
00:04:54.464 09:53:39 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1
00:04:54.723 No valid GPT data, bailing
00:04:54.723 09:53:39 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1
00:04:54.723 09:53:39 setup.sh.devices -- scripts/common.sh@391 -- # pt=
00:04:54.723 09:53:39 setup.sh.devices -- scripts/common.sh@392 -- # return 1
00:04:54.723 09:53:39 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1
00:04:54.723 09:53:39 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1
00:04:54.723 09:53:39 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]]
00:04:54.723 09:53:39 setup.sh.devices -- setup/common.sh@80 -- # echo 1000204886016
00:04:54.723 09:53:39 setup.sh.devices -- setup/devices.sh@204 -- # (( 1000204886016 >= min_disk_size ))
00:04:54.723 09:53:39 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}")
00:04:54.723 09:53:39 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:82:00.0
00:04:54.723 09:53:39 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 ))
00:04:54.723 09:53:39 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1
00:04:54.723 09:53:39 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount
00:04:54.723 09:53:39 setup.sh.devices -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:04:54.723 09:53:39 setup.sh.devices -- common/autotest_common.sh@1107 -- # xtrace_disable
00:04:54.723 09:53:39 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x
00:04:54.723 ************************************
00:04:54.723 START TEST nvme_mount
00:04:54.723 ************************************
00:04:54.723 09:53:39 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1125 -- # nvme_mount
00:04:54.723 09:53:39 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1
00:04:54.723 09:53:39 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1
00:04:54.723 09:53:39 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:04:54.723 09:53:39 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme
00:04:54.723 09:53:39 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1
00:04:54.723 09:53:39 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1
00:04:54.723 09:53:39 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1
00:04:54.723 09:53:39 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824
00:04:54.723 09:53:39 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0
00:04:54.723 09:53:39 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=()
00:04:54.723 09:53:39 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts
00:04:54.723 09:53:39 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 ))
00:04:54.723 09:53:39 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no ))
00:04:54.723 09:53:39 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part")
00:04:54.723 09:53:39 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ ))
00:04:54.723 09:53:39 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no ))
00:04:54.723 09:53:39 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 ))
00:04:54.723 09:53:39
setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:54.724 09:53:39 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:04:55.660 Creating new GPT entries in memory. 00:04:55.660 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:55.660 other utilities. 00:04:55.660 09:53:40 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:55.660 09:53:40 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:55.660 09:53:40 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:55.660 09:53:40 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:55.660 09:53:40 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:57.039 Creating new GPT entries in memory. 00:04:57.039 The operation has completed successfully. 00:04:57.039 09:53:41 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:57.039 09:53:41 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:57.039 09:53:41 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 300707 00:04:57.039 09:53:41 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:57.039 09:53:41 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size= 00:04:57.039 09:53:41 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:57.039 09:53:41 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:04:57.039 09:53:41 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:04:57.039 09:53:41 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:57.039 09:53:41 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:82:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:57.039 09:53:41 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:82:00.0 00:04:57.039 09:53:41 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:04:57.039 09:53:41 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:57.039 09:53:41 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:57.039 09:53:41 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:57.039 09:53:41 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:57.039 09:53:41 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:57.039 09:53:41 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 
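By this point nvme_mount has done its heavy lifting and verify is about to re-walk the PCI allowlist (the long run of "read -r pci _ _ status" checks that follows). Stripped of the uevent bookkeeping (the real flow backgrounds scripts/sync_dev_uevents.sh and waits on it, PID 300707 above, so the partition node is guaranteed to exist before mkfs runs), the sequence just traced reduces to this sketch, where mnt is a stand-in for the spdk/test/setup/nvme_mount path:

  disk=/dev/nvme0n1
  mnt=$PWD/nvme_mount
  sgdisk "$disk" --zap-all                           # drop any existing GPT/MBR state
  flock "$disk" sgdisk "$disk" --new=1:2048:2099199  # one 1 GiB partition; sector math below
  mkfs.ext4 -qF "${disk}p1"                          # -q quiet, -F don't prompt
  mkdir -p "$mnt"
  mount "${disk}p1" "$mnt"
  : > "$mnt/test_nvme"                               # the dummy file verify re-checks; the bare ':'
                                                     # at devices.sh@56 is this, xtrace hides the redirect

The "Creating new GPT entries in memory." and "The operation has completed successfully." lines are ordinary sgdisk stdout, and the flock serializes sgdisk against anything else touching the disk while udev catches up.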
00:04:57.039 09:53:41 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:57.039 09:53:41 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:82:00.0 00:04:57.039 09:53:41 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:57.039 09:53:41 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:57.039 09:53:41 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:58.415 09:53:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:82:00.0 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:04:58.415 09:53:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:04:58.415 09:53:43 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:58.415 09:53:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.415 09:53:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:04:58.415 09:53:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.415 09:53:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:04:58.415 09:53:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.415 09:53:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:04:58.415 09:53:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.415 09:53:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:04:58.415 09:53:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.415 09:53:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:04:58.415 09:53:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.415 09:53:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:04:58.415 09:53:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.415 09:53:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:04:58.415 09:53:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.415 09:53:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:04:58.415 09:53:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.415 09:53:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:04:58.415 09:53:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.415 09:53:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:04:58.415 09:53:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.415 09:53:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:04:58.415 09:53:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read 
-r pci _ _ status 00:04:58.415 09:53:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:04:58.415 09:53:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.415 09:53:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:04:58.415 09:53:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.415 09:53:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:04:58.415 09:53:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.415 09:53:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:04:58.415 09:53:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.415 09:53:43 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:04:58.415 09:53:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.415 09:53:43 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:58.415 09:53:43 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:58.415 09:53:43 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:58.415 09:53:43 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:58.415 09:53:43 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:58.415 09:53:43 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:04:58.415 09:53:43 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:58.415 09:53:43 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:58.415 09:53:43 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:58.415 09:53:43 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:58.415 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:58.415 09:53:43 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:58.415 09:53:43 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:58.673 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:58.673 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:04:58.673 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:58.673 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:58.673 09:53:43 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:04:58.673 09:53:43 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:04:58.673 09:53:43 
setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:58.673 09:53:43 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:04:58.673 09:53:43 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:04:58.673 09:53:43 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:58.674 09:53:43 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:82:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:58.674 09:53:43 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:82:00.0 00:04:58.674 09:53:43 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:04:58.674 09:53:43 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:58.674 09:53:43 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:58.674 09:53:43 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:58.674 09:53:43 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:58.674 09:53:43 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:58.674 09:53:43 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:58.674 09:53:43 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:58.674 09:53:43 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:82:00.0 00:04:58.674 09:53:43 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:58.674 09:53:43 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:58.674 09:53:43 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:00.049 09:53:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:82:00.0 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:00.049 09:53:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:05:00.049 09:53:45 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:00.049 09:53:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.049 09:53:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:00.049 09:53:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.049 09:53:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:00.049 09:53:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.049 09:53:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:00.049 09:53:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.049 09:53:45 
setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:00.049 09:53:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.049 09:53:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:00.049 09:53:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.049 09:53:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:00.049 09:53:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.049 09:53:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:00.049 09:53:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.049 09:53:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:00.050 09:53:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.050 09:53:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:00.050 09:53:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.050 09:53:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:00.050 09:53:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.050 09:53:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:00.050 09:53:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.050 09:53:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:00.050 09:53:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.050 09:53:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:00.050 09:53:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.050 09:53:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:00.050 09:53:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.050 09:53:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:00.050 09:53:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.050 09:53:45 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:00.050 09:53:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.308 09:53:45 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:00.308 09:53:45 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:05:00.308 09:53:45 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:00.308 09:53:45 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:00.308 09:53:45 setup.sh.devices.nvme_mount 
-- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:00.308 09:53:45 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:00.308 09:53:45 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:82:00.0 data@nvme0n1 '' '' 00:05:00.308 09:53:45 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:82:00.0 00:05:00.308 09:53:45 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:05:00.308 09:53:45 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:05:00.308 09:53:45 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:05:00.308 09:53:45 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:05:00.308 09:53:45 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:00.308 09:53:45 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:05:00.308 09:53:45 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:00.308 09:53:45 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:82:00.0 00:05:00.308 09:53:45 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:05:00.308 09:53:45 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:00.308 09:53:45 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:01.684 09:53:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:82:00.0 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:01.684 09:53:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:05:01.684 09:53:46 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:01.684 09:53:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.684 09:53:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:01.684 09:53:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.684 09:53:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:01.684 09:53:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.684 09:53:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:01.684 09:53:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.684 09:53:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:01.684 09:53:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.684 09:53:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:01.684 09:53:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.684 09:53:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:01.684 09:53:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.684 09:53:46 setup.sh.devices.nvme_mount -- 
setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:01.684 09:53:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.684 09:53:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:01.684 09:53:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.684 09:53:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:01.684 09:53:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.684 09:53:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:01.684 09:53:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.684 09:53:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:01.684 09:53:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.684 09:53:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:01.684 09:53:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.684 09:53:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:01.684 09:53:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.684 09:53:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:01.684 09:53:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.684 09:53:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:01.684 09:53:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.684 09:53:46 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:01.684 09:53:46 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:01.684 09:53:46 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:01.684 09:53:46 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:01.684 09:53:46 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:05:01.685 09:53:46 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:05:01.685 09:53:46 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:01.685 09:53:46 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:01.685 09:53:46 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:01.685 09:53:46 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:01.685 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:01.685 00:05:01.685 real 0m6.997s 00:05:01.685 user 0m1.658s 00:05:01.685 sys 0m2.952s 00:05:01.685 09:53:46 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:01.685 09:53:46 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:05:01.685 ************************************ 00:05:01.685 END TEST nvme_mount 00:05:01.685 ************************************ 
00:05:01.685 09:53:46 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:05:01.685 09:53:46 setup.sh.devices -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:01.685 09:53:46 setup.sh.devices -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:01.685 09:53:46 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:01.685 ************************************ 00:05:01.685 START TEST dm_mount 00:05:01.685 ************************************ 00:05:01.685 09:53:46 setup.sh.devices.dm_mount -- common/autotest_common.sh@1125 -- # dm_mount 00:05:01.685 09:53:46 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:05:01.685 09:53:46 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:05:01.685 09:53:46 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:05:01.685 09:53:46 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:05:01.685 09:53:46 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:01.685 09:53:46 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:05:01.685 09:53:46 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:05:01.685 09:53:46 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:01.685 09:53:46 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:05:01.685 09:53:46 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:05:01.685 09:53:46 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:05:01.685 09:53:46 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:01.685 09:53:46 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:01.685 09:53:46 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:01.685 09:53:46 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:01.685 09:53:46 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:01.685 09:53:46 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:01.685 09:53:46 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:01.685 09:53:46 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:05:01.685 09:53:46 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:01.685 09:53:46 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:05:03.063 Creating new GPT entries in memory. 00:05:03.063 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:03.063 other utilities. 00:05:03.063 09:53:47 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:05:03.063 09:53:47 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:03.063 09:53:47 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:03.063 09:53:47 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:03.063 09:53:47 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:05:04.000 Creating new GPT entries in memory. 00:05:04.000 The operation has completed successfully. 
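dm_mount reuses the same partition_drive helper with part_no=2, so the sector arithmetic at common.sh@58-60 runs twice. Worked through, and matching the sgdisk bounds in the surrounding trace (the second call appears just below):

  # size = 1073741824 / 512 = 2097152 sectors per partition
  # part 1: start = 2048,    end = 2048    + 2097152 - 1 = 2099199
  # part 2: start = 2099200, end = 2099200 + 2097152 - 1 = 4196351
  flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199
  flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351

  # The test then builds a device-mapper node from the two partitions
  # ("dmsetup create nvme_dm_test" below), polls /dev/mapper/nvme_dm_test,
  # resolves it to dm-0 with readlink -f, and confirms both partitions list
  # dm-0 under /sys/class/block/nvme0n1pX/holders/. The mapping table itself
  # arrives on stdin and is invisible to xtrace; a plain linear concatenation
  # would look like this (an assumption, not the verbatim SPDK table):
  printf '%s\n' \
    '0 2097152 linear /dev/nvme0n1p1 0' \
    '2097152 2097152 linear /dev/nvme0n1p2 0' |
    dmsetup create nvme_dm_test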
00:05:04.000 09:53:48 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:04.000 09:53:48 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:04.000 09:53:48 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:04.000 09:53:48 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:04.000 09:53:48 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:05:04.936 The operation has completed successfully. 00:05:04.936 09:53:49 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:04.936 09:53:49 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:04.936 09:53:49 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 303122 00:05:04.936 09:53:49 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:05:04.936 09:53:49 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:04.936 09:53:49 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:04.936 09:53:49 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:05:04.936 09:53:49 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:05:04.936 09:53:49 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:04.936 09:53:49 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:05:04.936 09:53:49 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:04.936 09:53:49 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:05:04.936 09:53:49 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:05:04.936 09:53:49 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:05:04.936 09:53:49 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:05:04.936 09:53:49 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:05:04.936 09:53:49 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:04.936 09:53:49 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size= 00:05:04.936 09:53:49 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:04.936 09:53:49 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:04.936 09:53:49 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:05:04.936 09:53:49 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:04.936 09:53:49 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:82:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:04.936 09:53:49 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:82:00.0 00:05:04.936 09:53:49 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:05:04.936 09:53:49 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:04.936 09:53:49 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:04.936 09:53:49 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:05:04.936 09:53:49 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:05:04.936 09:53:49 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:05:04.936 09:53:49 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:05:04.936 09:53:49 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.936 09:53:49 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:82:00.0 00:05:04.936 09:53:49 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:05:04.936 09:53:49 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:04.936 09:53:49 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:06.312 09:53:51 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:82:00.0 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:06.312 09:53:51 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:05:06.312 09:53:51 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:05:06.312 09:53:51 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.312 09:53:51 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:06.312 09:53:51 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.312 09:53:51 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:06.312 09:53:51 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.312 09:53:51 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:06.312 09:53:51 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.312 09:53:51 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:06.312 09:53:51 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.312 09:53:51 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:06.312 09:53:51 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.312 09:53:51 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:06.312 09:53:51 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.312 09:53:51 setup.sh.devices.dm_mount -- 
setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:06.312 09:53:51 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.312 09:53:51 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:06.312 09:53:51 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.312 09:53:51 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:06.312 09:53:51 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.312 09:53:51 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:06.313 09:53:51 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.313 09:53:51 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:06.313 09:53:51 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.313 09:53:51 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:06.313 09:53:51 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.313 09:53:51 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:06.313 09:53:51 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.313 09:53:51 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:06.313 09:53:51 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.313 09:53:51 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:06.313 09:53:51 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.313 09:53:51 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:06.313 09:53:51 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.571 09:53:51 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:06.571 09:53:51 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]] 00:05:06.571 09:53:51 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:06.571 09:53:51 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:05:06.571 09:53:51 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:06.571 09:53:51 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:06.571 09:53:51 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:82:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:05:06.571 09:53:51 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:82:00.0 00:05:06.571 09:53:51 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:05:06.571 09:53:51 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:05:06.571 09:53:51 
setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:05:06.571 09:53:51 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:05:06.571 09:53:51 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:06.571 09:53:51 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:05:06.571 09:53:51 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.571 09:53:51 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:82:00.0 00:05:06.571 09:53:51 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:05:06.571 09:53:51 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:06.571 09:53:51 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:07.949 09:53:52 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:82:00.0 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:07.949 09:53:52 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:05:07.949 09:53:52 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:05:07.949 09:53:52 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:07.949 09:53:52 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:07.949 09:53:52 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:07.949 09:53:52 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:07.949 09:53:52 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:07.949 09:53:52 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:07.949 09:53:52 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:07.949 09:53:52 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:07.949 09:53:52 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:07.949 09:53:52 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:07.949 09:53:52 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:07.949 09:53:52 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:07.950 09:53:52 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:07.950 09:53:52 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:07.950 09:53:52 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:07.950 09:53:52 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:07.950 09:53:52 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:07.950 09:53:52 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:07.950 09:53:52 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:07.950 09:53:52 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # 
[[ 0000:80:04.6 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:07.950 09:53:52 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:07.950 09:53:52 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:07.950 09:53:52 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:07.950 09:53:52 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:07.950 09:53:52 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:07.950 09:53:52 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:07.950 09:53:52 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:07.950 09:53:52 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:07.950 09:53:52 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:07.950 09:53:52 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:07.950 09:53:52 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:07.950 09:53:52 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\2\:\0\0\.\0 ]] 00:05:07.950 09:53:52 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:08.209 09:53:53 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:08.209 09:53:53 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:08.209 09:53:53 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:05:08.209 09:53:53 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:05:08.209 09:53:53 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:08.209 09:53:53 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:08.209 09:53:53 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:05:08.209 09:53:53 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:08.209 09:53:53 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:05:08.209 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:08.209 09:53:53 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:08.209 09:53:53 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:05:08.209 00:05:08.209 real 0m6.387s 00:05:08.209 user 0m1.181s 00:05:08.209 sys 0m2.056s 00:05:08.209 09:53:53 setup.sh.devices.dm_mount -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:08.209 09:53:53 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:05:08.209 ************************************ 00:05:08.209 END TEST dm_mount 00:05:08.209 ************************************ 00:05:08.209 09:53:53 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:05:08.209 09:53:53 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:05:08.209 09:53:53 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:08.209 09:53:53 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:08.209 09:53:53 
setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:08.209 09:53:53 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:08.209 09:53:53 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:08.467 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:05:08.467 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:05:08.467 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:08.467 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:08.467 09:53:53 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:05:08.467 09:53:53 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:08.467 09:53:53 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:08.467 09:53:53 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:08.467 09:53:53 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:08.467 09:53:53 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:05:08.467 09:53:53 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:05:08.467 00:05:08.467 real 0m15.698s 00:05:08.467 user 0m3.630s 00:05:08.467 sys 0m6.318s 00:05:08.467 09:53:53 setup.sh.devices -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:08.467 09:53:53 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:08.467 ************************************ 00:05:08.467 END TEST devices 00:05:08.467 ************************************ 00:05:08.467 00:05:08.467 real 0m49.480s 00:05:08.467 user 0m14.596s 00:05:08.467 sys 0m23.676s 00:05:08.467 09:53:53 setup.sh -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:08.467 09:53:53 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:08.467 ************************************ 00:05:08.467 END TEST setup.sh 00:05:08.467 ************************************ 00:05:08.467 09:53:53 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:05:09.842 Hugepages 00:05:09.843 node hugesize free / total 00:05:09.843 node0 1048576kB 0 / 0 00:05:09.843 node0 2048kB 2048 / 2048 00:05:09.843 node1 1048576kB 0 / 0 00:05:09.843 node1 2048kB 0 / 0 00:05:09.843 00:05:09.843 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:09.843 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:05:09.843 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:05:09.843 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:05:09.843 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:05:09.843 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:05:09.843 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:05:09.843 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:05:09.843 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:05:09.843 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:05:09.843 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:05:09.843 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:05:09.843 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:05:09.843 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:05:09.843 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:05:09.843 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:05:09.843 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:05:10.101 NVMe 0000:82:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:05:10.101 09:53:55 -- spdk/autotest.sh@130 -- # uname -s 00:05:10.101 09:53:55 -- 
spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:05:10.101 09:53:55 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:05:10.101 09:53:55 -- common/autotest_common.sh@1531 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:11.476 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:11.477 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:05:11.477 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:11.477 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:11.477 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:11.477 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:05:11.477 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:11.477 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:11.477 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:11.477 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:05:11.477 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:11.477 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:11.477 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:11.477 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:05:11.477 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:11.477 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:12.413 0000:82:00.0 (8086 0a54): nvme -> vfio-pci 00:05:12.673 09:53:57 -- common/autotest_common.sh@1532 -- # sleep 1 00:05:13.637 09:53:58 -- common/autotest_common.sh@1533 -- # bdfs=() 00:05:13.637 09:53:58 -- common/autotest_common.sh@1533 -- # local bdfs 00:05:13.637 09:53:58 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs)) 00:05:13.637 09:53:58 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs 00:05:13.637 09:53:58 -- common/autotest_common.sh@1513 -- # bdfs=() 00:05:13.637 09:53:58 -- common/autotest_common.sh@1513 -- # local bdfs 00:05:13.637 09:53:58 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:13.637 09:53:58 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:13.637 09:53:58 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:05:13.637 09:53:58 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:05:13.637 09:53:58 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:82:00.0 00:05:13.637 09:53:58 -- common/autotest_common.sh@1536 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:15.012 Waiting for block devices as requested 00:05:15.012 0000:82:00.0 (8086 0a54): vfio-pci -> nvme 00:05:15.271 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:05:15.271 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:05:15.271 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:05:15.271 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:05:15.528 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:05:15.528 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:05:15.529 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:05:15.529 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:05:15.787 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:05:15.787 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:05:15.787 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:05:15.787 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:05:16.044 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:05:16.044 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:05:16.044 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:05:16.044 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:05:16.304 09:54:01 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 
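The "ioatdma -> vfio-pci" and "vfio-pci -> nvme" lines above are scripts/setup.sh rebinding devices between kernel and userspace drivers before and after each test stage. As a rough sketch of what one such rebind does under the hood (assuming a kernel with driver_override support; the BDF is the NVMe device from this run), not the script's exact logic:
  bdf=0000:82:00.0
  echo "$bdf" > "/sys/bus/pci/devices/$bdf/driver/unbind"    # detach the current driver
  echo nvme > "/sys/bus/pci/devices/$bdf/driver_override"    # pin the driver we want next
  echo "$bdf" > /sys/bus/pci/drivers_probe                   # ask the kernel to re-probe the device
  readlink -f "/sys/bus/pci/devices/$bdf/driver"             # confirm the new binding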
00:05:16.304 09:54:01 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:82:00.0 00:05:16.304 09:54:01 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 00:05:16.304 09:54:01 -- common/autotest_common.sh@1502 -- # grep 0000:82:00.0/nvme/nvme 00:05:16.304 09:54:01 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:80/0000:80:02.0/0000:82:00.0/nvme/nvme0 00:05:16.304 09:54:01 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:80/0000:80:02.0/0000:82:00.0/nvme/nvme0 ]] 00:05:16.304 09:54:01 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:80/0000:80:02.0/0000:82:00.0/nvme/nvme0 00:05:16.304 09:54:01 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme0 00:05:16.304 09:54:01 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme0 00:05:16.304 09:54:01 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme0 ]] 00:05:16.304 09:54:01 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme0 00:05:16.304 09:54:01 -- common/autotest_common.sh@1545 -- # grep oacs 00:05:16.304 09:54:01 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:05:16.304 09:54:01 -- common/autotest_common.sh@1545 -- # oacs=' 0xf' 00:05:16.304 09:54:01 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:05:16.304 09:54:01 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:05:16.304 09:54:01 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme0 00:05:16.304 09:54:01 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:05:16.304 09:54:01 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:05:16.304 09:54:01 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:05:16.304 09:54:01 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:05:16.304 09:54:01 -- common/autotest_common.sh@1557 -- # continue 00:05:16.304 09:54:01 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:05:16.304 09:54:01 -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:16.304 09:54:01 -- common/autotest_common.sh@10 -- # set +x 00:05:16.304 09:54:01 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:05:16.304 09:54:01 -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:16.304 09:54:01 -- common/autotest_common.sh@10 -- # set +x 00:05:16.304 09:54:01 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:17.681 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:17.681 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:05:17.681 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:17.681 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:17.681 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:17.681 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:05:17.681 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:17.681 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:17.681 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:17.681 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:05:17.681 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:17.681 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:17.681 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:17.681 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:05:17.681 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:17.681 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:18.617 0000:82:00.0 (8086 0a54): nvme -> vfio-pci 00:05:18.875 09:54:03 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:05:18.876 09:54:03 -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:18.876 09:54:03 -- 
common/autotest_common.sh@10 -- # set +x 00:05:18.876 09:54:03 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:05:18.876 09:54:03 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs 00:05:18.876 09:54:03 -- common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 00:05:18.876 09:54:03 -- common/autotest_common.sh@1577 -- # bdfs=() 00:05:18.876 09:54:03 -- common/autotest_common.sh@1577 -- # local bdfs 00:05:18.876 09:54:03 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 00:05:18.876 09:54:03 -- common/autotest_common.sh@1513 -- # bdfs=() 00:05:18.876 09:54:03 -- common/autotest_common.sh@1513 -- # local bdfs 00:05:18.876 09:54:03 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:18.876 09:54:03 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:18.876 09:54:03 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:05:18.876 09:54:04 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:05:18.876 09:54:04 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:82:00.0 00:05:18.876 09:54:04 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:05:18.876 09:54:04 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:82:00.0/device 00:05:18.876 09:54:04 -- common/autotest_common.sh@1580 -- # device=0x0a54 00:05:18.876 09:54:04 -- common/autotest_common.sh@1581 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:05:18.876 09:54:04 -- common/autotest_common.sh@1582 -- # bdfs+=($bdf) 00:05:18.876 09:54:04 -- common/autotest_common.sh@1586 -- # printf '%s\n' 0000:82:00.0 00:05:18.876 09:54:04 -- common/autotest_common.sh@1592 -- # [[ -z 0000:82:00.0 ]] 00:05:18.876 09:54:04 -- common/autotest_common.sh@1597 -- # spdk_tgt_pid=308594 00:05:18.876 09:54:04 -- common/autotest_common.sh@1596 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:18.876 09:54:04 -- common/autotest_common.sh@1598 -- # waitforlisten 308594 00:05:18.876 09:54:04 -- common/autotest_common.sh@831 -- # '[' -z 308594 ']' 00:05:18.876 09:54:04 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:18.876 09:54:04 -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:18.876 09:54:04 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:18.876 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:18.876 09:54:04 -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:18.876 09:54:04 -- common/autotest_common.sh@10 -- # set +x 00:05:19.134 [2024-07-25 09:54:04.074592] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:05:19.134 [2024-07-25 09:54:04.074697] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid308594 ] 00:05:19.134 EAL: No free 2048 kB hugepages reported on node 1 00:05:19.134 [2024-07-25 09:54:04.148095] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:19.134 [2024-07-25 09:54:04.269244] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:19.406 09:54:04 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:19.406 09:54:04 -- common/autotest_common.sh@864 -- # return 0 00:05:19.406 09:54:04 -- common/autotest_common.sh@1600 -- # bdf_id=0 00:05:19.406 09:54:04 -- common/autotest_common.sh@1601 -- # for bdf in "${bdfs[@]}" 00:05:19.406 09:54:04 -- common/autotest_common.sh@1602 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:82:00.0 00:05:22.686 nvme0n1 00:05:22.686 09:54:07 -- common/autotest_common.sh@1604 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:05:23.251 [2024-07-25 09:54:08.137362] nvme_opal.c:2063:spdk_opal_cmd_revert_tper: *ERROR*: Error on starting admin SP session with error 18 00:05:23.251 [2024-07-25 09:54:08.137412] vbdev_opal_rpc.c: 134:rpc_bdev_nvme_opal_revert: *ERROR*: Revert TPer failure: 18 00:05:23.251 request: 00:05:23.251 { 00:05:23.251 "nvme_ctrlr_name": "nvme0", 00:05:23.251 "password": "test", 00:05:23.251 "method": "bdev_nvme_opal_revert", 00:05:23.251 "req_id": 1 00:05:23.251 } 00:05:23.251 Got JSON-RPC error response 00:05:23.251 response: 00:05:23.251 { 00:05:23.251 "code": -32603, 00:05:23.251 "message": "Internal error" 00:05:23.251 } 00:05:23.251 09:54:08 -- common/autotest_common.sh@1604 -- # true 00:05:23.251 09:54:08 -- common/autotest_common.sh@1605 -- # (( ++bdf_id )) 00:05:23.251 09:54:08 -- common/autotest_common.sh@1608 -- # killprocess 308594 00:05:23.251 09:54:08 -- common/autotest_common.sh@950 -- # '[' -z 308594 ']' 00:05:23.251 09:54:08 -- common/autotest_common.sh@954 -- # kill -0 308594 00:05:23.251 09:54:08 -- common/autotest_common.sh@955 -- # uname 00:05:23.251 09:54:08 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:23.251 09:54:08 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 308594 00:05:23.251 09:54:08 -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:23.251 09:54:08 -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:23.251 09:54:08 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 308594' 00:05:23.251 killing process with pid 308594 00:05:23.251 09:54:08 -- common/autotest_common.sh@969 -- # kill 308594 00:05:23.251 09:54:08 -- common/autotest_common.sh@974 -- # wait 308594 00:05:25.152 09:54:10 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:05:25.152 09:54:10 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:05:25.152 09:54:10 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:25.152 09:54:10 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:25.152 09:54:10 -- spdk/autotest.sh@162 -- # timing_enter lib 00:05:25.152 09:54:10 -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:25.152 09:54:10 -- common/autotest_common.sh@10 -- # set +x 00:05:25.152 09:54:10 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:05:25.152 09:54:10 -- spdk/autotest.sh@168 -- # run_test env 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:05:25.152 09:54:10 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:25.152 09:54:10 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:25.152 09:54:10 -- common/autotest_common.sh@10 -- # set +x 00:05:25.152 ************************************ 00:05:25.152 START TEST env 00:05:25.152 ************************************ 00:05:25.152 09:54:10 env -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:05:25.152 * Looking for test storage... 00:05:25.152 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:05:25.152 09:54:10 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:05:25.152 09:54:10 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:25.152 09:54:10 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:25.152 09:54:10 env -- common/autotest_common.sh@10 -- # set +x 00:05:25.152 ************************************ 00:05:25.152 START TEST env_memory 00:05:25.152 ************************************ 00:05:25.152 09:54:10 env.env_memory -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:05:25.152 00:05:25.152 00:05:25.152 CUnit - A unit testing framework for C - Version 2.1-3 00:05:25.152 http://cunit.sourceforge.net/ 00:05:25.152 00:05:25.152 00:05:25.152 Suite: memory 00:05:25.152 Test: alloc and free memory map ...[2024-07-25 09:54:10.209049] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:25.152 passed 00:05:25.152 Test: mem map translation ...[2024-07-25 09:54:10.243481] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:25.152 [2024-07-25 09:54:10.243512] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:25.152 [2024-07-25 09:54:10.243565] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:25.153 [2024-07-25 09:54:10.243580] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:25.153 passed 00:05:25.153 Test: mem map registration ...[2024-07-25 09:54:10.295909] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:05:25.153 [2024-07-25 09:54:10.295934] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:05:25.153 passed 00:05:25.411 Test: mem map adjacent registrations ...passed 00:05:25.411 00:05:25.411 Run Summary: Type Total Ran Passed Failed Inactive 00:05:25.411 suites 1 1 n/a 0 0 00:05:25.411 tests 4 4 4 0 0 00:05:25.411 asserts 152 152 152 0 n/a 00:05:25.411 00:05:25.411 Elapsed time = 0.211 seconds 00:05:25.411 00:05:25.411 real 0m0.223s 00:05:25.411 user 0m0.212s 00:05:25.411 sys 0m0.010s 00:05:25.411 09:54:10 
env.env_memory -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:25.411 09:54:10 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:25.411 ************************************ 00:05:25.411 END TEST env_memory 00:05:25.411 ************************************ 00:05:25.411 09:54:10 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:25.411 09:54:10 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:25.411 09:54:10 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:25.411 09:54:10 env -- common/autotest_common.sh@10 -- # set +x 00:05:25.411 ************************************ 00:05:25.411 START TEST env_vtophys 00:05:25.411 ************************************ 00:05:25.411 09:54:10 env.env_vtophys -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:25.411 EAL: lib.eal log level changed from notice to debug 00:05:25.411 EAL: Detected lcore 0 as core 0 on socket 0 00:05:25.411 EAL: Detected lcore 1 as core 1 on socket 0 00:05:25.411 EAL: Detected lcore 2 as core 2 on socket 0 00:05:25.411 EAL: Detected lcore 3 as core 3 on socket 0 00:05:25.411 EAL: Detected lcore 4 as core 4 on socket 0 00:05:25.411 EAL: Detected lcore 5 as core 5 on socket 0 00:05:25.411 EAL: Detected lcore 6 as core 8 on socket 0 00:05:25.411 EAL: Detected lcore 7 as core 9 on socket 0 00:05:25.411 EAL: Detected lcore 8 as core 10 on socket 0 00:05:25.411 EAL: Detected lcore 9 as core 11 on socket 0 00:05:25.411 EAL: Detected lcore 10 as core 12 on socket 0 00:05:25.411 EAL: Detected lcore 11 as core 13 on socket 0 00:05:25.411 EAL: Detected lcore 12 as core 0 on socket 1 00:05:25.411 EAL: Detected lcore 13 as core 1 on socket 1 00:05:25.411 EAL: Detected lcore 14 as core 2 on socket 1 00:05:25.411 EAL: Detected lcore 15 as core 3 on socket 1 00:05:25.411 EAL: Detected lcore 16 as core 4 on socket 1 00:05:25.411 EAL: Detected lcore 17 as core 5 on socket 1 00:05:25.411 EAL: Detected lcore 18 as core 8 on socket 1 00:05:25.411 EAL: Detected lcore 19 as core 9 on socket 1 00:05:25.411 EAL: Detected lcore 20 as core 10 on socket 1 00:05:25.411 EAL: Detected lcore 21 as core 11 on socket 1 00:05:25.411 EAL: Detected lcore 22 as core 12 on socket 1 00:05:25.411 EAL: Detected lcore 23 as core 13 on socket 1 00:05:25.411 EAL: Detected lcore 24 as core 0 on socket 0 00:05:25.411 EAL: Detected lcore 25 as core 1 on socket 0 00:05:25.411 EAL: Detected lcore 26 as core 2 on socket 0 00:05:25.411 EAL: Detected lcore 27 as core 3 on socket 0 00:05:25.411 EAL: Detected lcore 28 as core 4 on socket 0 00:05:25.411 EAL: Detected lcore 29 as core 5 on socket 0 00:05:25.411 EAL: Detected lcore 30 as core 8 on socket 0 00:05:25.411 EAL: Detected lcore 31 as core 9 on socket 0 00:05:25.411 EAL: Detected lcore 32 as core 10 on socket 0 00:05:25.411 EAL: Detected lcore 33 as core 11 on socket 0 00:05:25.411 EAL: Detected lcore 34 as core 12 on socket 0 00:05:25.411 EAL: Detected lcore 35 as core 13 on socket 0 00:05:25.411 EAL: Detected lcore 36 as core 0 on socket 1 00:05:25.411 EAL: Detected lcore 37 as core 1 on socket 1 00:05:25.411 EAL: Detected lcore 38 as core 2 on socket 1 00:05:25.411 EAL: Detected lcore 39 as core 3 on socket 1 00:05:25.411 EAL: Detected lcore 40 as core 4 on socket 1 00:05:25.411 EAL: Detected lcore 41 as core 5 on socket 1 00:05:25.411 EAL: Detected lcore 42 as core 8 on socket 1 00:05:25.411 EAL: Detected lcore 43 as core 9 
on socket 1 00:05:25.411 EAL: Detected lcore 44 as core 10 on socket 1 00:05:25.411 EAL: Detected lcore 45 as core 11 on socket 1 00:05:25.411 EAL: Detected lcore 46 as core 12 on socket 1 00:05:25.411 EAL: Detected lcore 47 as core 13 on socket 1 00:05:25.412 EAL: Maximum logical cores by configuration: 128 00:05:25.412 EAL: Detected CPU lcores: 48 00:05:25.412 EAL: Detected NUMA nodes: 2 00:05:25.412 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:05:25.412 EAL: Detected shared linkage of DPDK 00:05:25.412 EAL: No shared files mode enabled, IPC will be disabled 00:05:25.412 EAL: Bus pci wants IOVA as 'DC' 00:05:25.412 EAL: Buses did not request a specific IOVA mode. 00:05:25.412 EAL: IOMMU is available, selecting IOVA as VA mode. 00:05:25.412 EAL: Selected IOVA mode 'VA' 00:05:25.412 EAL: No free 2048 kB hugepages reported on node 1 00:05:25.412 EAL: Probing VFIO support... 00:05:25.412 EAL: IOMMU type 1 (Type 1) is supported 00:05:25.412 EAL: IOMMU type 7 (sPAPR) is not supported 00:05:25.412 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:05:25.412 EAL: VFIO support initialized 00:05:25.412 EAL: Ask a virtual area of 0x2e000 bytes 00:05:25.412 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:25.412 EAL: Setting up physically contiguous memory... 00:05:25.412 EAL: Setting maximum number of open files to 524288 00:05:25.412 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:25.412 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:05:25.412 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:25.412 EAL: Ask a virtual area of 0x61000 bytes 00:05:25.412 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:25.412 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:25.412 EAL: Ask a virtual area of 0x400000000 bytes 00:05:25.412 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:25.412 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:25.412 EAL: Ask a virtual area of 0x61000 bytes 00:05:25.412 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:25.412 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:25.412 EAL: Ask a virtual area of 0x400000000 bytes 00:05:25.412 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:25.412 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:25.412 EAL: Ask a virtual area of 0x61000 bytes 00:05:25.412 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:25.412 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:25.412 EAL: Ask a virtual area of 0x400000000 bytes 00:05:25.412 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:25.412 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:25.412 EAL: Ask a virtual area of 0x61000 bytes 00:05:25.412 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:25.412 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:25.412 EAL: Ask a virtual area of 0x400000000 bytes 00:05:25.412 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:25.412 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:25.412 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:05:25.412 EAL: Ask a virtual area of 0x61000 bytes 00:05:25.412 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:05:25.412 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:25.412 EAL: Ask a virtual 
area of 0x400000000 bytes 00:05:25.412 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:05:25.412 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:05:25.412 EAL: Ask a virtual area of 0x61000 bytes 00:05:25.412 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:05:25.412 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:25.412 EAL: Ask a virtual area of 0x400000000 bytes 00:05:25.412 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:05:25.412 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:05:25.412 EAL: Ask a virtual area of 0x61000 bytes 00:05:25.412 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:05:25.412 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:25.412 EAL: Ask a virtual area of 0x400000000 bytes 00:05:25.412 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:05:25.412 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:05:25.412 EAL: Ask a virtual area of 0x61000 bytes 00:05:25.412 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:05:25.412 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:25.412 EAL: Ask a virtual area of 0x400000000 bytes 00:05:25.412 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:05:25.412 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:05:25.412 EAL: Hugepages will be freed exactly as allocated. 00:05:25.412 EAL: No shared files mode enabled, IPC is disabled 00:05:25.412 EAL: No shared files mode enabled, IPC is disabled 00:05:25.412 EAL: TSC frequency is ~2700000 KHz 00:05:25.412 EAL: Main lcore 0 is ready (tid=7f1947fafa00;cpuset=[0]) 00:05:25.412 EAL: Trying to obtain current memory policy. 00:05:25.412 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:25.412 EAL: Restoring previous memory policy: 0 00:05:25.412 EAL: request: mp_malloc_sync 00:05:25.412 EAL: No shared files mode enabled, IPC is disabled 00:05:25.412 EAL: Heap on socket 0 was expanded by 2MB 00:05:25.412 EAL: No shared files mode enabled, IPC is disabled 00:05:25.412 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:25.412 EAL: Mem event callback 'spdk:(nil)' registered 00:05:25.412 00:05:25.412 00:05:25.412 CUnit - A unit testing framework for C - Version 2.1-3 00:05:25.412 http://cunit.sourceforge.net/ 00:05:25.412 00:05:25.412 00:05:25.412 Suite: components_suite 00:05:25.412 Test: vtophys_malloc_test ...passed 00:05:25.412 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:25.412 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:25.412 EAL: Restoring previous memory policy: 4 00:05:25.412 EAL: Calling mem event callback 'spdk:(nil)' 00:05:25.412 EAL: request: mp_malloc_sync 00:05:25.412 EAL: No shared files mode enabled, IPC is disabled 00:05:25.412 EAL: Heap on socket 0 was expanded by 4MB 00:05:25.412 EAL: Calling mem event callback 'spdk:(nil)' 00:05:25.412 EAL: request: mp_malloc_sync 00:05:25.412 EAL: No shared files mode enabled, IPC is disabled 00:05:25.412 EAL: Heap on socket 0 was shrunk by 4MB 00:05:25.412 EAL: Trying to obtain current memory policy. 
00:05:25.412 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:25.412 EAL: Restoring previous memory policy: 4 00:05:25.412 EAL: Calling mem event callback 'spdk:(nil)' 00:05:25.412 EAL: request: mp_malloc_sync 00:05:25.412 EAL: No shared files mode enabled, IPC is disabled 00:05:25.412 EAL: Heap on socket 0 was expanded by 6MB 00:05:25.412 EAL: Calling mem event callback 'spdk:(nil)' 00:05:25.412 EAL: request: mp_malloc_sync 00:05:25.412 EAL: No shared files mode enabled, IPC is disabled 00:05:25.412 EAL: Heap on socket 0 was shrunk by 6MB 00:05:25.412 EAL: Trying to obtain current memory policy. 00:05:25.412 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:25.412 EAL: Restoring previous memory policy: 4 00:05:25.412 EAL: Calling mem event callback 'spdk:(nil)' 00:05:25.412 EAL: request: mp_malloc_sync 00:05:25.412 EAL: No shared files mode enabled, IPC is disabled 00:05:25.412 EAL: Heap on socket 0 was expanded by 10MB 00:05:25.412 EAL: Calling mem event callback 'spdk:(nil)' 00:05:25.412 EAL: request: mp_malloc_sync 00:05:25.412 EAL: No shared files mode enabled, IPC is disabled 00:05:25.412 EAL: Heap on socket 0 was shrunk by 10MB 00:05:25.412 EAL: Trying to obtain current memory policy. 00:05:25.412 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:25.412 EAL: Restoring previous memory policy: 4 00:05:25.412 EAL: Calling mem event callback 'spdk:(nil)' 00:05:25.412 EAL: request: mp_malloc_sync 00:05:25.412 EAL: No shared files mode enabled, IPC is disabled 00:05:25.412 EAL: Heap on socket 0 was expanded by 18MB 00:05:25.412 EAL: Calling mem event callback 'spdk:(nil)' 00:05:25.412 EAL: request: mp_malloc_sync 00:05:25.412 EAL: No shared files mode enabled, IPC is disabled 00:05:25.412 EAL: Heap on socket 0 was shrunk by 18MB 00:05:25.412 EAL: Trying to obtain current memory policy. 00:05:25.412 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:25.412 EAL: Restoring previous memory policy: 4 00:05:25.412 EAL: Calling mem event callback 'spdk:(nil)' 00:05:25.412 EAL: request: mp_malloc_sync 00:05:25.412 EAL: No shared files mode enabled, IPC is disabled 00:05:25.412 EAL: Heap on socket 0 was expanded by 34MB 00:05:25.412 EAL: Calling mem event callback 'spdk:(nil)' 00:05:25.412 EAL: request: mp_malloc_sync 00:05:25.412 EAL: No shared files mode enabled, IPC is disabled 00:05:25.412 EAL: Heap on socket 0 was shrunk by 34MB 00:05:25.412 EAL: Trying to obtain current memory policy. 00:05:25.412 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:25.412 EAL: Restoring previous memory policy: 4 00:05:25.412 EAL: Calling mem event callback 'spdk:(nil)' 00:05:25.412 EAL: request: mp_malloc_sync 00:05:25.412 EAL: No shared files mode enabled, IPC is disabled 00:05:25.412 EAL: Heap on socket 0 was expanded by 66MB 00:05:25.412 EAL: Calling mem event callback 'spdk:(nil)' 00:05:25.670 EAL: request: mp_malloc_sync 00:05:25.670 EAL: No shared files mode enabled, IPC is disabled 00:05:25.670 EAL: Heap on socket 0 was shrunk by 66MB 00:05:25.670 EAL: Trying to obtain current memory policy. 
00:05:25.670 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:25.670 EAL: Restoring previous memory policy: 4 00:05:25.670 EAL: Calling mem event callback 'spdk:(nil)' 00:05:25.670 EAL: request: mp_malloc_sync 00:05:25.670 EAL: No shared files mode enabled, IPC is disabled 00:05:25.670 EAL: Heap on socket 0 was expanded by 130MB 00:05:25.670 EAL: Calling mem event callback 'spdk:(nil)' 00:05:25.670 EAL: request: mp_malloc_sync 00:05:25.670 EAL: No shared files mode enabled, IPC is disabled 00:05:25.670 EAL: Heap on socket 0 was shrunk by 130MB 00:05:25.670 EAL: Trying to obtain current memory policy. 00:05:25.670 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:25.670 EAL: Restoring previous memory policy: 4 00:05:25.670 EAL: Calling mem event callback 'spdk:(nil)' 00:05:25.670 EAL: request: mp_malloc_sync 00:05:25.670 EAL: No shared files mode enabled, IPC is disabled 00:05:25.670 EAL: Heap on socket 0 was expanded by 258MB 00:05:25.670 EAL: Calling mem event callback 'spdk:(nil)' 00:05:25.928 EAL: request: mp_malloc_sync 00:05:25.928 EAL: No shared files mode enabled, IPC is disabled 00:05:25.928 EAL: Heap on socket 0 was shrunk by 258MB 00:05:25.928 EAL: Trying to obtain current memory policy. 00:05:25.928 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:25.928 EAL: Restoring previous memory policy: 4 00:05:25.928 EAL: Calling mem event callback 'spdk:(nil)' 00:05:25.928 EAL: request: mp_malloc_sync 00:05:25.928 EAL: No shared files mode enabled, IPC is disabled 00:05:25.928 EAL: Heap on socket 0 was expanded by 514MB 00:05:26.186 EAL: Calling mem event callback 'spdk:(nil)' 00:05:26.186 EAL: request: mp_malloc_sync 00:05:26.186 EAL: No shared files mode enabled, IPC is disabled 00:05:26.186 EAL: Heap on socket 0 was shrunk by 514MB 00:05:26.186 EAL: Trying to obtain current memory policy. 
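Pausing the trace between iterations: each "expanded by"/"shrunk by" pair above is the EAL growing and releasing its heap out of the 2 MB hugepage pool reserved earlier in the run. A shell-side way to watch that pool while vtophys executes, offered as a sketch rather than part of the test (standard kernel sysfs paths; node IDs match this two-socket host):
  for node in 0 1; do
    grep -H . /sys/devices/system/node/node$node/hugepages/hugepages-2048kB/{nr,free}_hugepages
  done
  grep -i huge /proc/meminfo   # the global hugepage counters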
00:05:26.186 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:26.444 EAL: Restoring previous memory policy: 4 00:05:26.444 EAL: Calling mem event callback 'spdk:(nil)' 00:05:26.444 EAL: request: mp_malloc_sync 00:05:26.444 EAL: No shared files mode enabled, IPC is disabled 00:05:26.444 EAL: Heap on socket 0 was expanded by 1026MB 00:05:26.701 EAL: Calling mem event callback 'spdk:(nil)' 00:05:26.960 EAL: request: mp_malloc_sync 00:05:26.960 EAL: No shared files mode enabled, IPC is disabled 00:05:26.960 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:26.960 passed 00:05:26.960 00:05:26.960 Run Summary: Type Total Ran Passed Failed Inactive 00:05:26.960 suites 1 1 n/a 0 0 00:05:26.960 tests 2 2 2 0 0 00:05:26.960 asserts 497 497 497 0 n/a 00:05:26.960 00:05:26.960 Elapsed time = 1.465 seconds 00:05:26.960 EAL: Calling mem event callback 'spdk:(nil)' 00:05:26.960 EAL: request: mp_malloc_sync 00:05:26.960 EAL: No shared files mode enabled, IPC is disabled 00:05:26.960 EAL: Heap on socket 0 was shrunk by 2MB 00:05:26.960 EAL: No shared files mode enabled, IPC is disabled 00:05:26.960 EAL: No shared files mode enabled, IPC is disabled 00:05:26.960 EAL: No shared files mode enabled, IPC is disabled 00:05:26.960 00:05:26.960 real 0m1.595s 00:05:26.960 user 0m0.925s 00:05:26.960 sys 0m0.633s 00:05:26.960 09:54:12 env.env_vtophys -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:26.960 09:54:12 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:26.960 ************************************ 00:05:26.960 END TEST env_vtophys 00:05:26.960 ************************************ 00:05:26.960 09:54:12 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:05:26.960 09:54:12 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:26.960 09:54:12 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:26.960 09:54:12 env -- common/autotest_common.sh@10 -- # set +x 00:05:26.960 ************************************ 00:05:26.960 START TEST env_pci 00:05:26.960 ************************************ 00:05:26.960 09:54:12 env.env_pci -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:05:26.960 00:05:26.960 00:05:26.960 CUnit - A unit testing framework for C - Version 2.1-3 00:05:26.960 http://cunit.sourceforge.net/ 00:05:26.960 00:05:26.960 00:05:26.960 Suite: pci 00:05:26.961 Test: pci_hook ...[2024-07-25 09:54:12.111462] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 310109 has claimed it 00:05:27.219 EAL: Cannot find device (10000:00:01.0) 00:05:27.219 EAL: Failed to attach device on primary process 00:05:27.219 passed 00:05:27.219 00:05:27.219 Run Summary: Type Total Ran Passed Failed Inactive 00:05:27.219 suites 1 1 n/a 0 0 00:05:27.219 tests 1 1 1 0 0 00:05:27.219 asserts 25 25 25 0 n/a 00:05:27.219 00:05:27.219 Elapsed time = 0.023 seconds 00:05:27.219 00:05:27.219 real 0m0.040s 00:05:27.219 user 0m0.009s 00:05:27.219 sys 0m0.031s 00:05:27.219 09:54:12 env.env_pci -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:27.219 09:54:12 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:27.219 ************************************ 00:05:27.219 END TEST env_pci 00:05:27.219 ************************************ 00:05:27.219 09:54:12 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:27.219 
09:54:12 env -- env/env.sh@15 -- # uname 00:05:27.219 09:54:12 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:27.219 09:54:12 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:27.219 09:54:12 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:27.219 09:54:12 env -- common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:05:27.219 09:54:12 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:27.219 09:54:12 env -- common/autotest_common.sh@10 -- # set +x 00:05:27.219 ************************************ 00:05:27.219 START TEST env_dpdk_post_init 00:05:27.219 ************************************ 00:05:27.219 09:54:12 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:27.219 EAL: Detected CPU lcores: 48 00:05:27.219 EAL: Detected NUMA nodes: 2 00:05:27.219 EAL: Detected shared linkage of DPDK 00:05:27.219 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:27.219 EAL: Selected IOVA mode 'VA' 00:05:27.219 EAL: No free 2048 kB hugepages reported on node 1 00:05:27.219 EAL: VFIO support initialized 00:05:27.219 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:27.219 EAL: Using IOMMU type 1 (Type 1) 00:05:27.219 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:00:04.0 (socket 0) 00:05:27.219 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:00:04.1 (socket 0) 00:05:27.219 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:00:04.2 (socket 0) 00:05:27.219 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:00:04.3 (socket 0) 00:05:27.219 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:00:04.4 (socket 0) 00:05:27.219 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:00:04.5 (socket 0) 00:05:27.479 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:00:04.6 (socket 0) 00:05:27.479 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:00:04.7 (socket 0) 00:05:27.479 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:80:04.0 (socket 1) 00:05:27.479 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:80:04.1 (socket 1) 00:05:27.479 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:80:04.2 (socket 1) 00:05:27.479 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:80:04.3 (socket 1) 00:05:27.479 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:80:04.4 (socket 1) 00:05:27.479 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:80:04.5 (socket 1) 00:05:27.479 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:80:04.6 (socket 1) 00:05:27.479 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:80:04.7 (socket 1) 00:05:28.416 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:82:00.0 (socket 1) 00:05:31.694 EAL: Releasing PCI mapped resource for 0000:82:00.0 00:05:31.694 EAL: Calling pci_unmap_resource for 0000:82:00.0 at 0x202001040000 00:05:31.694 Starting DPDK initialization... 00:05:31.694 Starting SPDK post initialization... 00:05:31.694 SPDK NVMe probe 00:05:31.694 Attaching to 0000:82:00.0 00:05:31.694 Attached to 0000:82:00.0 00:05:31.694 Cleaning up... 
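The post-init probe above can be reproduced by hand with the exact binary and flags from the trace, provided the target NVMe device is still bound to vfio-pci (root or equivalent VFIO permissions assumed):
  sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init \
    -c 0x1 --base-virtaddr=0x200000000000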
00:05:31.694 00:05:31.694 real 0m4.422s 00:05:31.694 user 0m3.270s 00:05:31.694 sys 0m0.207s 00:05:31.694 09:54:16 env.env_dpdk_post_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:31.694 09:54:16 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:31.694 ************************************ 00:05:31.694 END TEST env_dpdk_post_init 00:05:31.694 ************************************ 00:05:31.694 09:54:16 env -- env/env.sh@26 -- # uname 00:05:31.694 09:54:16 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:31.694 09:54:16 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:31.694 09:54:16 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:31.694 09:54:16 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:31.694 09:54:16 env -- common/autotest_common.sh@10 -- # set +x 00:05:31.694 ************************************ 00:05:31.694 START TEST env_mem_callbacks 00:05:31.694 ************************************ 00:05:31.694 09:54:16 env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:31.694 EAL: Detected CPU lcores: 48 00:05:31.694 EAL: Detected NUMA nodes: 2 00:05:31.694 EAL: Detected shared linkage of DPDK 00:05:31.694 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:31.694 EAL: Selected IOVA mode 'VA' 00:05:31.694 EAL: No free 2048 kB hugepages reported on node 1 00:05:31.694 EAL: VFIO support initialized 00:05:31.694 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:31.694 00:05:31.694 00:05:31.694 CUnit - A unit testing framework for C - Version 2.1-3 00:05:31.694 http://cunit.sourceforge.net/ 00:05:31.694 00:05:31.694 00:05:31.694 Suite: memory 00:05:31.694 Test: test ... 
00:05:31.694 register 0x200000200000 2097152 00:05:31.694 malloc 3145728 00:05:31.694 register 0x200000400000 4194304 00:05:31.694 buf 0x200000500000 len 3145728 PASSED 00:05:31.694 malloc 64 00:05:31.694 buf 0x2000004fff40 len 64 PASSED 00:05:31.694 malloc 4194304 00:05:31.694 register 0x200000800000 6291456 00:05:31.694 buf 0x200000a00000 len 4194304 PASSED 00:05:31.694 free 0x200000500000 3145728 00:05:31.694 free 0x2000004fff40 64 00:05:31.694 unregister 0x200000400000 4194304 PASSED 00:05:31.694 free 0x200000a00000 4194304 00:05:31.694 unregister 0x200000800000 6291456 PASSED 00:05:31.694 malloc 8388608 00:05:31.694 register 0x200000400000 10485760 00:05:31.694 buf 0x200000600000 len 8388608 PASSED 00:05:31.694 free 0x200000600000 8388608 00:05:31.694 unregister 0x200000400000 10485760 PASSED 00:05:31.694 passed 00:05:31.694 00:05:31.694 Run Summary: Type Total Ran Passed Failed Inactive 00:05:31.694 suites 1 1 n/a 0 0 00:05:31.694 tests 1 1 1 0 0 00:05:31.694 asserts 15 15 15 0 n/a 00:05:31.694 00:05:31.694 Elapsed time = 0.006 seconds 00:05:31.694 00:05:31.694 real 0m0.095s 00:05:31.694 user 0m0.026s 00:05:31.694 sys 0m0.068s 00:05:31.694 09:54:16 env.env_mem_callbacks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:31.694 09:54:16 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:31.694 ************************************ 00:05:31.694 END TEST env_mem_callbacks 00:05:31.694 ************************************ 00:05:31.694 00:05:31.694 real 0m6.759s 00:05:31.694 user 0m4.594s 00:05:31.694 sys 0m1.201s 00:05:31.694 09:54:16 env -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:31.694 09:54:16 env -- common/autotest_common.sh@10 -- # set +x 00:05:31.694 ************************************ 00:05:31.694 END TEST env 00:05:31.694 ************************************ 00:05:31.694 09:54:16 -- spdk/autotest.sh@169 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:05:31.695 09:54:16 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:31.695 09:54:16 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:31.695 09:54:16 -- common/autotest_common.sh@10 -- # set +x 00:05:31.695 ************************************ 00:05:31.695 START TEST rpc 00:05:31.695 ************************************ 00:05:31.695 09:54:16 rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:05:31.953 * Looking for test storage... 00:05:31.953 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:31.953 09:54:16 rpc -- rpc/rpc.sh@65 -- # spdk_pid=310767 00:05:31.953 09:54:16 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:05:31.953 09:54:16 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:31.953 09:54:16 rpc -- rpc/rpc.sh@67 -- # waitforlisten 310767 00:05:31.953 09:54:16 rpc -- common/autotest_common.sh@831 -- # '[' -z 310767 ']' 00:05:31.953 09:54:16 rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:31.953 09:54:16 rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:31.953 09:54:16 rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:31.953 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
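The "Waiting for process to start up..." message below comes from the waitforlisten helper in autotest_common.sh. Stripped of its retry bookkeeping, the idea reduces to polling the RPC socket until spdk_tgt answers; a minimal hand-rolled equivalent (paths relative to the spdk checkout, default socket assumed):
  ./build/bin/spdk_tgt -e bdev &
  until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.2   # keep polling until the UNIX socket accepts RPCs
  done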
00:05:31.953 09:54:16 rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:31.953 09:54:16 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:31.953 [2024-07-25 09:54:16.961189] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:05:31.953 [2024-07-25 09:54:16.961289] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid310767 ] 00:05:31.953 EAL: No free 2048 kB hugepages reported on node 1 00:05:31.953 [2024-07-25 09:54:17.029940] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:32.211 [2024-07-25 09:54:17.152704] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:32.211 [2024-07-25 09:54:17.152766] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 310767' to capture a snapshot of events at runtime. 00:05:32.211 [2024-07-25 09:54:17.152782] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:32.211 [2024-07-25 09:54:17.152796] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:32.211 [2024-07-25 09:54:17.152808] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid310767 for offline analysis/debug. 00:05:32.211 [2024-07-25 09:54:17.152841] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:32.469 09:54:17 rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:32.469 09:54:17 rpc -- common/autotest_common.sh@864 -- # return 0 00:05:32.469 09:54:17 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:32.469 09:54:17 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:32.469 09:54:17 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:32.469 09:54:17 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:32.469 09:54:17 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:32.469 09:54:17 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:32.469 09:54:17 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:32.469 ************************************ 00:05:32.469 START TEST rpc_integrity 00:05:32.469 ************************************ 00:05:32.469 09:54:17 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:05:32.469 09:54:17 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:32.469 09:54:17 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:32.469 09:54:17 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:32.469 09:54:17 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:32.469 09:54:17 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:32.469 09:54:17 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:32.469 09:54:17 
rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:32.469 09:54:17 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:32.469 09:54:17 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:32.469 09:54:17 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:32.469 09:54:17 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:32.469 09:54:17 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:32.469 09:54:17 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:32.469 09:54:17 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:32.469 09:54:17 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:32.469 09:54:17 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:32.469 09:54:17 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:32.469 { 00:05:32.469 "name": "Malloc0", 00:05:32.469 "aliases": [ 00:05:32.469 "557caacb-0695-4f53-b9e4-bc20857be5ce" 00:05:32.469 ], 00:05:32.469 "product_name": "Malloc disk", 00:05:32.469 "block_size": 512, 00:05:32.469 "num_blocks": 16384, 00:05:32.469 "uuid": "557caacb-0695-4f53-b9e4-bc20857be5ce", 00:05:32.469 "assigned_rate_limits": { 00:05:32.470 "rw_ios_per_sec": 0, 00:05:32.470 "rw_mbytes_per_sec": 0, 00:05:32.470 "r_mbytes_per_sec": 0, 00:05:32.470 "w_mbytes_per_sec": 0 00:05:32.470 }, 00:05:32.470 "claimed": false, 00:05:32.470 "zoned": false, 00:05:32.470 "supported_io_types": { 00:05:32.470 "read": true, 00:05:32.470 "write": true, 00:05:32.470 "unmap": true, 00:05:32.470 "flush": true, 00:05:32.470 "reset": true, 00:05:32.470 "nvme_admin": false, 00:05:32.470 "nvme_io": false, 00:05:32.470 "nvme_io_md": false, 00:05:32.470 "write_zeroes": true, 00:05:32.470 "zcopy": true, 00:05:32.470 "get_zone_info": false, 00:05:32.470 "zone_management": false, 00:05:32.470 "zone_append": false, 00:05:32.470 "compare": false, 00:05:32.470 "compare_and_write": false, 00:05:32.470 "abort": true, 00:05:32.470 "seek_hole": false, 00:05:32.470 "seek_data": false, 00:05:32.470 "copy": true, 00:05:32.470 "nvme_iov_md": false 00:05:32.470 }, 00:05:32.470 "memory_domains": [ 00:05:32.470 { 00:05:32.470 "dma_device_id": "system", 00:05:32.470 "dma_device_type": 1 00:05:32.470 }, 00:05:32.470 { 00:05:32.470 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:32.470 "dma_device_type": 2 00:05:32.470 } 00:05:32.470 ], 00:05:32.470 "driver_specific": {} 00:05:32.470 } 00:05:32.470 ]' 00:05:32.470 09:54:17 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:32.470 09:54:17 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:32.470 09:54:17 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:32.470 09:54:17 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:32.470 09:54:17 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:32.470 [2024-07-25 09:54:17.573844] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:32.470 [2024-07-25 09:54:17.573893] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:32.470 [2024-07-25 09:54:17.573918] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xd293e0 00:05:32.470 [2024-07-25 09:54:17.573933] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:32.470 [2024-07-25 09:54:17.575381] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 
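The rpc_integrity exchange above can be replayed by hand; the following is an illustrative transcript against the default socket, not the test script itself. bdev_malloc_create takes a size in MiB and a block size, so 8 and 512 yield exactly the 16384 num_blocks reported in the dump:

  ./scripts/rpc.py bdev_get_bdevs | jq length          # 0 on a fresh target
  malloc=$(./scripts/rpc.py bdev_malloc_create 8 512)  # prints the new name, e.g. Malloc0
  ./scripts/rpc.py bdev_passthru_create -b "$malloc" -p Passthru0
  ./scripts/rpc.py bdev_get_bdevs | jq length          # 2: the malloc plus its passthru
  ./scripts/rpc.py bdev_passthru_delete Passthru0      # teardown mirrors setup
  ./scripts/rpc.py bdev_malloc_delete "$malloc"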
00:05:32.470 [2024-07-25 09:54:17.575409] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:32.470 Passthru0 00:05:32.470 09:54:17 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:32.470 09:54:17 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:32.470 09:54:17 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:32.470 09:54:17 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:32.470 09:54:17 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:32.470 09:54:17 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:32.470 { 00:05:32.470 "name": "Malloc0", 00:05:32.470 "aliases": [ 00:05:32.470 "557caacb-0695-4f53-b9e4-bc20857be5ce" 00:05:32.470 ], 00:05:32.470 "product_name": "Malloc disk", 00:05:32.470 "block_size": 512, 00:05:32.470 "num_blocks": 16384, 00:05:32.470 "uuid": "557caacb-0695-4f53-b9e4-bc20857be5ce", 00:05:32.470 "assigned_rate_limits": { 00:05:32.470 "rw_ios_per_sec": 0, 00:05:32.470 "rw_mbytes_per_sec": 0, 00:05:32.470 "r_mbytes_per_sec": 0, 00:05:32.470 "w_mbytes_per_sec": 0 00:05:32.470 }, 00:05:32.470 "claimed": true, 00:05:32.470 "claim_type": "exclusive_write", 00:05:32.470 "zoned": false, 00:05:32.470 "supported_io_types": { 00:05:32.470 "read": true, 00:05:32.470 "write": true, 00:05:32.470 "unmap": true, 00:05:32.470 "flush": true, 00:05:32.470 "reset": true, 00:05:32.470 "nvme_admin": false, 00:05:32.470 "nvme_io": false, 00:05:32.470 "nvme_io_md": false, 00:05:32.470 "write_zeroes": true, 00:05:32.470 "zcopy": true, 00:05:32.470 "get_zone_info": false, 00:05:32.470 "zone_management": false, 00:05:32.470 "zone_append": false, 00:05:32.470 "compare": false, 00:05:32.470 "compare_and_write": false, 00:05:32.470 "abort": true, 00:05:32.470 "seek_hole": false, 00:05:32.470 "seek_data": false, 00:05:32.470 "copy": true, 00:05:32.470 "nvme_iov_md": false 00:05:32.470 }, 00:05:32.470 "memory_domains": [ 00:05:32.470 { 00:05:32.470 "dma_device_id": "system", 00:05:32.470 "dma_device_type": 1 00:05:32.470 }, 00:05:32.470 { 00:05:32.470 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:32.470 "dma_device_type": 2 00:05:32.470 } 00:05:32.470 ], 00:05:32.470 "driver_specific": {} 00:05:32.470 }, 00:05:32.470 { 00:05:32.470 "name": "Passthru0", 00:05:32.470 "aliases": [ 00:05:32.470 "3592cef8-b2fd-58ce-9e16-453e84691802" 00:05:32.470 ], 00:05:32.470 "product_name": "passthru", 00:05:32.470 "block_size": 512, 00:05:32.470 "num_blocks": 16384, 00:05:32.470 "uuid": "3592cef8-b2fd-58ce-9e16-453e84691802", 00:05:32.470 "assigned_rate_limits": { 00:05:32.470 "rw_ios_per_sec": 0, 00:05:32.470 "rw_mbytes_per_sec": 0, 00:05:32.470 "r_mbytes_per_sec": 0, 00:05:32.470 "w_mbytes_per_sec": 0 00:05:32.470 }, 00:05:32.470 "claimed": false, 00:05:32.470 "zoned": false, 00:05:32.470 "supported_io_types": { 00:05:32.470 "read": true, 00:05:32.470 "write": true, 00:05:32.470 "unmap": true, 00:05:32.470 "flush": true, 00:05:32.470 "reset": true, 00:05:32.470 "nvme_admin": false, 00:05:32.470 "nvme_io": false, 00:05:32.470 "nvme_io_md": false, 00:05:32.470 "write_zeroes": true, 00:05:32.470 "zcopy": true, 00:05:32.470 "get_zone_info": false, 00:05:32.470 "zone_management": false, 00:05:32.470 "zone_append": false, 00:05:32.470 "compare": false, 00:05:32.470 "compare_and_write": false, 00:05:32.470 "abort": true, 00:05:32.470 "seek_hole": false, 00:05:32.470 "seek_data": false, 00:05:32.470 "copy": true, 00:05:32.470 "nvme_iov_md": false 00:05:32.470 
}, 00:05:32.470 "memory_domains": [ 00:05:32.470 { 00:05:32.470 "dma_device_id": "system", 00:05:32.470 "dma_device_type": 1 00:05:32.470 }, 00:05:32.470 { 00:05:32.470 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:32.470 "dma_device_type": 2 00:05:32.470 } 00:05:32.470 ], 00:05:32.470 "driver_specific": { 00:05:32.470 "passthru": { 00:05:32.470 "name": "Passthru0", 00:05:32.470 "base_bdev_name": "Malloc0" 00:05:32.470 } 00:05:32.470 } 00:05:32.470 } 00:05:32.470 ]' 00:05:32.470 09:54:17 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:32.470 09:54:17 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:32.470 09:54:17 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:32.470 09:54:17 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:32.470 09:54:17 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:32.728 09:54:17 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:32.728 09:54:17 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:32.728 09:54:17 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:32.728 09:54:17 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:32.728 09:54:17 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:32.728 09:54:17 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:32.728 09:54:17 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:32.728 09:54:17 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:32.728 09:54:17 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:32.728 09:54:17 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:32.728 09:54:17 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:32.728 09:54:17 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:32.728 00:05:32.728 real 0m0.238s 00:05:32.728 user 0m0.158s 00:05:32.728 sys 0m0.024s 00:05:32.728 09:54:17 rpc.rpc_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:32.728 09:54:17 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:32.728 ************************************ 00:05:32.728 END TEST rpc_integrity 00:05:32.728 ************************************ 00:05:32.728 09:54:17 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:32.728 09:54:17 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:32.728 09:54:17 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:32.728 09:54:17 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:32.728 ************************************ 00:05:32.728 START TEST rpc_plugins 00:05:32.728 ************************************ 00:05:32.728 09:54:17 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # rpc_plugins 00:05:32.728 09:54:17 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:32.728 09:54:17 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:32.728 09:54:17 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:32.728 09:54:17 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:32.728 09:54:17 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:32.728 09:54:17 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:32.728 09:54:17 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:32.728 09:54:17 rpc.rpc_plugins -- 
common/autotest_common.sh@10 -- # set +x 00:05:32.728 09:54:17 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:32.728 09:54:17 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:32.728 { 00:05:32.728 "name": "Malloc1", 00:05:32.728 "aliases": [ 00:05:32.728 "fe6f037c-e904-4896-8a6b-3e9c868d1391" 00:05:32.728 ], 00:05:32.728 "product_name": "Malloc disk", 00:05:32.728 "block_size": 4096, 00:05:32.728 "num_blocks": 256, 00:05:32.728 "uuid": "fe6f037c-e904-4896-8a6b-3e9c868d1391", 00:05:32.728 "assigned_rate_limits": { 00:05:32.728 "rw_ios_per_sec": 0, 00:05:32.728 "rw_mbytes_per_sec": 0, 00:05:32.728 "r_mbytes_per_sec": 0, 00:05:32.728 "w_mbytes_per_sec": 0 00:05:32.728 }, 00:05:32.728 "claimed": false, 00:05:32.728 "zoned": false, 00:05:32.728 "supported_io_types": { 00:05:32.728 "read": true, 00:05:32.728 "write": true, 00:05:32.728 "unmap": true, 00:05:32.728 "flush": true, 00:05:32.728 "reset": true, 00:05:32.728 "nvme_admin": false, 00:05:32.728 "nvme_io": false, 00:05:32.728 "nvme_io_md": false, 00:05:32.728 "write_zeroes": true, 00:05:32.728 "zcopy": true, 00:05:32.728 "get_zone_info": false, 00:05:32.728 "zone_management": false, 00:05:32.728 "zone_append": false, 00:05:32.728 "compare": false, 00:05:32.728 "compare_and_write": false, 00:05:32.728 "abort": true, 00:05:32.728 "seek_hole": false, 00:05:32.728 "seek_data": false, 00:05:32.728 "copy": true, 00:05:32.728 "nvme_iov_md": false 00:05:32.728 }, 00:05:32.728 "memory_domains": [ 00:05:32.728 { 00:05:32.728 "dma_device_id": "system", 00:05:32.728 "dma_device_type": 1 00:05:32.728 }, 00:05:32.728 { 00:05:32.728 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:32.728 "dma_device_type": 2 00:05:32.728 } 00:05:32.728 ], 00:05:32.728 "driver_specific": {} 00:05:32.728 } 00:05:32.728 ]' 00:05:32.728 09:54:17 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:32.729 09:54:17 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:32.729 09:54:17 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:32.729 09:54:17 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:32.729 09:54:17 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:32.729 09:54:17 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:32.729 09:54:17 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:32.729 09:54:17 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:32.729 09:54:17 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:32.729 09:54:17 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:32.729 09:54:17 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:32.729 09:54:17 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:32.729 09:54:17 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:32.729 00:05:32.729 real 0m0.118s 00:05:32.729 user 0m0.081s 00:05:32.729 sys 0m0.010s 00:05:32.729 09:54:17 rpc.rpc_plugins -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:32.729 09:54:17 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:32.729 ************************************ 00:05:32.729 END TEST rpc_plugins 00:05:32.729 ************************************ 00:05:32.729 09:54:17 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:32.729 09:54:17 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:32.729 09:54:17 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:32.729 09:54:17 
rpc -- common/autotest_common.sh@10 -- # set +x 00:05:32.987 ************************************ 00:05:32.987 START TEST rpc_trace_cmd_test 00:05:32.987 ************************************ 00:05:32.987 09:54:17 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1125 -- # rpc_trace_cmd_test 00:05:32.987 09:54:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:32.987 09:54:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:32.987 09:54:17 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:32.987 09:54:17 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:32.987 09:54:17 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:32.987 09:54:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:32.987 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid310767", 00:05:32.987 "tpoint_group_mask": "0x8", 00:05:32.987 "iscsi_conn": { 00:05:32.987 "mask": "0x2", 00:05:32.987 "tpoint_mask": "0x0" 00:05:32.987 }, 00:05:32.987 "scsi": { 00:05:32.987 "mask": "0x4", 00:05:32.987 "tpoint_mask": "0x0" 00:05:32.987 }, 00:05:32.987 "bdev": { 00:05:32.987 "mask": "0x8", 00:05:32.987 "tpoint_mask": "0xffffffffffffffff" 00:05:32.987 }, 00:05:32.987 "nvmf_rdma": { 00:05:32.987 "mask": "0x10", 00:05:32.987 "tpoint_mask": "0x0" 00:05:32.987 }, 00:05:32.987 "nvmf_tcp": { 00:05:32.987 "mask": "0x20", 00:05:32.987 "tpoint_mask": "0x0" 00:05:32.987 }, 00:05:32.987 "ftl": { 00:05:32.987 "mask": "0x40", 00:05:32.987 "tpoint_mask": "0x0" 00:05:32.987 }, 00:05:32.987 "blobfs": { 00:05:32.987 "mask": "0x80", 00:05:32.987 "tpoint_mask": "0x0" 00:05:32.987 }, 00:05:32.987 "dsa": { 00:05:32.987 "mask": "0x200", 00:05:32.987 "tpoint_mask": "0x0" 00:05:32.987 }, 00:05:32.987 "thread": { 00:05:32.987 "mask": "0x400", 00:05:32.987 "tpoint_mask": "0x0" 00:05:32.987 }, 00:05:32.987 "nvme_pcie": { 00:05:32.987 "mask": "0x800", 00:05:32.987 "tpoint_mask": "0x0" 00:05:32.987 }, 00:05:32.987 "iaa": { 00:05:32.987 "mask": "0x1000", 00:05:32.987 "tpoint_mask": "0x0" 00:05:32.987 }, 00:05:32.987 "nvme_tcp": { 00:05:32.987 "mask": "0x2000", 00:05:32.987 "tpoint_mask": "0x0" 00:05:32.987 }, 00:05:32.987 "bdev_nvme": { 00:05:32.987 "mask": "0x4000", 00:05:32.987 "tpoint_mask": "0x0" 00:05:32.987 }, 00:05:32.987 "sock": { 00:05:32.987 "mask": "0x8000", 00:05:32.987 "tpoint_mask": "0x0" 00:05:32.987 } 00:05:32.987 }' 00:05:32.987 09:54:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:32.987 09:54:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:05:32.987 09:54:17 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:32.987 09:54:18 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:32.987 09:54:18 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:32.987 09:54:18 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:32.987 09:54:18 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:33.246 09:54:18 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:33.246 09:54:18 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:33.246 09:54:18 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:33.246 00:05:33.246 real 0m0.280s 00:05:33.246 user 0m0.252s 00:05:33.246 sys 0m0.018s 00:05:33.246 09:54:18 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:33.246 09:54:18 rpc.rpc_trace_cmd_test 
-- common/autotest_common.sh@10 -- # set +x 00:05:33.246 ************************************ 00:05:33.246 END TEST rpc_trace_cmd_test 00:05:33.246 ************************************ 00:05:33.246 09:54:18 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:33.246 09:54:18 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:33.246 09:54:18 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:33.246 09:54:18 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:33.246 09:54:18 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:33.246 09:54:18 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:33.246 ************************************ 00:05:33.246 START TEST rpc_daemon_integrity 00:05:33.246 ************************************ 00:05:33.246 09:54:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:05:33.246 09:54:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:33.246 09:54:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:33.246 09:54:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:33.246 09:54:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:33.246 09:54:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:33.246 09:54:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:33.246 09:54:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:33.246 09:54:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:33.246 09:54:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:33.246 09:54:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:33.246 09:54:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:33.246 09:54:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:33.246 09:54:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:33.246 09:54:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:33.246 09:54:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:33.246 09:54:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:33.246 09:54:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:33.246 { 00:05:33.246 "name": "Malloc2", 00:05:33.246 "aliases": [ 00:05:33.246 "f2e9effc-d8ed-41ed-8bd5-86f4bf153b5d" 00:05:33.246 ], 00:05:33.246 "product_name": "Malloc disk", 00:05:33.246 "block_size": 512, 00:05:33.246 "num_blocks": 16384, 00:05:33.246 "uuid": "f2e9effc-d8ed-41ed-8bd5-86f4bf153b5d", 00:05:33.246 "assigned_rate_limits": { 00:05:33.246 "rw_ios_per_sec": 0, 00:05:33.246 "rw_mbytes_per_sec": 0, 00:05:33.246 "r_mbytes_per_sec": 0, 00:05:33.246 "w_mbytes_per_sec": 0 00:05:33.246 }, 00:05:33.246 "claimed": false, 00:05:33.246 "zoned": false, 00:05:33.246 "supported_io_types": { 00:05:33.246 "read": true, 00:05:33.246 "write": true, 00:05:33.246 "unmap": true, 00:05:33.246 "flush": true, 00:05:33.246 "reset": true, 00:05:33.246 "nvme_admin": false, 00:05:33.246 "nvme_io": false, 00:05:33.246 "nvme_io_md": false, 00:05:33.246 "write_zeroes": true, 00:05:33.246 "zcopy": true, 00:05:33.246 "get_zone_info": false, 00:05:33.246 "zone_management": false, 00:05:33.246 "zone_append": false, 00:05:33.246 "compare": false, 00:05:33.246 "compare_and_write": false, 00:05:33.246 "abort": true, 
00:05:33.246 "seek_hole": false, 00:05:33.246 "seek_data": false, 00:05:33.246 "copy": true, 00:05:33.246 "nvme_iov_md": false 00:05:33.246 }, 00:05:33.246 "memory_domains": [ 00:05:33.246 { 00:05:33.246 "dma_device_id": "system", 00:05:33.246 "dma_device_type": 1 00:05:33.246 }, 00:05:33.246 { 00:05:33.246 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:33.246 "dma_device_type": 2 00:05:33.246 } 00:05:33.246 ], 00:05:33.246 "driver_specific": {} 00:05:33.246 } 00:05:33.246 ]' 00:05:33.246 09:54:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:33.246 09:54:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:33.246 09:54:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:33.246 09:54:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:33.246 09:54:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:33.246 [2024-07-25 09:54:18.356097] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:33.246 [2024-07-25 09:54:18.356146] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:33.246 [2024-07-25 09:54:18.356174] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xdc72f0 00:05:33.246 [2024-07-25 09:54:18.356197] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:33.246 [2024-07-25 09:54:18.357541] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:33.246 [2024-07-25 09:54:18.357568] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:33.246 Passthru0 00:05:33.246 09:54:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:33.246 09:54:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:33.246 09:54:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:33.246 09:54:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:33.246 09:54:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:33.246 09:54:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:33.246 { 00:05:33.246 "name": "Malloc2", 00:05:33.246 "aliases": [ 00:05:33.246 "f2e9effc-d8ed-41ed-8bd5-86f4bf153b5d" 00:05:33.246 ], 00:05:33.246 "product_name": "Malloc disk", 00:05:33.246 "block_size": 512, 00:05:33.246 "num_blocks": 16384, 00:05:33.246 "uuid": "f2e9effc-d8ed-41ed-8bd5-86f4bf153b5d", 00:05:33.246 "assigned_rate_limits": { 00:05:33.246 "rw_ios_per_sec": 0, 00:05:33.246 "rw_mbytes_per_sec": 0, 00:05:33.246 "r_mbytes_per_sec": 0, 00:05:33.246 "w_mbytes_per_sec": 0 00:05:33.246 }, 00:05:33.246 "claimed": true, 00:05:33.246 "claim_type": "exclusive_write", 00:05:33.246 "zoned": false, 00:05:33.246 "supported_io_types": { 00:05:33.246 "read": true, 00:05:33.246 "write": true, 00:05:33.246 "unmap": true, 00:05:33.246 "flush": true, 00:05:33.246 "reset": true, 00:05:33.246 "nvme_admin": false, 00:05:33.246 "nvme_io": false, 00:05:33.246 "nvme_io_md": false, 00:05:33.246 "write_zeroes": true, 00:05:33.246 "zcopy": true, 00:05:33.246 "get_zone_info": false, 00:05:33.246 "zone_management": false, 00:05:33.246 "zone_append": false, 00:05:33.246 "compare": false, 00:05:33.246 "compare_and_write": false, 00:05:33.246 "abort": true, 00:05:33.246 "seek_hole": false, 00:05:33.246 "seek_data": false, 00:05:33.246 "copy": true, 00:05:33.246 "nvme_iov_md": false 
00:05:33.246 }, 00:05:33.246 "memory_domains": [ 00:05:33.246 { 00:05:33.246 "dma_device_id": "system", 00:05:33.246 "dma_device_type": 1 00:05:33.246 }, 00:05:33.246 { 00:05:33.246 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:33.246 "dma_device_type": 2 00:05:33.246 } 00:05:33.246 ], 00:05:33.246 "driver_specific": {} 00:05:33.246 }, 00:05:33.246 { 00:05:33.246 "name": "Passthru0", 00:05:33.246 "aliases": [ 00:05:33.246 "22cd597e-8b4f-5d9c-9fa8-9da947999ab7" 00:05:33.246 ], 00:05:33.246 "product_name": "passthru", 00:05:33.246 "block_size": 512, 00:05:33.246 "num_blocks": 16384, 00:05:33.246 "uuid": "22cd597e-8b4f-5d9c-9fa8-9da947999ab7", 00:05:33.246 "assigned_rate_limits": { 00:05:33.246 "rw_ios_per_sec": 0, 00:05:33.246 "rw_mbytes_per_sec": 0, 00:05:33.246 "r_mbytes_per_sec": 0, 00:05:33.246 "w_mbytes_per_sec": 0 00:05:33.246 }, 00:05:33.246 "claimed": false, 00:05:33.246 "zoned": false, 00:05:33.246 "supported_io_types": { 00:05:33.246 "read": true, 00:05:33.246 "write": true, 00:05:33.246 "unmap": true, 00:05:33.246 "flush": true, 00:05:33.246 "reset": true, 00:05:33.246 "nvme_admin": false, 00:05:33.246 "nvme_io": false, 00:05:33.246 "nvme_io_md": false, 00:05:33.246 "write_zeroes": true, 00:05:33.246 "zcopy": true, 00:05:33.246 "get_zone_info": false, 00:05:33.246 "zone_management": false, 00:05:33.246 "zone_append": false, 00:05:33.246 "compare": false, 00:05:33.246 "compare_and_write": false, 00:05:33.246 "abort": true, 00:05:33.246 "seek_hole": false, 00:05:33.246 "seek_data": false, 00:05:33.246 "copy": true, 00:05:33.246 "nvme_iov_md": false 00:05:33.246 }, 00:05:33.246 "memory_domains": [ 00:05:33.246 { 00:05:33.246 "dma_device_id": "system", 00:05:33.246 "dma_device_type": 1 00:05:33.246 }, 00:05:33.246 { 00:05:33.246 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:33.246 "dma_device_type": 2 00:05:33.246 } 00:05:33.246 ], 00:05:33.246 "driver_specific": { 00:05:33.246 "passthru": { 00:05:33.246 "name": "Passthru0", 00:05:33.246 "base_bdev_name": "Malloc2" 00:05:33.246 } 00:05:33.246 } 00:05:33.246 } 00:05:33.246 ]' 00:05:33.246 09:54:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:33.504 09:54:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:33.504 09:54:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:33.504 09:54:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:33.504 09:54:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:33.504 09:54:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:33.504 09:54:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:33.504 09:54:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:33.504 09:54:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:33.504 09:54:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:33.504 09:54:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:33.504 09:54:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:33.504 09:54:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:33.504 09:54:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:33.504 09:54:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:33.504 09:54:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # 
jq length 00:05:33.504 09:54:18 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:33.504 00:05:33.504 real 0m0.237s 00:05:33.504 user 0m0.160s 00:05:33.504 sys 0m0.022s 00:05:33.505 09:54:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:33.505 09:54:18 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:33.505 ************************************ 00:05:33.505 END TEST rpc_daemon_integrity 00:05:33.505 ************************************ 00:05:33.505 09:54:18 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:33.505 09:54:18 rpc -- rpc/rpc.sh@84 -- # killprocess 310767 00:05:33.505 09:54:18 rpc -- common/autotest_common.sh@950 -- # '[' -z 310767 ']' 00:05:33.505 09:54:18 rpc -- common/autotest_common.sh@954 -- # kill -0 310767 00:05:33.505 09:54:18 rpc -- common/autotest_common.sh@955 -- # uname 00:05:33.505 09:54:18 rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:33.505 09:54:18 rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 310767 00:05:33.505 09:54:18 rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:33.505 09:54:18 rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:33.505 09:54:18 rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 310767' 00:05:33.505 killing process with pid 310767 00:05:33.505 09:54:18 rpc -- common/autotest_common.sh@969 -- # kill 310767 00:05:33.505 09:54:18 rpc -- common/autotest_common.sh@974 -- # wait 310767 00:05:34.070 00:05:34.070 real 0m2.161s 00:05:34.070 user 0m2.782s 00:05:34.070 sys 0m0.613s 00:05:34.070 09:54:19 rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:34.070 09:54:19 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:34.070 ************************************ 00:05:34.070 END TEST rpc 00:05:34.070 ************************************ 00:05:34.070 09:54:19 -- spdk/autotest.sh@170 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:34.070 09:54:19 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:34.070 09:54:19 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:34.070 09:54:19 -- common/autotest_common.sh@10 -- # set +x 00:05:34.070 ************************************ 00:05:34.070 START TEST skip_rpc 00:05:34.070 ************************************ 00:05:34.070 09:54:19 skip_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:34.070 * Looking for test storage... 
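Between suites the harness tears the target down with killprocess; the probes above (kill -0, uname, ps --no-headers) hint at its shape. A stripped-down sketch under the assumption of a plain, non-root process — the real autotest_common.sh version also handles sudo-owned targets:

  killprocess() {
      local pid=$1
      kill -0 "$pid" 2>/dev/null || return 0  # nothing to do if it already exited
      kill "$pid"
      wait "$pid" || true                     # reap it so the next test starts clean
  }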
00:05:34.070 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:34.070 09:54:19 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:34.070 09:54:19 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:34.070 09:54:19 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:34.070 09:54:19 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:34.070 09:54:19 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:34.070 09:54:19 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:34.070 ************************************ 00:05:34.070 START TEST skip_rpc 00:05:34.070 ************************************ 00:05:34.070 09:54:19 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # test_skip_rpc 00:05:34.070 09:54:19 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=311204 00:05:34.070 09:54:19 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:34.070 09:54:19 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:34.070 09:54:19 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:34.070 [2024-07-25 09:54:19.222180] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:05:34.070 [2024-07-25 09:54:19.222260] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid311204 ] 00:05:34.329 EAL: No free 2048 kB hugepages reported on node 1 00:05:34.329 [2024-07-25 09:54:19.293364] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:34.329 [2024-07-25 09:54:19.418710] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:39.590 09:54:24 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:39.590 09:54:24 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:05:39.590 09:54:24 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:39.590 09:54:24 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:05:39.590 09:54:24 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:39.590 09:54:24 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:05:39.590 09:54:24 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:39.590 09:54:24 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:05:39.590 09:54:24 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:39.590 09:54:24 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:39.590 09:54:24 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:05:39.590 09:54:24 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:05:39.590 09:54:24 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:39.590 09:54:24 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:39.590 09:54:24 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:39.590 09:54:24 skip_rpc.skip_rpc -- 
rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:39.590 09:54:24 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 311204 00:05:39.590 09:54:24 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # '[' -z 311204 ']' 00:05:39.590 09:54:24 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # kill -0 311204 00:05:39.590 09:54:24 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # uname 00:05:39.590 09:54:24 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:39.590 09:54:24 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 311204 00:05:39.590 09:54:24 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:39.590 09:54:24 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:39.590 09:54:24 skip_rpc.skip_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 311204' 00:05:39.590 killing process with pid 311204 00:05:39.590 09:54:24 skip_rpc.skip_rpc -- common/autotest_common.sh@969 -- # kill 311204 00:05:39.590 09:54:24 skip_rpc.skip_rpc -- common/autotest_common.sh@974 -- # wait 311204 00:05:39.590 00:05:39.590 real 0m5.542s 00:05:39.590 user 0m5.197s 00:05:39.590 sys 0m0.361s 00:05:39.590 09:54:24 skip_rpc.skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:39.590 09:54:24 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:39.590 ************************************ 00:05:39.590 END TEST skip_rpc 00:05:39.590 ************************************ 00:05:39.590 09:54:24 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:39.590 09:54:24 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:39.590 09:54:24 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:39.590 09:54:24 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:39.849 ************************************ 00:05:39.849 START TEST skip_rpc_with_json 00:05:39.849 ************************************ 00:05:39.849 09:54:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_json 00:05:39.849 09:54:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:39.849 09:54:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=311891 00:05:39.849 09:54:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:39.849 09:54:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:39.849 09:54:24 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 311891 00:05:39.849 09:54:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # '[' -z 311891 ']' 00:05:39.849 09:54:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:39.849 09:54:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:39.849 09:54:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:39.849 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
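The skip_rpc case that just passed reduces to: started with --no-rpc-server the target must stay up, and any rpc_cmd has to fail (the NOT wrapper inverts the exit status). A hedged standalone equivalent, with the 5-second settle time taken from the test's own sleep:

  ./build/bin/spdk_tgt --no-rpc-server -m 0x1 &
  pid=$!
  sleep 5                                    # no socket will ever appear, so just settle
  if ./scripts/rpc.py spdk_get_version 2>/dev/null; then
      echo "FAIL: RPC answered although the server was disabled" >&2
  fi
  kill "$pid"; wait "$pid" || true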
00:05:39.849 09:54:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:39.849 09:54:24 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:39.849 [2024-07-25 09:54:24.854758] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:05:39.849 [2024-07-25 09:54:24.854938] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid311891 ] 00:05:39.849 EAL: No free 2048 kB hugepages reported on node 1 00:05:39.849 [2024-07-25 09:54:24.949133] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:40.107 [2024-07-25 09:54:25.075146] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:40.365 09:54:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:40.365 09:54:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # return 0 00:05:40.365 09:54:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:40.365 09:54:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:40.365 09:54:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:40.365 [2024-07-25 09:54:25.351702] nvmf_rpc.c:2569:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:40.365 request: 00:05:40.365 { 00:05:40.365 "trtype": "tcp", 00:05:40.365 "method": "nvmf_get_transports", 00:05:40.365 "req_id": 1 00:05:40.365 } 00:05:40.365 Got JSON-RPC error response 00:05:40.365 response: 00:05:40.365 { 00:05:40.365 "code": -19, 00:05:40.365 "message": "No such device" 00:05:40.365 } 00:05:40.365 09:54:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:05:40.365 09:54:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:40.365 09:54:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:40.365 09:54:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:40.365 [2024-07-25 09:54:25.359830] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:40.365 09:54:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:40.365 09:54:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:40.365 09:54:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:40.366 09:54:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:40.366 09:54:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:40.366 09:54:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:40.366 { 00:05:40.366 "subsystems": [ 00:05:40.366 { 00:05:40.366 "subsystem": "vfio_user_target", 00:05:40.366 "config": null 00:05:40.366 }, 00:05:40.366 { 00:05:40.366 "subsystem": "keyring", 00:05:40.366 "config": [] 00:05:40.366 }, 00:05:40.366 { 00:05:40.366 "subsystem": "iobuf", 00:05:40.366 "config": [ 00:05:40.366 { 00:05:40.366 "method": "iobuf_set_options", 00:05:40.366 "params": { 00:05:40.366 "small_pool_count": 8192, 00:05:40.366 "large_pool_count": 1024, 00:05:40.366 "small_bufsize": 8192, 00:05:40.366 "large_bufsize": 
135168 00:05:40.366 } 00:05:40.366 } 00:05:40.366 ] 00:05:40.366 }, 00:05:40.366 { 00:05:40.366 "subsystem": "sock", 00:05:40.366 "config": [ 00:05:40.366 { 00:05:40.366 "method": "sock_set_default_impl", 00:05:40.366 "params": { 00:05:40.366 "impl_name": "posix" 00:05:40.366 } 00:05:40.366 }, 00:05:40.366 { 00:05:40.366 "method": "sock_impl_set_options", 00:05:40.366 "params": { 00:05:40.366 "impl_name": "ssl", 00:05:40.366 "recv_buf_size": 4096, 00:05:40.366 "send_buf_size": 4096, 00:05:40.366 "enable_recv_pipe": true, 00:05:40.366 "enable_quickack": false, 00:05:40.366 "enable_placement_id": 0, 00:05:40.366 "enable_zerocopy_send_server": true, 00:05:40.366 "enable_zerocopy_send_client": false, 00:05:40.366 "zerocopy_threshold": 0, 00:05:40.366 "tls_version": 0, 00:05:40.366 "enable_ktls": false 00:05:40.366 } 00:05:40.366 }, 00:05:40.366 { 00:05:40.366 "method": "sock_impl_set_options", 00:05:40.366 "params": { 00:05:40.366 "impl_name": "posix", 00:05:40.366 "recv_buf_size": 2097152, 00:05:40.366 "send_buf_size": 2097152, 00:05:40.366 "enable_recv_pipe": true, 00:05:40.366 "enable_quickack": false, 00:05:40.366 "enable_placement_id": 0, 00:05:40.366 "enable_zerocopy_send_server": true, 00:05:40.366 "enable_zerocopy_send_client": false, 00:05:40.366 "zerocopy_threshold": 0, 00:05:40.366 "tls_version": 0, 00:05:40.366 "enable_ktls": false 00:05:40.366 } 00:05:40.366 } 00:05:40.366 ] 00:05:40.366 }, 00:05:40.366 { 00:05:40.366 "subsystem": "vmd", 00:05:40.366 "config": [] 00:05:40.366 }, 00:05:40.366 { 00:05:40.366 "subsystem": "accel", 00:05:40.366 "config": [ 00:05:40.366 { 00:05:40.366 "method": "accel_set_options", 00:05:40.366 "params": { 00:05:40.366 "small_cache_size": 128, 00:05:40.366 "large_cache_size": 16, 00:05:40.366 "task_count": 2048, 00:05:40.366 "sequence_count": 2048, 00:05:40.366 "buf_count": 2048 00:05:40.366 } 00:05:40.366 } 00:05:40.366 ] 00:05:40.366 }, 00:05:40.366 { 00:05:40.366 "subsystem": "bdev", 00:05:40.366 "config": [ 00:05:40.366 { 00:05:40.366 "method": "bdev_set_options", 00:05:40.366 "params": { 00:05:40.366 "bdev_io_pool_size": 65535, 00:05:40.366 "bdev_io_cache_size": 256, 00:05:40.366 "bdev_auto_examine": true, 00:05:40.366 "iobuf_small_cache_size": 128, 00:05:40.366 "iobuf_large_cache_size": 16 00:05:40.366 } 00:05:40.366 }, 00:05:40.366 { 00:05:40.366 "method": "bdev_raid_set_options", 00:05:40.366 "params": { 00:05:40.366 "process_window_size_kb": 1024, 00:05:40.366 "process_max_bandwidth_mb_sec": 0 00:05:40.366 } 00:05:40.366 }, 00:05:40.366 { 00:05:40.366 "method": "bdev_iscsi_set_options", 00:05:40.366 "params": { 00:05:40.366 "timeout_sec": 30 00:05:40.366 } 00:05:40.366 }, 00:05:40.366 { 00:05:40.366 "method": "bdev_nvme_set_options", 00:05:40.366 "params": { 00:05:40.366 "action_on_timeout": "none", 00:05:40.366 "timeout_us": 0, 00:05:40.366 "timeout_admin_us": 0, 00:05:40.366 "keep_alive_timeout_ms": 10000, 00:05:40.366 "arbitration_burst": 0, 00:05:40.366 "low_priority_weight": 0, 00:05:40.366 "medium_priority_weight": 0, 00:05:40.366 "high_priority_weight": 0, 00:05:40.366 "nvme_adminq_poll_period_us": 10000, 00:05:40.366 "nvme_ioq_poll_period_us": 0, 00:05:40.366 "io_queue_requests": 0, 00:05:40.366 "delay_cmd_submit": true, 00:05:40.366 "transport_retry_count": 4, 00:05:40.366 "bdev_retry_count": 3, 00:05:40.366 "transport_ack_timeout": 0, 00:05:40.366 "ctrlr_loss_timeout_sec": 0, 00:05:40.366 "reconnect_delay_sec": 0, 00:05:40.366 "fast_io_fail_timeout_sec": 0, 00:05:40.366 "disable_auto_failback": false, 00:05:40.366 "generate_uuids": 
false, 00:05:40.366 "transport_tos": 0, 00:05:40.366 "nvme_error_stat": false, 00:05:40.366 "rdma_srq_size": 0, 00:05:40.366 "io_path_stat": false, 00:05:40.366 "allow_accel_sequence": false, 00:05:40.366 "rdma_max_cq_size": 0, 00:05:40.366 "rdma_cm_event_timeout_ms": 0, 00:05:40.366 "dhchap_digests": [ 00:05:40.366 "sha256", 00:05:40.366 "sha384", 00:05:40.366 "sha512" 00:05:40.366 ], 00:05:40.366 "dhchap_dhgroups": [ 00:05:40.366 "null", 00:05:40.366 "ffdhe2048", 00:05:40.366 "ffdhe3072", 00:05:40.366 "ffdhe4096", 00:05:40.366 "ffdhe6144", 00:05:40.366 "ffdhe8192" 00:05:40.366 ] 00:05:40.366 } 00:05:40.366 }, 00:05:40.366 { 00:05:40.366 "method": "bdev_nvme_set_hotplug", 00:05:40.366 "params": { 00:05:40.366 "period_us": 100000, 00:05:40.366 "enable": false 00:05:40.366 } 00:05:40.366 }, 00:05:40.366 { 00:05:40.366 "method": "bdev_wait_for_examine" 00:05:40.366 } 00:05:40.366 ] 00:05:40.366 }, 00:05:40.366 { 00:05:40.366 "subsystem": "scsi", 00:05:40.366 "config": null 00:05:40.366 }, 00:05:40.366 { 00:05:40.366 "subsystem": "scheduler", 00:05:40.366 "config": [ 00:05:40.366 { 00:05:40.366 "method": "framework_set_scheduler", 00:05:40.366 "params": { 00:05:40.366 "name": "static" 00:05:40.366 } 00:05:40.366 } 00:05:40.366 ] 00:05:40.366 }, 00:05:40.366 { 00:05:40.366 "subsystem": "vhost_scsi", 00:05:40.366 "config": [] 00:05:40.366 }, 00:05:40.366 { 00:05:40.366 "subsystem": "vhost_blk", 00:05:40.366 "config": [] 00:05:40.366 }, 00:05:40.366 { 00:05:40.366 "subsystem": "ublk", 00:05:40.366 "config": [] 00:05:40.366 }, 00:05:40.366 { 00:05:40.366 "subsystem": "nbd", 00:05:40.366 "config": [] 00:05:40.366 }, 00:05:40.366 { 00:05:40.366 "subsystem": "nvmf", 00:05:40.366 "config": [ 00:05:40.366 { 00:05:40.366 "method": "nvmf_set_config", 00:05:40.366 "params": { 00:05:40.366 "discovery_filter": "match_any", 00:05:40.366 "admin_cmd_passthru": { 00:05:40.366 "identify_ctrlr": false 00:05:40.366 } 00:05:40.366 } 00:05:40.366 }, 00:05:40.366 { 00:05:40.366 "method": "nvmf_set_max_subsystems", 00:05:40.366 "params": { 00:05:40.366 "max_subsystems": 1024 00:05:40.366 } 00:05:40.366 }, 00:05:40.366 { 00:05:40.366 "method": "nvmf_set_crdt", 00:05:40.366 "params": { 00:05:40.366 "crdt1": 0, 00:05:40.366 "crdt2": 0, 00:05:40.366 "crdt3": 0 00:05:40.366 } 00:05:40.366 }, 00:05:40.366 { 00:05:40.366 "method": "nvmf_create_transport", 00:05:40.366 "params": { 00:05:40.366 "trtype": "TCP", 00:05:40.366 "max_queue_depth": 128, 00:05:40.366 "max_io_qpairs_per_ctrlr": 127, 00:05:40.366 "in_capsule_data_size": 4096, 00:05:40.366 "max_io_size": 131072, 00:05:40.366 "io_unit_size": 131072, 00:05:40.366 "max_aq_depth": 128, 00:05:40.366 "num_shared_buffers": 511, 00:05:40.366 "buf_cache_size": 4294967295, 00:05:40.366 "dif_insert_or_strip": false, 00:05:40.366 "zcopy": false, 00:05:40.366 "c2h_success": true, 00:05:40.366 "sock_priority": 0, 00:05:40.366 "abort_timeout_sec": 1, 00:05:40.366 "ack_timeout": 0, 00:05:40.366 "data_wr_pool_size": 0 00:05:40.366 } 00:05:40.366 } 00:05:40.366 ] 00:05:40.366 }, 00:05:40.366 { 00:05:40.366 "subsystem": "iscsi", 00:05:40.366 "config": [ 00:05:40.366 { 00:05:40.366 "method": "iscsi_set_options", 00:05:40.366 "params": { 00:05:40.367 "node_base": "iqn.2016-06.io.spdk", 00:05:40.367 "max_sessions": 128, 00:05:40.367 "max_connections_per_session": 2, 00:05:40.367 "max_queue_depth": 64, 00:05:40.367 "default_time2wait": 2, 00:05:40.367 "default_time2retain": 20, 00:05:40.367 "first_burst_length": 8192, 00:05:40.367 "immediate_data": true, 00:05:40.367 "allow_duplicated_isid": 
false, 00:05:40.367 "error_recovery_level": 0, 00:05:40.367 "nop_timeout": 60, 00:05:40.367 "nop_in_interval": 30, 00:05:40.367 "disable_chap": false, 00:05:40.367 "require_chap": false, 00:05:40.367 "mutual_chap": false, 00:05:40.367 "chap_group": 0, 00:05:40.367 "max_large_datain_per_connection": 64, 00:05:40.367 "max_r2t_per_connection": 4, 00:05:40.367 "pdu_pool_size": 36864, 00:05:40.367 "immediate_data_pool_size": 16384, 00:05:40.367 "data_out_pool_size": 2048 00:05:40.367 } 00:05:40.367 } 00:05:40.367 ] 00:05:40.367 } 00:05:40.367 ] 00:05:40.367 } 00:05:40.367 09:54:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:40.367 09:54:25 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 311891 00:05:40.367 09:54:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 311891 ']' 00:05:40.367 09:54:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 311891 00:05:40.367 09:54:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:05:40.367 09:54:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:40.367 09:54:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 311891 00:05:40.624 09:54:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:40.624 09:54:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:40.624 09:54:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 311891' 00:05:40.624 killing process with pid 311891 00:05:40.625 09:54:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 311891 00:05:40.625 09:54:25 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 311891 00:05:40.883 09:54:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=312040 00:05:40.883 09:54:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:40.883 09:54:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:46.144 09:54:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 312040 00:05:46.144 09:54:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 312040 ']' 00:05:46.144 09:54:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 312040 00:05:46.144 09:54:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:05:46.144 09:54:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:46.144 09:54:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 312040 00:05:46.144 09:54:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:46.144 09:54:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:46.144 09:54:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 312040' 00:05:46.144 killing process with pid 312040 00:05:46.144 09:54:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 312040 00:05:46.144 09:54:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 312040 
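What skip_rpc_with_json is asserting: state created over RPC at runtime (here the nvmf TCP transport) survives a save_config snapshot and a cold restart from it. An illustrative round trip — config.json and the 'TCP Transport Init' marker are the test's own, the surrounding commands are a sketch:

  ./scripts/rpc.py nvmf_create_transport -t tcp       # mutate live state
  ./scripts/rpc.py save_config > config.json          # snapshot the whole target
  # restart purely from the snapshot, RPC server disabled
  ./build/bin/spdk_tgt --no-rpc-server --json config.json > log.txt 2>&1 &
  pid=$!
  sleep 5
  kill "$pid"; wait "$pid" || true
  grep -q 'TCP Transport Init' log.txt                # the transport came back on its own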
00:05:46.401 09:54:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:46.401 09:54:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:46.401 00:05:46.401 real 0m6.790s 00:05:46.401 user 0m6.396s 00:05:46.401 sys 0m0.783s 00:05:46.401 09:54:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:46.401 09:54:31 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:46.401 ************************************ 00:05:46.401 END TEST skip_rpc_with_json 00:05:46.401 ************************************ 00:05:46.659 09:54:31 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:46.659 09:54:31 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:46.659 09:54:31 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:46.659 09:54:31 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:46.659 ************************************ 00:05:46.659 START TEST skip_rpc_with_delay 00:05:46.659 ************************************ 00:05:46.659 09:54:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_delay 00:05:46.659 09:54:31 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:46.659 09:54:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:05:46.659 09:54:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:46.659 09:54:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:46.659 09:54:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:46.659 09:54:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:46.659 09:54:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:46.659 09:54:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:46.659 09:54:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:46.659 09:54:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:46.659 09:54:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:46.659 09:54:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:46.659 [2024-07-25 09:54:31.724969] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
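skip_rpc_with_delay confirms this flag combination is rejected: --wait-for-rpc is meaningless when no RPC server will ever deliver the go signal. For contrast, the supported flow looks roughly like this — framework_start_init is the real RPC, the surrounding sequence is illustrative:

  ./build/bin/spdk_tgt --wait-for-rpc &
  # ... issue pre-initialization RPCs here (scheduler, sock, accel options) ...
  ./scripts/rpc.py framework_start_init     # only now do the subsystems initialize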
00:05:46.659 [2024-07-25 09:54:31.725203] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:05:46.659 09:54:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:05:46.659 09:54:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:46.659 09:54:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:46.659 09:54:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:46.659 00:05:46.659 real 0m0.152s 00:05:46.659 user 0m0.098s 00:05:46.659 sys 0m0.052s 00:05:46.659 09:54:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:46.659 09:54:31 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:46.659 ************************************ 00:05:46.659 END TEST skip_rpc_with_delay 00:05:46.659 ************************************ 00:05:46.659 09:54:31 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:46.659 09:54:31 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:46.659 09:54:31 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:46.659 09:54:31 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:46.659 09:54:31 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:46.659 09:54:31 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:46.917 ************************************ 00:05:46.917 START TEST exit_on_failed_rpc_init 00:05:46.917 ************************************ 00:05:46.917 09:54:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # test_exit_on_failed_rpc_init 00:05:46.917 09:54:31 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=312747 00:05:46.917 09:54:31 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:46.917 09:54:31 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 312747 00:05:46.917 09:54:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # '[' -z 312747 ']' 00:05:46.917 09:54:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:46.917 09:54:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:46.917 09:54:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:46.917 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:46.917 09:54:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:46.917 09:54:31 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:46.917 [2024-07-25 09:54:31.897051] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:05:46.917 [2024-07-25 09:54:31.897158] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid312747 ] 00:05:46.917 EAL: No free 2048 kB hugepages reported on node 1 00:05:46.917 [2024-07-25 09:54:31.971697] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:47.175 [2024-07-25 09:54:32.098743] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.434 09:54:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:47.434 09:54:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # return 0 00:05:47.434 09:54:32 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:47.434 09:54:32 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:47.434 09:54:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:05:47.434 09:54:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:47.434 09:54:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:47.434 09:54:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:47.434 09:54:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:47.434 09:54:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:47.434 09:54:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:47.434 09:54:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:47.434 09:54:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:47.434 09:54:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:47.434 09:54:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:47.434 [2024-07-25 09:54:32.473654] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:05:47.434 [2024-07-25 09:54:32.473769] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid312764 ] 00:05:47.434 EAL: No free 2048 kB hugepages reported on node 1 00:05:47.434 [2024-07-25 09:54:32.557330] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:47.692 [2024-07-25 09:54:32.682133] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:47.692 [2024-07-25 09:54:32.682274] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:05:47.692 [2024-07-25 09:54:32.682296] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:47.692 [2024-07-25 09:54:32.682311] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:47.692 09:54:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:05:47.692 09:54:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:47.692 09:54:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:05:47.692 09:54:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:05:47.692 09:54:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:05:47.692 09:54:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:47.692 09:54:32 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:47.692 09:54:32 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 312747 00:05:47.692 09:54:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # '[' -z 312747 ']' 00:05:47.692 09:54:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # kill -0 312747 00:05:47.692 09:54:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # uname 00:05:47.692 09:54:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:47.693 09:54:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 312747 00:05:47.950 09:54:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:47.950 09:54:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:47.950 09:54:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 312747' 00:05:47.950 killing process with pid 312747 00:05:47.950 09:54:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@969 -- # kill 312747 00:05:47.950 09:54:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@974 -- # wait 312747 00:05:48.208 00:05:48.208 real 0m1.529s 00:05:48.208 user 0m1.949s 00:05:48.208 sys 0m0.556s 00:05:48.208 09:54:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:48.208 09:54:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:48.208 ************************************ 00:05:48.208 END TEST exit_on_failed_rpc_init 00:05:48.208 ************************************ 00:05:48.467 09:54:33 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 
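The exit_on_failed_rpc_init trace above reduces to a duplicate-socket check: with one target already listening on /var/tmp/spdk.sock, a second spdk_tgt is launched against the same socket, must fail RPC initialization ("socket in use"), and must exit non-zero before the first instance is torn down. A minimal standalone sketch of that pattern, assuming ./spdk_tgt is on the path and omitting the harness's timeout and trap handling:

    # start a primary target that claims the default RPC socket
    ./spdk_tgt -m 0x1 &
    primary=$!
    while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.1; done   # naive readiness poll

    # a second instance on the same socket must fail to initialize
    if ./spdk_tgt -m 0x2; then
        echo "ERROR: second instance unexpectedly started" >&2
        kill "$primary"; exit 1
    fi

    kill "$primary"
    wait "$primary" || true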
00:05:48.467 00:05:48.467 real 0m14.318s 00:05:48.467 user 0m13.767s 00:05:48.467 sys 0m1.949s 00:05:48.467 09:54:33 skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:48.467 09:54:33 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:48.467 ************************************ 00:05:48.467 END TEST skip_rpc 00:05:48.467 ************************************ 00:05:48.467 09:54:33 -- spdk/autotest.sh@171 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:48.467 09:54:33 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:48.467 09:54:33 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:48.467 09:54:33 -- common/autotest_common.sh@10 -- # set +x 00:05:48.467 ************************************ 00:05:48.467 START TEST rpc_client 00:05:48.467 ************************************ 00:05:48.467 09:54:33 rpc_client -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:48.467 * Looking for test storage... 00:05:48.467 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:05:48.467 09:54:33 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:05:48.467 OK 00:05:48.467 09:54:33 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:48.467 00:05:48.467 real 0m0.079s 00:05:48.467 user 0m0.029s 00:05:48.467 sys 0m0.056s 00:05:48.467 09:54:33 rpc_client -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:48.467 09:54:33 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:48.467 ************************************ 00:05:48.467 END TEST rpc_client 00:05:48.467 ************************************ 00:05:48.467 09:54:33 -- spdk/autotest.sh@172 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:48.467 09:54:33 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:48.467 09:54:33 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:48.467 09:54:33 -- common/autotest_common.sh@10 -- # set +x 00:05:48.467 ************************************ 00:05:48.467 START TEST json_config 00:05:48.467 ************************************ 00:05:48.467 09:54:33 json_config -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:48.467 09:54:33 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:48.467 09:54:33 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:48.467 09:54:33 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:48.467 09:54:33 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:48.467 09:54:33 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:48.467 09:54:33 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:48.467 09:54:33 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:48.467 09:54:33 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:48.467 09:54:33 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:48.467 09:54:33 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:48.467 09:54:33 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:48.467 09:54:33 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 
00:05:48.467 09:54:33 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:05:48.725 09:54:33 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:05:48.725 09:54:33 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:48.725 09:54:33 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:48.725 09:54:33 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:48.725 09:54:33 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:48.725 09:54:33 json_config -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:48.725 09:54:33 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:48.725 09:54:33 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:48.725 09:54:33 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:48.725 09:54:33 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:48.725 09:54:33 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:48.725 09:54:33 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:48.725 09:54:33 json_config -- paths/export.sh@5 -- # export PATH 00:05:48.725 09:54:33 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:48.725 09:54:33 json_config -- nvmf/common.sh@47 -- # : 0 00:05:48.725 09:54:33 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:48.725 09:54:33 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:48.725 09:54:33 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:48.725 09:54:33 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:48.725 09:54:33 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:48.725 09:54:33 json_config -- nvmf/common.sh@33 -- # '[' -n '' 
']' 00:05:48.725 09:54:33 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:48.725 09:54:33 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:48.725 09:54:33 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:48.725 09:54:33 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:48.725 09:54:33 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:48.725 09:54:33 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:48.725 09:54:33 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:48.725 09:54:33 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:05:48.725 09:54:33 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:05:48.725 09:54:33 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:48.725 09:54:33 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:05:48.725 09:54:33 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:48.725 09:54:33 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:05:48.725 09:54:33 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:05:48.725 09:54:33 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:05:48.725 09:54:33 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:05:48.725 09:54:33 json_config -- json_config/json_config.sh@359 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:48.725 09:54:33 json_config -- json_config/json_config.sh@360 -- # echo 'INFO: JSON configuration test init' 00:05:48.725 INFO: JSON configuration test init 00:05:48.725 09:54:33 json_config -- json_config/json_config.sh@361 -- # json_config_test_init 00:05:48.725 09:54:33 json_config -- json_config/json_config.sh@266 -- # timing_enter json_config_test_init 00:05:48.725 09:54:33 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:48.725 09:54:33 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:48.725 09:54:33 json_config -- json_config/json_config.sh@267 -- # timing_enter json_config_setup_target 00:05:48.725 09:54:33 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:48.725 09:54:33 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:48.725 09:54:33 json_config -- json_config/json_config.sh@269 -- # json_config_test_start_app target --wait-for-rpc 00:05:48.725 09:54:33 json_config -- json_config/common.sh@9 -- # local app=target 00:05:48.725 09:54:33 json_config -- json_config/common.sh@10 -- # shift 00:05:48.725 09:54:33 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:48.725 09:54:33 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:48.725 09:54:33 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:48.725 09:54:33 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:48.725 09:54:33 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:48.725 
09:54:33 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=313034 00:05:48.725 09:54:33 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:48.725 09:54:33 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:48.725 Waiting for target to run... 00:05:48.725 09:54:33 json_config -- json_config/common.sh@25 -- # waitforlisten 313034 /var/tmp/spdk_tgt.sock 00:05:48.725 09:54:33 json_config -- common/autotest_common.sh@831 -- # '[' -z 313034 ']' 00:05:48.725 09:54:33 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:48.725 09:54:33 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:48.725 09:54:33 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:48.725 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:48.725 09:54:33 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:48.725 09:54:33 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:48.725 [2024-07-25 09:54:33.716787] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:05:48.725 [2024-07-25 09:54:33.716902] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid313034 ] 00:05:48.725 EAL: No free 2048 kB hugepages reported on node 1 00:05:49.292 [2024-07-25 09:54:34.349267] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:49.292 [2024-07-25 09:54:34.457312] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.857 09:54:34 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:49.857 09:54:34 json_config -- common/autotest_common.sh@864 -- # return 0 00:05:49.857 09:54:34 json_config -- json_config/common.sh@26 -- # echo '' 00:05:49.857 00:05:49.857 09:54:34 json_config -- json_config/json_config.sh@273 -- # create_accel_config 00:05:49.857 09:54:34 json_config -- json_config/json_config.sh@97 -- # timing_enter create_accel_config 00:05:49.857 09:54:34 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:49.857 09:54:34 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:49.857 09:54:34 json_config -- json_config/json_config.sh@99 -- # [[ 0 -eq 1 ]] 00:05:49.857 09:54:34 json_config -- json_config/json_config.sh@105 -- # timing_exit create_accel_config 00:05:49.857 09:54:34 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:49.857 09:54:34 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:49.857 09:54:34 json_config -- json_config/json_config.sh@277 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:49.857 09:54:34 json_config -- json_config/json_config.sh@278 -- # tgt_rpc load_config 00:05:49.857 09:54:34 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:05:53.192 09:54:38 json_config -- json_config/json_config.sh@280 -- # tgt_check_notification_types 00:05:53.192 09:54:38 json_config -- json_config/json_config.sh@43 -- # timing_enter 
tgt_check_notification_types 00:05:53.192 09:54:38 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:53.192 09:54:38 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:53.192 09:54:38 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:05:53.192 09:54:38 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:53.192 09:54:38 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:05:53.192 09:54:38 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:05:53.192 09:54:38 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:53.192 09:54:38 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:05:53.450 09:54:38 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:05:53.450 09:54:38 json_config -- json_config/json_config.sh@48 -- # local get_types 00:05:53.450 09:54:38 json_config -- json_config/json_config.sh@50 -- # local type_diff 00:05:53.450 09:54:38 json_config -- json_config/json_config.sh@51 -- # echo bdev_register bdev_unregister bdev_register bdev_unregister 00:05:53.450 09:54:38 json_config -- json_config/json_config.sh@51 -- # tr ' ' '\n' 00:05:53.450 09:54:38 json_config -- json_config/json_config.sh@51 -- # sort 00:05:53.450 09:54:38 json_config -- json_config/json_config.sh@51 -- # uniq -u 00:05:53.450 09:54:38 json_config -- json_config/json_config.sh@51 -- # type_diff= 00:05:53.450 09:54:38 json_config -- json_config/json_config.sh@53 -- # [[ -n '' ]] 00:05:53.450 09:54:38 json_config -- json_config/json_config.sh@58 -- # timing_exit tgt_check_notification_types 00:05:53.450 09:54:38 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:53.450 09:54:38 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:53.450 09:54:38 json_config -- json_config/json_config.sh@59 -- # return 0 00:05:53.450 09:54:38 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:05:53.450 09:54:38 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:05:53.450 09:54:38 json_config -- json_config/json_config.sh@290 -- # [[ 0 -eq 1 ]] 00:05:53.450 09:54:38 json_config -- json_config/json_config.sh@294 -- # [[ 1 -eq 1 ]] 00:05:53.450 09:54:38 json_config -- json_config/json_config.sh@295 -- # create_nvmf_subsystem_config 00:05:53.450 09:54:38 json_config -- json_config/json_config.sh@234 -- # timing_enter create_nvmf_subsystem_config 00:05:53.450 09:54:38 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:53.450 09:54:38 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:53.450 09:54:38 json_config -- json_config/json_config.sh@236 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:53.450 09:54:38 json_config -- json_config/json_config.sh@237 -- # [[ tcp == \r\d\m\a ]] 00:05:53.450 09:54:38 json_config -- json_config/json_config.sh@241 -- # [[ -z 127.0.0.1 ]] 00:05:53.450 09:54:38 json_config -- json_config/json_config.sh@246 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:53.450 09:54:38 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:54.381 MallocForNvmf0 00:05:54.381 09:54:39 json_config -- json_config/json_config.sh@247 -- # tgt_rpc bdev_malloc_create 4 1024 --name 
MallocForNvmf1 00:05:54.381 09:54:39 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:54.639 MallocForNvmf1 00:05:54.639 09:54:39 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:05:54.639 09:54:39 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:05:55.203 [2024-07-25 09:54:40.073076] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:55.203 09:54:40 json_config -- json_config/json_config.sh@250 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:55.203 09:54:40 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:55.768 09:54:40 json_config -- json_config/json_config.sh@251 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:55.768 09:54:40 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:56.332 09:54:41 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:56.332 09:54:41 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:56.898 09:54:41 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:56.898 09:54:41 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:57.463 [2024-07-25 09:54:42.432348] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:57.463 09:54:42 json_config -- json_config/json_config.sh@255 -- # timing_exit create_nvmf_subsystem_config 00:05:57.463 09:54:42 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:57.463 09:54:42 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:57.463 09:54:42 json_config -- json_config/json_config.sh@297 -- # timing_exit json_config_setup_target 00:05:57.463 09:54:42 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:57.463 09:54:42 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:57.463 09:54:42 json_config -- json_config/json_config.sh@299 -- # [[ 0 -eq 1 ]] 00:05:57.463 09:54:42 json_config -- json_config/json_config.sh@304 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:57.463 09:54:42 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:58.039 MallocBdevForConfigChangeCheck 00:05:58.039 09:54:43 json_config -- json_config/json_config.sh@306 -- # timing_exit json_config_test_init 00:05:58.039 09:54:43 json_config -- 
common/autotest_common.sh@730 -- # xtrace_disable 00:05:58.039 09:54:43 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:58.039 09:54:43 json_config -- json_config/json_config.sh@363 -- # tgt_rpc save_config 00:05:58.039 09:54:43 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:58.604 09:54:43 json_config -- json_config/json_config.sh@365 -- # echo 'INFO: shutting down applications...' 00:05:58.604 INFO: shutting down applications... 00:05:58.604 09:54:43 json_config -- json_config/json_config.sh@366 -- # [[ 0 -eq 1 ]] 00:05:58.604 09:54:43 json_config -- json_config/json_config.sh@372 -- # json_config_clear target 00:05:58.604 09:54:43 json_config -- json_config/json_config.sh@336 -- # [[ -n 22 ]] 00:05:58.604 09:54:43 json_config -- json_config/json_config.sh@337 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:06:00.500 Calling clear_iscsi_subsystem 00:06:00.500 Calling clear_nvmf_subsystem 00:06:00.500 Calling clear_nbd_subsystem 00:06:00.500 Calling clear_ublk_subsystem 00:06:00.500 Calling clear_vhost_blk_subsystem 00:06:00.500 Calling clear_vhost_scsi_subsystem 00:06:00.500 Calling clear_bdev_subsystem 00:06:00.500 09:54:45 json_config -- json_config/json_config.sh@341 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:06:00.500 09:54:45 json_config -- json_config/json_config.sh@347 -- # count=100 00:06:00.500 09:54:45 json_config -- json_config/json_config.sh@348 -- # '[' 100 -gt 0 ']' 00:06:00.500 09:54:45 json_config -- json_config/json_config.sh@349 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:00.500 09:54:45 json_config -- json_config/json_config.sh@349 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:06:00.500 09:54:45 json_config -- json_config/json_config.sh@349 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:06:00.500 09:54:45 json_config -- json_config/json_config.sh@349 -- # break 00:06:00.500 09:54:45 json_config -- json_config/json_config.sh@354 -- # '[' 100 -eq 0 ']' 00:06:00.500 09:54:45 json_config -- json_config/json_config.sh@373 -- # json_config_test_shutdown_app target 00:06:00.500 09:54:45 json_config -- json_config/common.sh@31 -- # local app=target 00:06:00.500 09:54:45 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:00.500 09:54:45 json_config -- json_config/common.sh@35 -- # [[ -n 313034 ]] 00:06:00.500 09:54:45 json_config -- json_config/common.sh@38 -- # kill -SIGINT 313034 00:06:00.500 09:54:45 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:00.500 09:54:45 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:00.500 09:54:45 json_config -- json_config/common.sh@41 -- # kill -0 313034 00:06:00.500 09:54:45 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:06:01.069 09:54:46 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:06:01.069 09:54:46 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:01.069 09:54:46 json_config -- json_config/common.sh@41 -- # kill -0 313034 00:06:01.069 09:54:46 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:01.069 09:54:46 json_config -- json_config/common.sh@43 -- # break 
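The shutdown sequence traced here follows the generic json_config/common.sh pattern: send SIGINT to the target, then poll with kill -0 (up to 30 iterations, 0.5 s apart) until the process is gone. Condensed, assuming $pid holds the target's PID rather than the harness's app_pid map:

    kill -SIGINT "$pid"                 # ask the target to shut down cleanly
    for i in $(seq 1 30); do            # poll for up to ~15 seconds
        if ! kill -0 "$pid" 2>/dev/null; then
            echo "SPDK target shutdown done"
            break
        fi
        sleep 0.5
    done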
00:06:01.069 09:54:46 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:01.069 09:54:46 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:01.069 SPDK target shutdown done 00:06:01.069 09:54:46 json_config -- json_config/json_config.sh@375 -- # echo 'INFO: relaunching applications...' 00:06:01.069 INFO: relaunching applications... 00:06:01.069 09:54:46 json_config -- json_config/json_config.sh@376 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:01.069 09:54:46 json_config -- json_config/common.sh@9 -- # local app=target 00:06:01.069 09:54:46 json_config -- json_config/common.sh@10 -- # shift 00:06:01.069 09:54:46 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:01.069 09:54:46 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:01.069 09:54:46 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:06:01.069 09:54:46 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:01.069 09:54:46 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:01.069 09:54:46 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=314590 00:06:01.069 09:54:46 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:01.069 09:54:46 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:01.069 Waiting for target to run... 00:06:01.069 09:54:46 json_config -- json_config/common.sh@25 -- # waitforlisten 314590 /var/tmp/spdk_tgt.sock 00:06:01.069 09:54:46 json_config -- common/autotest_common.sh@831 -- # '[' -z 314590 ']' 00:06:01.069 09:54:46 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:01.069 09:54:46 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:01.069 09:54:46 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:01.069 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:01.069 09:54:46 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:01.069 09:54:46 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:01.069 [2024-07-25 09:54:46.182476] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:06:01.069 [2024-07-25 09:54:46.182594] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid314590 ] 00:06:01.069 EAL: No free 2048 kB hugepages reported on node 1 00:06:01.637 [2024-07-25 09:54:46.735556] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:01.896 [2024-07-25 09:54:46.830416] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.177 [2024-07-25 09:54:49.875565] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:05.177 [2024-07-25 09:54:49.908006] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:06:05.177 09:54:49 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:05.177 09:54:49 json_config -- common/autotest_common.sh@864 -- # return 0 00:06:05.177 09:54:49 json_config -- json_config/common.sh@26 -- # echo '' 00:06:05.177 00:06:05.177 09:54:49 json_config -- json_config/json_config.sh@377 -- # [[ 0 -eq 1 ]] 00:06:05.177 09:54:49 json_config -- json_config/json_config.sh@381 -- # echo 'INFO: Checking if target configuration is the same...' 00:06:05.177 INFO: Checking if target configuration is the same... 00:06:05.177 09:54:49 json_config -- json_config/json_config.sh@382 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:05.177 09:54:49 json_config -- json_config/json_config.sh@382 -- # tgt_rpc save_config 00:06:05.177 09:54:49 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:05.177 + '[' 2 -ne 2 ']' 00:06:05.177 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:06:05.177 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:06:05.177 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:05.177 +++ basename /dev/fd/62 00:06:05.177 ++ mktemp /tmp/62.XXX 00:06:05.177 + tmp_file_1=/tmp/62.YCz 00:06:05.177 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:05.177 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:05.177 + tmp_file_2=/tmp/spdk_tgt_config.json.V3M 00:06:05.177 + ret=0 00:06:05.177 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:05.435 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:05.435 + diff -u /tmp/62.YCz /tmp/spdk_tgt_config.json.V3M 00:06:05.435 + echo 'INFO: JSON config files are the same' 00:06:05.435 INFO: JSON config files are the same 00:06:05.435 + rm /tmp/62.YCz /tmp/spdk_tgt_config.json.V3M 00:06:05.435 + exit 0 00:06:05.435 09:54:50 json_config -- json_config/json_config.sh@383 -- # [[ 0 -eq 1 ]] 00:06:05.435 09:54:50 json_config -- json_config/json_config.sh@388 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:06:05.435 INFO: changing configuration and checking if this can be detected... 
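The "same configuration" check above works by normalizing both JSON documents before diffing, since key order in a freshly dumped config is not stable. A rough recreation of the idea: jq -S stands in for the harness's config_filter.py -method sort (note jq -S only sorts object keys, whereas the real filter also normalizes arrays), and the paths are illustrative:

    # dump the live configuration from the relaunched target
    ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config > /tmp/live.json

    # sort both sides so the diff only reflects real differences
    jq -S . /tmp/live.json       > /tmp/live.sorted.json
    jq -S . spdk_tgt_config.json > /tmp/ref.sorted.json

    if diff -u /tmp/ref.sorted.json /tmp/live.sorted.json; then
        echo "INFO: JSON config files are the same"
    fi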
00:06:05.435 09:54:50 json_config -- json_config/json_config.sh@390 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:05.435 09:54:50 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:05.693 09:54:50 json_config -- json_config/json_config.sh@391 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:05.693 09:54:50 json_config -- json_config/json_config.sh@391 -- # tgt_rpc save_config 00:06:05.693 09:54:50 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:05.693 + '[' 2 -ne 2 ']' 00:06:05.693 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:06:05.693 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:06:05.693 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:05.693 +++ basename /dev/fd/62 00:06:05.693 ++ mktemp /tmp/62.XXX 00:06:05.693 + tmp_file_1=/tmp/62.dnx 00:06:05.693 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:05.693 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:05.693 + tmp_file_2=/tmp/spdk_tgt_config.json.9Rl 00:06:05.693 + ret=0 00:06:05.693 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:06.258 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:06.258 + diff -u /tmp/62.dnx /tmp/spdk_tgt_config.json.9Rl 00:06:06.258 + ret=1 00:06:06.258 + echo '=== Start of file: /tmp/62.dnx ===' 00:06:06.258 + cat /tmp/62.dnx 00:06:06.258 + echo '=== End of file: /tmp/62.dnx ===' 00:06:06.258 + echo '' 00:06:06.258 + echo '=== Start of file: /tmp/spdk_tgt_config.json.9Rl ===' 00:06:06.258 + cat /tmp/spdk_tgt_config.json.9Rl 00:06:06.258 + echo '=== End of file: /tmp/spdk_tgt_config.json.9Rl ===' 00:06:06.259 + echo '' 00:06:06.259 + rm /tmp/62.dnx /tmp/spdk_tgt_config.json.9Rl 00:06:06.259 + exit 1 00:06:06.259 09:54:51 json_config -- json_config/json_config.sh@395 -- # echo 'INFO: configuration change detected.' 00:06:06.259 INFO: configuration change detected. 
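The second pass inverts the assertion: MallocBdevForConfigChangeCheck (created earlier for exactly this purpose) is deleted over RPC, the configuration is dumped again, and the diff is now required to fail. With the same illustrative paths and jq stand-in as above:

    ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck
    ./scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config > /tmp/live2.json
    if diff -u /tmp/ref.sorted.json <(jq -S . /tmp/live2.json) >/dev/null; then
        echo "ERROR: change was not detected" >&2; exit 1
    fi
    echo "INFO: configuration change detected."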
00:06:06.259 09:54:51 json_config -- json_config/json_config.sh@398 -- # json_config_test_fini 00:06:06.259 09:54:51 json_config -- json_config/json_config.sh@310 -- # timing_enter json_config_test_fini 00:06:06.259 09:54:51 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:06.259 09:54:51 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:06.259 09:54:51 json_config -- json_config/json_config.sh@311 -- # local ret=0 00:06:06.259 09:54:51 json_config -- json_config/json_config.sh@313 -- # [[ -n '' ]] 00:06:06.259 09:54:51 json_config -- json_config/json_config.sh@321 -- # [[ -n 314590 ]] 00:06:06.259 09:54:51 json_config -- json_config/json_config.sh@324 -- # cleanup_bdev_subsystem_config 00:06:06.259 09:54:51 json_config -- json_config/json_config.sh@188 -- # timing_enter cleanup_bdev_subsystem_config 00:06:06.259 09:54:51 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:06.259 09:54:51 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:06.259 09:54:51 json_config -- json_config/json_config.sh@190 -- # [[ 0 -eq 1 ]] 00:06:06.259 09:54:51 json_config -- json_config/json_config.sh@197 -- # uname -s 00:06:06.259 09:54:51 json_config -- json_config/json_config.sh@197 -- # [[ Linux = Linux ]] 00:06:06.259 09:54:51 json_config -- json_config/json_config.sh@198 -- # rm -f /sample_aio 00:06:06.259 09:54:51 json_config -- json_config/json_config.sh@201 -- # [[ 0 -eq 1 ]] 00:06:06.259 09:54:51 json_config -- json_config/json_config.sh@205 -- # timing_exit cleanup_bdev_subsystem_config 00:06:06.259 09:54:51 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:06.259 09:54:51 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:06.259 09:54:51 json_config -- json_config/json_config.sh@327 -- # killprocess 314590 00:06:06.259 09:54:51 json_config -- common/autotest_common.sh@950 -- # '[' -z 314590 ']' 00:06:06.259 09:54:51 json_config -- common/autotest_common.sh@954 -- # kill -0 314590 00:06:06.259 09:54:51 json_config -- common/autotest_common.sh@955 -- # uname 00:06:06.259 09:54:51 json_config -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:06.259 09:54:51 json_config -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 314590 00:06:06.259 09:54:51 json_config -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:06.259 09:54:51 json_config -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:06.259 09:54:51 json_config -- common/autotest_common.sh@968 -- # echo 'killing process with pid 314590' 00:06:06.259 killing process with pid 314590 00:06:06.259 09:54:51 json_config -- common/autotest_common.sh@969 -- # kill 314590 00:06:06.259 09:54:51 json_config -- common/autotest_common.sh@974 -- # wait 314590 00:06:08.157 09:54:53 json_config -- json_config/json_config.sh@330 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:08.158 09:54:53 json_config -- json_config/json_config.sh@331 -- # timing_exit json_config_test_fini 00:06:08.158 09:54:53 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:08.158 09:54:53 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:08.158 09:54:53 json_config -- json_config/json_config.sh@332 -- # return 0 00:06:08.158 09:54:53 json_config -- json_config/json_config.sh@400 -- # echo 'INFO: Success' 00:06:08.158 INFO: Success 00:06:08.158 00:06:08.158 real 0m19.469s 00:06:08.158 user 
0m24.291s 00:06:08.158 sys 0m2.817s 00:06:08.158 09:54:53 json_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:08.158 09:54:53 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:08.158 ************************************ 00:06:08.158 END TEST json_config 00:06:08.158 ************************************ 00:06:08.158 09:54:53 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:06:08.158 09:54:53 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:08.158 09:54:53 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:08.158 09:54:53 -- common/autotest_common.sh@10 -- # set +x 00:06:08.158 ************************************ 00:06:08.158 START TEST json_config_extra_key 00:06:08.158 ************************************ 00:06:08.158 09:54:53 json_config_extra_key -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:06:08.158 09:54:53 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:08.158 09:54:53 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:06:08.158 09:54:53 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:08.158 09:54:53 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:08.158 09:54:53 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:08.158 09:54:53 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:08.158 09:54:53 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:08.158 09:54:53 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:08.158 09:54:53 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:08.158 09:54:53 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:08.158 09:54:53 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:08.158 09:54:53 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:08.158 09:54:53 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:06:08.158 09:54:53 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:06:08.158 09:54:53 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:08.158 09:54:53 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:08.158 09:54:53 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:08.158 09:54:53 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:08.158 09:54:53 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:08.158 09:54:53 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:08.158 09:54:53 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:08.158 09:54:53 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:08.158 09:54:53 json_config_extra_key -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:08.158 09:54:53 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:08.158 09:54:53 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:08.158 09:54:53 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:06:08.158 09:54:53 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:08.158 09:54:53 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:06:08.158 09:54:53 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:08.158 09:54:53 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:08.158 09:54:53 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:08.158 09:54:53 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:08.158 09:54:53 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:08.158 09:54:53 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:08.158 09:54:53 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:08.158 09:54:53 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:08.158 09:54:53 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:06:08.158 09:54:53 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:06:08.158 09:54:53 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:06:08.158 09:54:53 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:06:08.158 09:54:53 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:06:08.158 09:54:53 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:06:08.158 09:54:53 json_config_extra_key -- 
json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:06:08.158 09:54:53 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:06:08.158 09:54:53 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:06:08.158 09:54:53 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:08.158 09:54:53 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:06:08.158 INFO: launching applications... 00:06:08.158 09:54:53 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:06:08.158 09:54:53 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:06:08.158 09:54:53 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:06:08.158 09:54:53 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:08.158 09:54:53 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:08.158 09:54:53 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:06:08.158 09:54:53 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:08.158 09:54:53 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:08.158 09:54:53 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=315505 00:06:08.158 09:54:53 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:06:08.158 09:54:53 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:08.158 Waiting for target to run... 00:06:08.158 09:54:53 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 315505 /var/tmp/spdk_tgt.sock 00:06:08.158 09:54:53 json_config_extra_key -- common/autotest_common.sh@831 -- # '[' -z 315505 ']' 00:06:08.158 09:54:53 json_config_extra_key -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:08.158 09:54:53 json_config_extra_key -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:08.158 09:54:53 json_config_extra_key -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:08.158 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:08.158 09:54:53 json_config_extra_key -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:08.158 09:54:53 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:08.158 [2024-07-25 09:54:53.216832] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:06:08.158 [2024-07-25 09:54:53.216933] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid315505 ] 00:06:08.158 EAL: No free 2048 kB hugepages reported on node 1 00:06:08.729 [2024-07-25 09:54:53.641517] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:08.729 [2024-07-25 09:54:53.735383] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.341 09:54:54 json_config_extra_key -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:09.341 09:54:54 json_config_extra_key -- common/autotest_common.sh@864 -- # return 0 00:06:09.341 09:54:54 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:06:09.341 00:06:09.341 09:54:54 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:06:09.341 INFO: shutting down applications... 00:06:09.341 09:54:54 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:06:09.341 09:54:54 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:06:09.341 09:54:54 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:09.341 09:54:54 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 315505 ]] 00:06:09.341 09:54:54 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 315505 00:06:09.341 09:54:54 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:09.341 09:54:54 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:09.341 09:54:54 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 315505 00:06:09.341 09:54:54 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:09.908 09:54:54 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:09.908 09:54:54 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:09.908 09:54:54 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 315505 00:06:09.908 09:54:54 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:10.167 09:54:55 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:10.167 09:54:55 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:10.167 09:54:55 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 315505 00:06:10.167 09:54:55 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:10.167 09:54:55 json_config_extra_key -- json_config/common.sh@43 -- # break 00:06:10.167 09:54:55 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:10.167 09:54:55 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:10.167 SPDK target shutdown done 00:06:10.167 09:54:55 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:06:10.167 Success 00:06:10.167 00:06:10.167 real 0m2.198s 00:06:10.167 user 0m1.761s 00:06:10.167 sys 0m0.548s 00:06:10.167 09:54:55 json_config_extra_key -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:10.167 09:54:55 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:10.167 ************************************ 00:06:10.167 END TEST json_config_extra_key 00:06:10.167 ************************************ 00:06:10.167 09:54:55 -- spdk/autotest.sh@174 -- # 
run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:10.167 09:54:55 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:10.167 09:54:55 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:10.167 09:54:55 -- common/autotest_common.sh@10 -- # set +x 00:06:10.426 ************************************ 00:06:10.426 START TEST alias_rpc 00:06:10.426 ************************************ 00:06:10.426 09:54:55 alias_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:10.426 * Looking for test storage... 00:06:10.426 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:06:10.426 09:54:55 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:10.426 09:54:55 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=315826 00:06:10.426 09:54:55 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:10.426 09:54:55 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 315826 00:06:10.426 09:54:55 alias_rpc -- common/autotest_common.sh@831 -- # '[' -z 315826 ']' 00:06:10.426 09:54:55 alias_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:10.426 09:54:55 alias_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:10.426 09:54:55 alias_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:10.426 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:10.426 09:54:55 alias_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:10.426 09:54:55 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:10.426 [2024-07-25 09:54:55.526343] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:06:10.426 [2024-07-25 09:54:55.526556] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid315826 ] 00:06:10.426 EAL: No free 2048 kB hugepages reported on node 1 00:06:10.684 [2024-07-25 09:54:55.630579] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:10.684 [2024-07-25 09:54:55.756467] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.942 09:54:56 alias_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:10.942 09:54:56 alias_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:10.942 09:54:56 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:06:11.199 09:54:56 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 315826 00:06:11.199 09:54:56 alias_rpc -- common/autotest_common.sh@950 -- # '[' -z 315826 ']' 00:06:11.199 09:54:56 alias_rpc -- common/autotest_common.sh@954 -- # kill -0 315826 00:06:11.199 09:54:56 alias_rpc -- common/autotest_common.sh@955 -- # uname 00:06:11.199 09:54:56 alias_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:11.199 09:54:56 alias_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 315826 00:06:11.457 09:54:56 alias_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:11.457 09:54:56 alias_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:11.457 09:54:56 alias_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 315826' 00:06:11.457 killing process with pid 315826 00:06:11.457 09:54:56 alias_rpc -- common/autotest_common.sh@969 -- # kill 315826 00:06:11.457 09:54:56 alias_rpc -- common/autotest_common.sh@974 -- # wait 315826 00:06:11.716 00:06:11.716 real 0m1.508s 00:06:11.716 user 0m1.792s 00:06:11.716 sys 0m0.533s 00:06:11.716 09:54:56 alias_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:11.716 09:54:56 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:11.716 ************************************ 00:06:11.716 END TEST alias_rpc 00:06:11.716 ************************************ 00:06:11.975 09:54:56 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:06:11.975 09:54:56 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:11.975 09:54:56 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:11.975 09:54:56 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:11.975 09:54:56 -- common/autotest_common.sh@10 -- # set +x 00:06:11.975 ************************************ 00:06:11.975 START TEST spdkcli_tcp 00:06:11.975 ************************************ 00:06:11.975 09:54:56 spdkcli_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:11.975 * Looking for test storage... 
00:06:11.975 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:06:11.975 09:54:56 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:06:11.975 09:54:56 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:06:11.975 09:54:56 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:06:11.975 09:54:56 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:06:11.975 09:54:56 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:06:11.975 09:54:56 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:06:11.975 09:54:56 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:06:11.975 09:54:56 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:11.975 09:54:56 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:11.975 09:54:56 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=316128 00:06:11.975 09:54:56 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:06:11.975 09:54:57 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 316128 00:06:11.975 09:54:57 spdkcli_tcp -- common/autotest_common.sh@831 -- # '[' -z 316128 ']' 00:06:11.975 09:54:57 spdkcli_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:11.975 09:54:57 spdkcli_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:11.975 09:54:57 spdkcli_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:11.975 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:11.975 09:54:57 spdkcli_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:11.975 09:54:57 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:11.975 [2024-07-25 09:54:57.056318] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
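spdkcli_tcp checks that the RPC server is reachable over TCP rather than only over the UNIX socket; the socat and rpc.py invocations that follow build exactly this bridge. A standalone sketch of the same idea, assuming socat and the stock scripts/rpc.py are available:

    # forward TCP 127.0.0.1:9998 to the target's UNIX-domain RPC socket
    socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
    socat_pid=$!
    # issue an RPC over TCP: -r 100 retries the connection, -t 2 caps it at 2 s
    scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods
    kill $socat_pid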
00:06:11.975 [2024-07-25 09:54:57.056424] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid316128 ] 00:06:11.975 EAL: No free 2048 kB hugepages reported on node 1 00:06:11.975 [2024-07-25 09:54:57.123074] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:12.233 [2024-07-25 09:54:57.248463] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:12.233 [2024-07-25 09:54:57.248466] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.491 09:54:57 spdkcli_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:12.491 09:54:57 spdkcli_tcp -- common/autotest_common.sh@864 -- # return 0 00:06:12.491 09:54:57 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=316148 00:06:12.491 09:54:57 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:06:12.491 09:54:57 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:06:12.748 [ 00:06:12.748 "bdev_malloc_delete", 00:06:12.748 "bdev_malloc_create", 00:06:12.748 "bdev_null_resize", 00:06:12.748 "bdev_null_delete", 00:06:12.748 "bdev_null_create", 00:06:12.748 "bdev_nvme_cuse_unregister", 00:06:12.748 "bdev_nvme_cuse_register", 00:06:12.748 "bdev_opal_new_user", 00:06:12.748 "bdev_opal_set_lock_state", 00:06:12.748 "bdev_opal_delete", 00:06:12.748 "bdev_opal_get_info", 00:06:12.748 "bdev_opal_create", 00:06:12.748 "bdev_nvme_opal_revert", 00:06:12.748 "bdev_nvme_opal_init", 00:06:12.748 "bdev_nvme_send_cmd", 00:06:12.748 "bdev_nvme_get_path_iostat", 00:06:12.748 "bdev_nvme_get_mdns_discovery_info", 00:06:12.748 "bdev_nvme_stop_mdns_discovery", 00:06:12.748 "bdev_nvme_start_mdns_discovery", 00:06:12.748 "bdev_nvme_set_multipath_policy", 00:06:12.748 "bdev_nvme_set_preferred_path", 00:06:12.748 "bdev_nvme_get_io_paths", 00:06:12.748 "bdev_nvme_remove_error_injection", 00:06:12.748 "bdev_nvme_add_error_injection", 00:06:12.748 "bdev_nvme_get_discovery_info", 00:06:12.748 "bdev_nvme_stop_discovery", 00:06:12.748 "bdev_nvme_start_discovery", 00:06:12.748 "bdev_nvme_get_controller_health_info", 00:06:12.748 "bdev_nvme_disable_controller", 00:06:12.748 "bdev_nvme_enable_controller", 00:06:12.748 "bdev_nvme_reset_controller", 00:06:12.748 "bdev_nvme_get_transport_statistics", 00:06:12.748 "bdev_nvme_apply_firmware", 00:06:12.748 "bdev_nvme_detach_controller", 00:06:12.748 "bdev_nvme_get_controllers", 00:06:12.748 "bdev_nvme_attach_controller", 00:06:12.748 "bdev_nvme_set_hotplug", 00:06:12.748 "bdev_nvme_set_options", 00:06:12.748 "bdev_passthru_delete", 00:06:12.748 "bdev_passthru_create", 00:06:12.748 "bdev_lvol_set_parent_bdev", 00:06:12.748 "bdev_lvol_set_parent", 00:06:12.748 "bdev_lvol_check_shallow_copy", 00:06:12.748 "bdev_lvol_start_shallow_copy", 00:06:12.748 "bdev_lvol_grow_lvstore", 00:06:12.748 "bdev_lvol_get_lvols", 00:06:12.748 "bdev_lvol_get_lvstores", 00:06:12.748 "bdev_lvol_delete", 00:06:12.748 "bdev_lvol_set_read_only", 00:06:12.748 "bdev_lvol_resize", 00:06:12.748 "bdev_lvol_decouple_parent", 00:06:12.748 "bdev_lvol_inflate", 00:06:12.748 "bdev_lvol_rename", 00:06:12.748 "bdev_lvol_clone_bdev", 00:06:12.748 "bdev_lvol_clone", 00:06:12.748 "bdev_lvol_snapshot", 00:06:12.748 "bdev_lvol_create", 00:06:12.748 "bdev_lvol_delete_lvstore", 00:06:12.748 
"bdev_lvol_rename_lvstore", 00:06:12.748 "bdev_lvol_create_lvstore", 00:06:12.748 "bdev_raid_set_options", 00:06:12.748 "bdev_raid_remove_base_bdev", 00:06:12.748 "bdev_raid_add_base_bdev", 00:06:12.748 "bdev_raid_delete", 00:06:12.748 "bdev_raid_create", 00:06:12.748 "bdev_raid_get_bdevs", 00:06:12.748 "bdev_error_inject_error", 00:06:12.748 "bdev_error_delete", 00:06:12.748 "bdev_error_create", 00:06:12.748 "bdev_split_delete", 00:06:12.748 "bdev_split_create", 00:06:12.748 "bdev_delay_delete", 00:06:12.748 "bdev_delay_create", 00:06:12.748 "bdev_delay_update_latency", 00:06:12.748 "bdev_zone_block_delete", 00:06:12.748 "bdev_zone_block_create", 00:06:12.748 "blobfs_create", 00:06:12.748 "blobfs_detect", 00:06:12.748 "blobfs_set_cache_size", 00:06:12.748 "bdev_aio_delete", 00:06:12.748 "bdev_aio_rescan", 00:06:12.748 "bdev_aio_create", 00:06:12.748 "bdev_ftl_set_property", 00:06:12.748 "bdev_ftl_get_properties", 00:06:12.748 "bdev_ftl_get_stats", 00:06:12.748 "bdev_ftl_unmap", 00:06:12.748 "bdev_ftl_unload", 00:06:12.748 "bdev_ftl_delete", 00:06:12.748 "bdev_ftl_load", 00:06:12.748 "bdev_ftl_create", 00:06:12.748 "bdev_virtio_attach_controller", 00:06:12.748 "bdev_virtio_scsi_get_devices", 00:06:12.748 "bdev_virtio_detach_controller", 00:06:12.748 "bdev_virtio_blk_set_hotplug", 00:06:12.748 "bdev_iscsi_delete", 00:06:12.748 "bdev_iscsi_create", 00:06:12.748 "bdev_iscsi_set_options", 00:06:12.748 "accel_error_inject_error", 00:06:12.748 "ioat_scan_accel_module", 00:06:12.748 "dsa_scan_accel_module", 00:06:12.748 "iaa_scan_accel_module", 00:06:12.748 "vfu_virtio_create_scsi_endpoint", 00:06:12.748 "vfu_virtio_scsi_remove_target", 00:06:12.748 "vfu_virtio_scsi_add_target", 00:06:12.748 "vfu_virtio_create_blk_endpoint", 00:06:12.748 "vfu_virtio_delete_endpoint", 00:06:12.748 "keyring_file_remove_key", 00:06:12.748 "keyring_file_add_key", 00:06:12.748 "keyring_linux_set_options", 00:06:12.748 "iscsi_get_histogram", 00:06:12.748 "iscsi_enable_histogram", 00:06:12.748 "iscsi_set_options", 00:06:12.748 "iscsi_get_auth_groups", 00:06:12.748 "iscsi_auth_group_remove_secret", 00:06:12.748 "iscsi_auth_group_add_secret", 00:06:12.748 "iscsi_delete_auth_group", 00:06:12.748 "iscsi_create_auth_group", 00:06:12.748 "iscsi_set_discovery_auth", 00:06:12.748 "iscsi_get_options", 00:06:12.748 "iscsi_target_node_request_logout", 00:06:12.748 "iscsi_target_node_set_redirect", 00:06:12.748 "iscsi_target_node_set_auth", 00:06:12.748 "iscsi_target_node_add_lun", 00:06:12.748 "iscsi_get_stats", 00:06:12.748 "iscsi_get_connections", 00:06:12.748 "iscsi_portal_group_set_auth", 00:06:12.748 "iscsi_start_portal_group", 00:06:12.748 "iscsi_delete_portal_group", 00:06:12.748 "iscsi_create_portal_group", 00:06:12.748 "iscsi_get_portal_groups", 00:06:12.748 "iscsi_delete_target_node", 00:06:12.748 "iscsi_target_node_remove_pg_ig_maps", 00:06:12.748 "iscsi_target_node_add_pg_ig_maps", 00:06:12.748 "iscsi_create_target_node", 00:06:12.748 "iscsi_get_target_nodes", 00:06:12.748 "iscsi_delete_initiator_group", 00:06:12.748 "iscsi_initiator_group_remove_initiators", 00:06:12.748 "iscsi_initiator_group_add_initiators", 00:06:12.748 "iscsi_create_initiator_group", 00:06:12.748 "iscsi_get_initiator_groups", 00:06:12.748 "nvmf_set_crdt", 00:06:12.748 "nvmf_set_config", 00:06:12.748 "nvmf_set_max_subsystems", 00:06:12.748 "nvmf_stop_mdns_prr", 00:06:12.748 "nvmf_publish_mdns_prr", 00:06:12.748 "nvmf_subsystem_get_listeners", 00:06:12.748 "nvmf_subsystem_get_qpairs", 00:06:12.748 "nvmf_subsystem_get_controllers", 00:06:12.748 
"nvmf_get_stats", 00:06:12.748 "nvmf_get_transports", 00:06:12.748 "nvmf_create_transport", 00:06:12.748 "nvmf_get_targets", 00:06:12.748 "nvmf_delete_target", 00:06:12.748 "nvmf_create_target", 00:06:12.748 "nvmf_subsystem_allow_any_host", 00:06:12.748 "nvmf_subsystem_remove_host", 00:06:12.748 "nvmf_subsystem_add_host", 00:06:12.748 "nvmf_ns_remove_host", 00:06:12.748 "nvmf_ns_add_host", 00:06:12.748 "nvmf_subsystem_remove_ns", 00:06:12.748 "nvmf_subsystem_add_ns", 00:06:12.748 "nvmf_subsystem_listener_set_ana_state", 00:06:12.749 "nvmf_discovery_get_referrals", 00:06:12.749 "nvmf_discovery_remove_referral", 00:06:12.749 "nvmf_discovery_add_referral", 00:06:12.749 "nvmf_subsystem_remove_listener", 00:06:12.749 "nvmf_subsystem_add_listener", 00:06:12.749 "nvmf_delete_subsystem", 00:06:12.749 "nvmf_create_subsystem", 00:06:12.749 "nvmf_get_subsystems", 00:06:12.749 "env_dpdk_get_mem_stats", 00:06:12.749 "nbd_get_disks", 00:06:12.749 "nbd_stop_disk", 00:06:12.749 "nbd_start_disk", 00:06:12.749 "ublk_recover_disk", 00:06:12.749 "ublk_get_disks", 00:06:12.749 "ublk_stop_disk", 00:06:12.749 "ublk_start_disk", 00:06:12.749 "ublk_destroy_target", 00:06:12.749 "ublk_create_target", 00:06:12.749 "virtio_blk_create_transport", 00:06:12.749 "virtio_blk_get_transports", 00:06:12.749 "vhost_controller_set_coalescing", 00:06:12.749 "vhost_get_controllers", 00:06:12.749 "vhost_delete_controller", 00:06:12.749 "vhost_create_blk_controller", 00:06:12.749 "vhost_scsi_controller_remove_target", 00:06:12.749 "vhost_scsi_controller_add_target", 00:06:12.749 "vhost_start_scsi_controller", 00:06:12.749 "vhost_create_scsi_controller", 00:06:12.749 "thread_set_cpumask", 00:06:12.749 "framework_get_governor", 00:06:12.749 "framework_get_scheduler", 00:06:12.749 "framework_set_scheduler", 00:06:12.749 "framework_get_reactors", 00:06:12.749 "thread_get_io_channels", 00:06:12.749 "thread_get_pollers", 00:06:12.749 "thread_get_stats", 00:06:12.749 "framework_monitor_context_switch", 00:06:12.749 "spdk_kill_instance", 00:06:12.749 "log_enable_timestamps", 00:06:12.749 "log_get_flags", 00:06:12.749 "log_clear_flag", 00:06:12.749 "log_set_flag", 00:06:12.749 "log_get_level", 00:06:12.749 "log_set_level", 00:06:12.749 "log_get_print_level", 00:06:12.749 "log_set_print_level", 00:06:12.749 "framework_enable_cpumask_locks", 00:06:12.749 "framework_disable_cpumask_locks", 00:06:12.749 "framework_wait_init", 00:06:12.749 "framework_start_init", 00:06:12.749 "scsi_get_devices", 00:06:12.749 "bdev_get_histogram", 00:06:12.749 "bdev_enable_histogram", 00:06:12.749 "bdev_set_qos_limit", 00:06:12.749 "bdev_set_qd_sampling_period", 00:06:12.749 "bdev_get_bdevs", 00:06:12.749 "bdev_reset_iostat", 00:06:12.749 "bdev_get_iostat", 00:06:12.749 "bdev_examine", 00:06:12.749 "bdev_wait_for_examine", 00:06:12.749 "bdev_set_options", 00:06:12.749 "notify_get_notifications", 00:06:12.749 "notify_get_types", 00:06:12.749 "accel_get_stats", 00:06:12.749 "accel_set_options", 00:06:12.749 "accel_set_driver", 00:06:12.749 "accel_crypto_key_destroy", 00:06:12.749 "accel_crypto_keys_get", 00:06:12.749 "accel_crypto_key_create", 00:06:12.749 "accel_assign_opc", 00:06:12.749 "accel_get_module_info", 00:06:12.749 "accel_get_opc_assignments", 00:06:12.749 "vmd_rescan", 00:06:12.749 "vmd_remove_device", 00:06:12.749 "vmd_enable", 00:06:12.749 "sock_get_default_impl", 00:06:12.749 "sock_set_default_impl", 00:06:12.749 "sock_impl_set_options", 00:06:12.749 "sock_impl_get_options", 00:06:12.749 "iobuf_get_stats", 00:06:12.749 "iobuf_set_options", 
00:06:12.749 "keyring_get_keys", 00:06:12.749 "framework_get_pci_devices", 00:06:12.749 "framework_get_config", 00:06:12.749 "framework_get_subsystems", 00:06:12.749 "vfu_tgt_set_base_path", 00:06:12.749 "trace_get_info", 00:06:12.749 "trace_get_tpoint_group_mask", 00:06:12.749 "trace_disable_tpoint_group", 00:06:12.749 "trace_enable_tpoint_group", 00:06:12.749 "trace_clear_tpoint_mask", 00:06:12.749 "trace_set_tpoint_mask", 00:06:12.749 "spdk_get_version", 00:06:12.749 "rpc_get_methods" 00:06:12.749 ] 00:06:12.749 09:54:57 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:06:12.749 09:54:57 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:12.749 09:54:57 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:12.749 09:54:57 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:06:12.749 09:54:57 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 316128 00:06:12.749 09:54:57 spdkcli_tcp -- common/autotest_common.sh@950 -- # '[' -z 316128 ']' 00:06:12.749 09:54:57 spdkcli_tcp -- common/autotest_common.sh@954 -- # kill -0 316128 00:06:12.749 09:54:57 spdkcli_tcp -- common/autotest_common.sh@955 -- # uname 00:06:12.749 09:54:57 spdkcli_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:12.749 09:54:57 spdkcli_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 316128 00:06:13.006 09:54:57 spdkcli_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:13.006 09:54:57 spdkcli_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:13.006 09:54:57 spdkcli_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 316128' 00:06:13.006 killing process with pid 316128 00:06:13.006 09:54:57 spdkcli_tcp -- common/autotest_common.sh@969 -- # kill 316128 00:06:13.006 09:54:57 spdkcli_tcp -- common/autotest_common.sh@974 -- # wait 316128 00:06:13.264 00:06:13.264 real 0m1.478s 00:06:13.264 user 0m2.626s 00:06:13.264 sys 0m0.502s 00:06:13.264 09:54:58 spdkcli_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:13.264 09:54:58 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:13.264 ************************************ 00:06:13.264 END TEST spdkcli_tcp 00:06:13.264 ************************************ 00:06:13.264 09:54:58 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:13.264 09:54:58 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:13.264 09:54:58 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:13.264 09:54:58 -- common/autotest_common.sh@10 -- # set +x 00:06:13.522 ************************************ 00:06:13.522 START TEST dpdk_mem_utility 00:06:13.522 ************************************ 00:06:13.522 09:54:58 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:13.522 * Looking for test storage... 
00:06:13.522 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:06:13.522 09:54:58 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:13.522 09:54:58 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=316340 00:06:13.522 09:54:58 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:13.522 09:54:58 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 316340 00:06:13.522 09:54:58 dpdk_mem_utility -- common/autotest_common.sh@831 -- # '[' -z 316340 ']' 00:06:13.522 09:54:58 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:13.522 09:54:58 dpdk_mem_utility -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:13.522 09:54:58 dpdk_mem_utility -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:13.522 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:13.522 09:54:58 dpdk_mem_utility -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:13.522 09:54:58 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:13.522 [2024-07-25 09:54:58.639049] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:06:13.522 [2024-07-25 09:54:58.639233] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid316340 ] 00:06:13.780 EAL: No free 2048 kB hugepages reported on node 1 00:06:13.780 [2024-07-25 09:54:58.745869] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:13.780 [2024-07-25 09:54:58.873289] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.039 09:54:59 dpdk_mem_utility -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:14.039 09:54:59 dpdk_mem_utility -- common/autotest_common.sh@864 -- # return 0 00:06:14.039 09:54:59 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:14.039 09:54:59 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:14.039 09:54:59 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:14.039 09:54:59 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:14.039 { 00:06:14.039 "filename": "/tmp/spdk_mem_dump.txt" 00:06:14.039 } 00:06:14.039 09:54:59 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:14.039 09:54:59 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:14.297 DPDK memory size 814.000000 MiB in 1 heap(s) 00:06:14.297 1 heaps totaling size 814.000000 MiB 00:06:14.297 size: 814.000000 MiB heap id: 0 00:06:14.297 end heaps---------- 00:06:14.297 8 mempools totaling size 598.116089 MiB 00:06:14.297 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:14.297 size: 158.602051 MiB name: PDU_data_out_Pool 00:06:14.297 size: 84.521057 MiB name: bdev_io_316340 00:06:14.297 size: 51.011292 MiB name: evtpool_316340 00:06:14.297 size: 
50.003479 MiB name: msgpool_316340 00:06:14.297 size: 21.763794 MiB name: PDU_Pool 00:06:14.297 size: 19.513306 MiB name: SCSI_TASK_Pool 00:06:14.297 size: 0.026123 MiB name: Session_Pool 00:06:14.297 end mempools------- 00:06:14.297 6 memzones totaling size 4.142822 MiB 00:06:14.297 size: 1.000366 MiB name: RG_ring_0_316340 00:06:14.297 size: 1.000366 MiB name: RG_ring_1_316340 00:06:14.297 size: 1.000366 MiB name: RG_ring_4_316340 00:06:14.297 size: 1.000366 MiB name: RG_ring_5_316340 00:06:14.297 size: 0.125366 MiB name: RG_ring_2_316340 00:06:14.297 size: 0.015991 MiB name: RG_ring_3_316340 00:06:14.297 end memzones------- 00:06:14.297 09:54:59 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:06:14.297 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:06:14.297 list of free elements. size: 12.519348 MiB 00:06:14.297 element at address: 0x200000400000 with size: 1.999512 MiB 00:06:14.297 element at address: 0x200018e00000 with size: 0.999878 MiB 00:06:14.297 element at address: 0x200019000000 with size: 0.999878 MiB 00:06:14.297 element at address: 0x200003e00000 with size: 0.996277 MiB 00:06:14.297 element at address: 0x200031c00000 with size: 0.994446 MiB 00:06:14.297 element at address: 0x200013800000 with size: 0.978699 MiB 00:06:14.297 element at address: 0x200007000000 with size: 0.959839 MiB 00:06:14.297 element at address: 0x200019200000 with size: 0.936584 MiB 00:06:14.297 element at address: 0x200000200000 with size: 0.841614 MiB 00:06:14.297 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:06:14.297 element at address: 0x20000b200000 with size: 0.490723 MiB 00:06:14.297 element at address: 0x200000800000 with size: 0.487793 MiB 00:06:14.297 element at address: 0x200019400000 with size: 0.485657 MiB 00:06:14.297 element at address: 0x200027e00000 with size: 0.410034 MiB 00:06:14.297 element at address: 0x200003a00000 with size: 0.355530 MiB 00:06:14.297 list of standard malloc elements. 
size: 199.218079 MiB 00:06:14.297 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:06:14.297 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:06:14.297 element at address: 0x200018efff80 with size: 1.000122 MiB 00:06:14.297 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:06:14.297 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:06:14.297 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:06:14.297 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:06:14.297 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:06:14.297 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:06:14.297 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:06:14.297 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:06:14.297 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:06:14.297 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:06:14.297 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:06:14.297 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:06:14.297 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:06:14.298 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:06:14.298 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:06:14.298 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:06:14.298 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:06:14.298 element at address: 0x200003adb300 with size: 0.000183 MiB 00:06:14.298 element at address: 0x200003adb500 with size: 0.000183 MiB 00:06:14.298 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:06:14.298 element at address: 0x200003affa80 with size: 0.000183 MiB 00:06:14.298 element at address: 0x200003affb40 with size: 0.000183 MiB 00:06:14.298 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:06:14.298 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:06:14.298 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:06:14.298 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:06:14.298 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:06:14.298 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:06:14.298 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:06:14.298 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:06:14.298 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:06:14.298 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:06:14.298 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:06:14.298 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:06:14.298 element at address: 0x200027e69040 with size: 0.000183 MiB 00:06:14.298 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:06:14.298 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:06:14.298 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:06:14.298 list of memzone associated elements. 
size: 602.262573 MiB 00:06:14.298 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:06:14.298 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:06:14.298 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:06:14.298 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:06:14.298 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:06:14.298 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_316340_0 00:06:14.298 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:06:14.298 associated memzone info: size: 48.002930 MiB name: MP_evtpool_316340_0 00:06:14.298 element at address: 0x200003fff380 with size: 48.003052 MiB 00:06:14.298 associated memzone info: size: 48.002930 MiB name: MP_msgpool_316340_0 00:06:14.298 element at address: 0x2000195be940 with size: 20.255554 MiB 00:06:14.298 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:06:14.298 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:06:14.298 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:06:14.298 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:06:14.298 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_316340 00:06:14.298 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:06:14.298 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_316340 00:06:14.298 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:06:14.298 associated memzone info: size: 1.007996 MiB name: MP_evtpool_316340 00:06:14.298 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:06:14.298 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:06:14.298 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:06:14.298 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:06:14.298 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:06:14.298 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:06:14.298 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:06:14.298 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:06:14.298 element at address: 0x200003eff180 with size: 1.000488 MiB 00:06:14.298 associated memzone info: size: 1.000366 MiB name: RG_ring_0_316340 00:06:14.298 element at address: 0x200003affc00 with size: 1.000488 MiB 00:06:14.298 associated memzone info: size: 1.000366 MiB name: RG_ring_1_316340 00:06:14.298 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:06:14.298 associated memzone info: size: 1.000366 MiB name: RG_ring_4_316340 00:06:14.298 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:06:14.298 associated memzone info: size: 1.000366 MiB name: RG_ring_5_316340 00:06:14.298 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:06:14.298 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_316340 00:06:14.298 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:06:14.298 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:06:14.298 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:06:14.298 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:06:14.298 element at address: 0x20001947c540 with size: 0.250488 MiB 00:06:14.298 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:06:14.298 element at address: 0x200003adf880 with size: 0.125488 MiB 00:06:14.298 associated memzone 
info: size: 0.125366 MiB name: RG_ring_2_316340 00:06:14.298 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:06:14.298 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:06:14.298 element at address: 0x200027e69100 with size: 0.023743 MiB 00:06:14.298 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:06:14.298 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:06:14.298 associated memzone info: size: 0.015991 MiB name: RG_ring_3_316340 00:06:14.298 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:06:14.298 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:06:14.298 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:06:14.298 associated memzone info: size: 0.000183 MiB name: MP_msgpool_316340 00:06:14.298 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:06:14.298 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_316340 00:06:14.298 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:06:14.298 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:06:14.298 09:54:59 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:06:14.298 09:54:59 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 316340 00:06:14.298 09:54:59 dpdk_mem_utility -- common/autotest_common.sh@950 -- # '[' -z 316340 ']' 00:06:14.298 09:54:59 dpdk_mem_utility -- common/autotest_common.sh@954 -- # kill -0 316340 00:06:14.298 09:54:59 dpdk_mem_utility -- common/autotest_common.sh@955 -- # uname 00:06:14.298 09:54:59 dpdk_mem_utility -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:14.298 09:54:59 dpdk_mem_utility -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 316340 00:06:14.298 09:54:59 dpdk_mem_utility -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:14.298 09:54:59 dpdk_mem_utility -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:14.298 09:54:59 dpdk_mem_utility -- common/autotest_common.sh@968 -- # echo 'killing process with pid 316340' 00:06:14.298 killing process with pid 316340 00:06:14.298 09:54:59 dpdk_mem_utility -- common/autotest_common.sh@969 -- # kill 316340 00:06:14.298 09:54:59 dpdk_mem_utility -- common/autotest_common.sh@974 -- # wait 316340 00:06:14.865 00:06:14.865 real 0m1.329s 00:06:14.865 user 0m1.480s 00:06:14.865 sys 0m0.512s 00:06:14.865 09:54:59 dpdk_mem_utility -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:14.865 09:54:59 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:14.865 ************************************ 00:06:14.865 END TEST dpdk_mem_utility 00:06:14.865 ************************************ 00:06:14.865 09:54:59 -- spdk/autotest.sh@181 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:06:14.865 09:54:59 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:14.865 09:54:59 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:14.865 09:54:59 -- common/autotest_common.sh@10 -- # set +x 00:06:14.865 ************************************ 00:06:14.865 START TEST event 00:06:14.865 ************************************ 00:06:14.865 09:54:59 event -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:06:14.865 * Looking for test storage... 
00:06:14.865 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:06:14.865 09:54:59 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:06:14.865 09:54:59 event -- bdev/nbd_common.sh@6 -- # set -e 00:06:14.865 09:54:59 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:14.865 09:54:59 event -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:06:14.865 09:54:59 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:14.865 09:54:59 event -- common/autotest_common.sh@10 -- # set +x 00:06:14.865 ************************************ 00:06:14.865 START TEST event_perf 00:06:14.865 ************************************ 00:06:14.865 09:54:59 event.event_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:14.865 Running I/O for 1 seconds...[2024-07-25 09:54:59.965581] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:06:14.865 [2024-07-25 09:54:59.965657] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid316534 ] 00:06:14.865 EAL: No free 2048 kB hugepages reported on node 1 00:06:15.123 [2024-07-25 09:55:00.057313] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:15.123 [2024-07-25 09:55:00.183024] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:15.123 [2024-07-25 09:55:00.183081] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:15.123 [2024-07-25 09:55:00.183135] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:15.123 [2024-07-25 09:55:00.183138] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.496 Running I/O for 1 seconds... 00:06:16.496 lcore 0: 219947 00:06:16.496 lcore 1: 219946 00:06:16.496 lcore 2: 219945 00:06:16.496 lcore 3: 219945 00:06:16.496 done. 00:06:16.496 00:06:16.496 real 0m1.368s 00:06:16.496 user 0m4.254s 00:06:16.496 sys 0m0.107s 00:06:16.496 09:55:01 event.event_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:16.496 09:55:01 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:06:16.496 ************************************ 00:06:16.496 END TEST event_perf 00:06:16.496 ************************************ 00:06:16.496 09:55:01 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:16.496 09:55:01 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:06:16.496 09:55:01 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:16.496 09:55:01 event -- common/autotest_common.sh@10 -- # set +x 00:06:16.496 ************************************ 00:06:16.496 START TEST event_reactor 00:06:16.496 ************************************ 00:06:16.496 09:55:01 event.event_reactor -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:06:16.496 [2024-07-25 09:55:01.379266] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
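The event_perf run above drove the event framework on all four reactors (core mask 0xF) for one second; each lcore then reports how many events it processed, and the counters differ by at most two events across cores, so dispatch stayed balanced. Rerunning it by hand is a one-liner, assuming the binary was built with the tree:

    # four reactors, one second of event dispatch; prints one counter per lcore
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1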
00:06:16.496 [2024-07-25 09:55:01.379333] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid316805 ] 00:06:16.496 EAL: No free 2048 kB hugepages reported on node 1 00:06:16.496 [2024-07-25 09:55:01.446323] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:16.496 [2024-07-25 09:55:01.570795] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.869 test_start 00:06:17.869 oneshot 00:06:17.869 tick 100 00:06:17.869 tick 100 00:06:17.869 tick 250 00:06:17.869 tick 100 00:06:17.869 tick 100 00:06:17.869 tick 100 00:06:17.869 tick 250 00:06:17.869 tick 500 00:06:17.869 tick 100 00:06:17.869 tick 100 00:06:17.869 tick 250 00:06:17.869 tick 100 00:06:17.869 tick 100 00:06:17.869 test_end 00:06:17.869 00:06:17.869 real 0m1.334s 00:06:17.869 user 0m1.234s 00:06:17.869 sys 0m0.095s 00:06:17.869 09:55:02 event.event_reactor -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:17.869 09:55:02 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:06:17.869 ************************************ 00:06:17.869 END TEST event_reactor 00:06:17.869 ************************************ 00:06:17.869 09:55:02 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:17.869 09:55:02 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:06:17.869 09:55:02 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:17.869 09:55:02 event -- common/autotest_common.sh@10 -- # set +x 00:06:17.869 ************************************ 00:06:17.869 START TEST event_reactor_perf 00:06:17.869 ************************************ 00:06:17.869 09:55:02 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:17.869 [2024-07-25 09:55:02.778579] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:06:17.869 [2024-07-25 09:55:02.778645] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid316965 ] 00:06:17.869 EAL: No free 2048 kB hugepages reported on node 1 00:06:17.869 [2024-07-25 09:55:02.852788] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:17.869 [2024-07-25 09:55:02.978337] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.240 test_start 00:06:19.240 test_end 00:06:19.240 Performance: 355197 events per second 00:06:19.240 00:06:19.240 real 0m1.343s 00:06:19.240 user 0m1.242s 00:06:19.240 sys 0m0.095s 00:06:19.240 09:55:04 event.event_reactor_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:19.240 09:55:04 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:06:19.240 ************************************ 00:06:19.240 END TEST event_reactor_perf 00:06:19.240 ************************************ 00:06:19.240 09:55:04 event -- event/event.sh@49 -- # uname -s 00:06:19.240 09:55:04 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:19.240 09:55:04 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:19.240 09:55:04 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:19.240 09:55:04 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:19.240 09:55:04 event -- common/autotest_common.sh@10 -- # set +x 00:06:19.240 ************************************ 00:06:19.240 START TEST event_scheduler 00:06:19.240 ************************************ 00:06:19.240 09:55:04 event.event_scheduler -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:06:19.240 * Looking for test storage... 00:06:19.240 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:06:19.240 09:55:04 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:19.240 09:55:04 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=317143 00:06:19.240 09:55:04 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:19.240 09:55:04 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:19.240 09:55:04 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 317143 00:06:19.240 09:55:04 event.event_scheduler -- common/autotest_common.sh@831 -- # '[' -z 317143 ']' 00:06:19.240 09:55:04 event.event_scheduler -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:19.240 09:55:04 event.event_scheduler -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:19.240 09:55:04 event.event_scheduler -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:19.240 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
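Every test in this run blocks on waitforlisten until the freshly forked daemon actually answers RPCs, which is why the "Waiting for process..." echo always precedes the EAL banner. A minimal sketch of such a polling loop, assuming spdk_get_version (present in the method list earlier in this log) as the liveness probe:

    # poll the daemon's UNIX socket until it answers an RPC, or give up
    for ((i = 0; i < 100; i++)); do
        scripts/rpc.py -s /var/tmp/spdk.sock -t 1 spdk_get_version &> /dev/null && break
        sleep 0.5
    done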
00:06:19.240 09:55:04 event.event_scheduler -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:19.240 09:55:04 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:19.240 [2024-07-25 09:55:04.315679] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:06:19.240 [2024-07-25 09:55:04.315838] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid317143 ] 00:06:19.240 EAL: No free 2048 kB hugepages reported on node 1 00:06:19.560 [2024-07-25 09:55:04.414869] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:19.560 [2024-07-25 09:55:04.545313] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.560 [2024-07-25 09:55:04.545362] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:19.560 [2024-07-25 09:55:04.545413] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:19.560 [2024-07-25 09:55:04.545417] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:19.818 09:55:04 event.event_scheduler -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:19.818 09:55:04 event.event_scheduler -- common/autotest_common.sh@864 -- # return 0 00:06:19.818 09:55:04 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:19.818 09:55:04 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:19.818 09:55:04 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:19.818 [2024-07-25 09:55:04.718590] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:06:19.818 [2024-07-25 09:55:04.718617] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor 00:06:19.818 [2024-07-25 09:55:04.718635] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:06:19.818 [2024-07-25 09:55:04.718647] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:06:19.818 [2024-07-25 09:55:04.718658] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:06:19.818 09:55:04 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:19.818 09:55:04 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:19.818 09:55:04 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:19.818 09:55:04 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:19.818 [2024-07-25 09:55:04.816247] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
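Because the scheduler app was launched with --wait-for-rpc, initialization pauses until the test has switched it to the dynamic scheduler; the NOTICE lines above show that scheduler falling back from the unavailable DPDK governor and applying its defaults (load limit 20, core limit 80, core busy 95). The equivalent manual sequence, sketched with the stock scripts/rpc.py:

    # init is still paused at this point, so the scheduler can be swapped safely
    scripts/rpc.py framework_set_scheduler dynamic
    # resume subsystem initialization under the new scheduler
    scripts/rpc.py framework_start_init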
00:06:19.818 09:55:04 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:19.818 09:55:04 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:19.818 09:55:04 event.event_scheduler -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:19.818 09:55:04 event.event_scheduler -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:19.818 09:55:04 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:19.818 ************************************ 00:06:19.818 START TEST scheduler_create_thread 00:06:19.818 ************************************ 00:06:19.818 09:55:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # scheduler_create_thread 00:06:19.818 09:55:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:19.818 09:55:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:19.818 09:55:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:19.818 2 00:06:19.818 09:55:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:19.818 09:55:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:19.818 09:55:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:19.818 09:55:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:19.818 3 00:06:19.818 09:55:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:19.818 09:55:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:19.818 09:55:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:19.818 09:55:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:19.818 4 00:06:19.818 09:55:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:19.818 09:55:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:19.818 09:55:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:19.818 09:55:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:19.818 5 00:06:19.818 09:55:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:19.818 09:55:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:19.818 09:55:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:19.818 09:55:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:19.818 6 00:06:19.818 09:55:04 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:19.818 09:55:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:19.818 09:55:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:19.818 09:55:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:19.818 7 00:06:19.819 09:55:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:19.819 09:55:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:19.819 09:55:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:19.819 09:55:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:19.819 8 00:06:19.819 09:55:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:19.819 09:55:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:19.819 09:55:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:19.819 09:55:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:19.819 9 00:06:19.819 09:55:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:19.819 09:55:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:19.819 09:55:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:19.819 09:55:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:19.819 10 00:06:19.819 09:55:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:19.819 09:55:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:06:19.819 09:55:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:19.819 09:55:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:19.819 09:55:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:19.819 09:55:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:19.819 09:55:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:19.819 09:55:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:19.819 09:55:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:19.819 09:55:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:19.819 09:55:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:19.819 09:55:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:19.819 09:55:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:21.714 09:55:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:21.714 09:55:06 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:21.714 09:55:06 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:21.714 09:55:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:21.714 09:55:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:22.647 09:55:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:22.647 00:06:22.647 real 0m2.618s 00:06:22.647 user 0m0.014s 00:06:22.647 sys 0m0.004s 00:06:22.647 09:55:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:22.647 09:55:07 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:22.647 ************************************ 00:06:22.647 END TEST scheduler_create_thread 00:06:22.647 ************************************ 00:06:22.647 09:55:07 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:22.647 09:55:07 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 317143 00:06:22.647 09:55:07 event.event_scheduler -- common/autotest_common.sh@950 -- # '[' -z 317143 ']' 00:06:22.647 09:55:07 event.event_scheduler -- common/autotest_common.sh@954 -- # kill -0 317143 00:06:22.647 09:55:07 event.event_scheduler -- common/autotest_common.sh@955 -- # uname 00:06:22.647 09:55:07 event.event_scheduler -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:22.647 09:55:07 event.event_scheduler -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 317143 00:06:22.647 09:55:07 event.event_scheduler -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:06:22.647 09:55:07 event.event_scheduler -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:06:22.647 09:55:07 event.event_scheduler -- common/autotest_common.sh@968 -- # echo 'killing process with pid 317143' 00:06:22.647 killing process with pid 317143 00:06:22.647 09:55:07 event.event_scheduler -- common/autotest_common.sh@969 -- # kill 317143 00:06:22.647 09:55:07 event.event_scheduler -- common/autotest_common.sh@974 -- # wait 317143 00:06:22.904 [2024-07-25 09:55:07.943066] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
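scheduler_create_thread walked the dynamic scheduler through a full thread lifecycle: active and idle threads pinned to each core mask, an unpinned thread whose busy percentage is changed at runtime, and a thread created only to be deleted. A condensed sketch of that flow; rpc_cmd is the harness wrapper around rpc.py, and the scheduler_plugin methods exist only in this test app, not in stock SPDK:

    # unpinned thread created idle (-a 0), then raised to 50% busy at runtime
    id=$(rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0)
    rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active "$id" 50
    # short-lived thread: created fully busy, then deleted again
    id=$(rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100)
    rpc_cmd --plugin scheduler_plugin scheduler_thread_delete "$id"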
00:06:23.162 00:06:23.162 real 0m4.059s 00:06:23.162 user 0m6.440s 00:06:23.162 sys 0m0.466s 00:06:23.162 09:55:08 event.event_scheduler -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:23.162 09:55:08 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:23.162 ************************************ 00:06:23.163 END TEST event_scheduler 00:06:23.163 ************************************ 00:06:23.163 09:55:08 event -- event/event.sh@51 -- # modprobe -n nbd 00:06:23.163 09:55:08 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:23.163 09:55:08 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:23.163 09:55:08 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:23.163 09:55:08 event -- common/autotest_common.sh@10 -- # set +x 00:06:23.163 ************************************ 00:06:23.163 START TEST app_repeat 00:06:23.163 ************************************ 00:06:23.163 09:55:08 event.app_repeat -- common/autotest_common.sh@1125 -- # app_repeat_test 00:06:23.163 09:55:08 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:23.163 09:55:08 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:23.163 09:55:08 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:06:23.163 09:55:08 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:23.163 09:55:08 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:06:23.163 09:55:08 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:06:23.163 09:55:08 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:06:23.163 09:55:08 event.app_repeat -- event/event.sh@19 -- # repeat_pid=317681 00:06:23.163 09:55:08 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:23.163 09:55:08 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:23.163 09:55:08 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 317681' 00:06:23.163 Process app_repeat pid: 317681 00:06:23.163 09:55:08 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:23.163 09:55:08 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:23.163 spdk_app_start Round 0 00:06:23.163 09:55:08 event.app_repeat -- event/event.sh@25 -- # waitforlisten 317681 /var/tmp/spdk-nbd.sock 00:06:23.163 09:55:08 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 317681 ']' 00:06:23.163 09:55:08 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:23.163 09:55:08 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:23.163 09:55:08 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:23.163 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:23.163 09:55:08 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:23.163 09:55:08 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:23.163 [2024-07-25 09:55:08.295482] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
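app_repeat restarts the same application four times (-t 4) and, in each round, backs two malloc bdevs with kernel NBD devices so real block I/O can be issued against them; the RPC calls and dd reads below show one such round. Setting up a single device by hand would look like this sketch, assuming the nbd kernel module is loaded:

    sock=/var/tmp/spdk-nbd.sock
    # 64 MB malloc bdev with 4 KiB blocks; the RPC prints the new bdev name (Malloc0)
    scripts/rpc.py -s "$sock" bdev_malloc_create 64 4096
    # expose the bdev to the kernel as a block device
    scripts/rpc.py -s "$sock" nbd_start_disk Malloc0 /dev/nbd0
    # prove it serves I/O: read one block back with O_DIRECT
    dd if=/dev/nbd0 of=/tmp/nbdtest bs=4096 count=1 iflag=direct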
00:06:23.163 [2024-07-25 09:55:08.295545] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid317681 ] 00:06:23.163 EAL: No free 2048 kB hugepages reported on node 1 00:06:23.420 [2024-07-25 09:55:08.387650] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:23.420 [2024-07-25 09:55:08.548567] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:23.420 [2024-07-25 09:55:08.548576] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.678 09:55:08 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:23.678 09:55:08 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:06:23.678 09:55:08 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:23.935 Malloc0 00:06:23.935 09:55:09 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:24.502 Malloc1 00:06:24.502 09:55:09 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:24.502 09:55:09 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:24.502 09:55:09 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:24.502 09:55:09 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:24.502 09:55:09 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:24.502 09:55:09 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:24.502 09:55:09 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:24.502 09:55:09 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:24.502 09:55:09 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:24.502 09:55:09 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:24.502 09:55:09 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:24.502 09:55:09 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:24.502 09:55:09 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:24.502 09:55:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:24.502 09:55:09 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:24.502 09:55:09 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:25.102 /dev/nbd0 00:06:25.102 09:55:10 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:25.102 09:55:10 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:25.102 09:55:10 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:06:25.102 09:55:10 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:25.102 09:55:10 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:25.102 09:55:10 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:25.102 09:55:10 event.app_repeat 
-- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:06:25.102 09:55:10 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:25.102 09:55:10 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:25.102 09:55:10 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:25.103 09:55:10 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:25.103 1+0 records in 00:06:25.103 1+0 records out 00:06:25.103 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000265352 s, 15.4 MB/s 00:06:25.103 09:55:10 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:25.103 09:55:10 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:25.103 09:55:10 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:25.103 09:55:10 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:25.103 09:55:10 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:25.103 09:55:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:25.103 09:55:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:25.103 09:55:10 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:25.667 /dev/nbd1 00:06:25.667 09:55:10 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:25.667 09:55:10 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:25.667 09:55:10 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:06:25.667 09:55:10 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:25.667 09:55:10 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:25.667 09:55:10 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:25.667 09:55:10 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:06:25.667 09:55:10 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:25.667 09:55:10 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:25.667 09:55:10 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:25.667 09:55:10 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:25.667 1+0 records in 00:06:25.667 1+0 records out 00:06:25.667 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000196521 s, 20.8 MB/s 00:06:25.667 09:55:10 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:25.667 09:55:10 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:25.667 09:55:10 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:25.667 09:55:10 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:25.667 09:55:10 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:25.667 09:55:10 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:25.667 09:55:10 event.app_repeat -- 
bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:25.667 09:55:10 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:25.667 09:55:10 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:25.667 09:55:10 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:25.925 09:55:11 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:25.925 { 00:06:25.925 "nbd_device": "/dev/nbd0", 00:06:25.925 "bdev_name": "Malloc0" 00:06:25.925 }, 00:06:25.925 { 00:06:25.925 "nbd_device": "/dev/nbd1", 00:06:25.925 "bdev_name": "Malloc1" 00:06:25.925 } 00:06:25.925 ]' 00:06:25.925 09:55:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:25.925 { 00:06:25.925 "nbd_device": "/dev/nbd0", 00:06:25.925 "bdev_name": "Malloc0" 00:06:25.925 }, 00:06:25.925 { 00:06:25.925 "nbd_device": "/dev/nbd1", 00:06:25.925 "bdev_name": "Malloc1" 00:06:25.925 } 00:06:25.925 ]' 00:06:25.925 09:55:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:26.183 09:55:11 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:26.183 /dev/nbd1' 00:06:26.183 09:55:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:26.183 /dev/nbd1' 00:06:26.183 09:55:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:26.183 09:55:11 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:26.183 09:55:11 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:26.183 09:55:11 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:26.183 09:55:11 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:26.183 09:55:11 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:26.183 09:55:11 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:26.183 09:55:11 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:26.183 09:55:11 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:26.183 09:55:11 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:26.183 09:55:11 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:26.183 09:55:11 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:26.183 256+0 records in 00:06:26.183 256+0 records out 00:06:26.183 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00848723 s, 124 MB/s 00:06:26.183 09:55:11 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:26.183 09:55:11 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:26.183 256+0 records in 00:06:26.183 256+0 records out 00:06:26.183 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0261229 s, 40.1 MB/s 00:06:26.183 09:55:11 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:26.183 09:55:11 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:26.183 256+0 records in 00:06:26.183 256+0 records out 00:06:26.183 1048576 bytes (1.0 MB, 1.0 MiB) 
copied, 0.0291039 s, 36.0 MB/s 00:06:26.183 09:55:11 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:26.183 09:55:11 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:26.183 09:55:11 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:26.183 09:55:11 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:26.183 09:55:11 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:26.183 09:55:11 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:26.183 09:55:11 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:26.183 09:55:11 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:26.183 09:55:11 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:26.183 09:55:11 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:26.183 09:55:11 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:26.183 09:55:11 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:26.183 09:55:11 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:26.183 09:55:11 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:26.183 09:55:11 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:26.183 09:55:11 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:26.183 09:55:11 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:26.183 09:55:11 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:26.183 09:55:11 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:26.441 09:55:11 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:26.441 09:55:11 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:26.441 09:55:11 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:26.441 09:55:11 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:26.441 09:55:11 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:26.441 09:55:11 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:26.441 09:55:11 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:26.441 09:55:11 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:26.441 09:55:11 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:26.441 09:55:11 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:27.006 09:55:11 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:27.006 09:55:11 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:27.006 09:55:11 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:27.006 09:55:11 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:27.006 09:55:11 
event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:27.006 09:55:11 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:27.006 09:55:11 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:27.006 09:55:11 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:27.006 09:55:11 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:27.006 09:55:11 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:27.006 09:55:11 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:27.263 09:55:12 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:27.263 09:55:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:27.263 09:55:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:27.263 09:55:12 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:27.263 09:55:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:27.263 09:55:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:27.263 09:55:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:27.263 09:55:12 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:27.263 09:55:12 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:27.263 09:55:12 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:27.263 09:55:12 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:27.263 09:55:12 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:27.263 09:55:12 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:27.829 09:55:12 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:28.087 [2024-07-25 09:55:13.122121] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:28.087 [2024-07-25 09:55:13.243052] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.087 [2024-07-25 09:55:13.243052] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:28.357 [2024-07-25 09:55:13.305888] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:28.357 [2024-07-25 09:55:13.305972] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:30.880 09:55:15 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:30.880 09:55:15 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:30.880 spdk_app_start Round 1 00:06:30.880 09:55:15 event.app_repeat -- event/event.sh@25 -- # waitforlisten 317681 /var/tmp/spdk-nbd.sock 00:06:30.880 09:55:15 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 317681 ']' 00:06:30.880 09:55:15 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:30.880 09:55:15 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:30.880 09:55:15 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:30.880 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
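The round just completed follows the nbd_dd_data_verify pattern: seed a 1 MiB reference file from /dev/urandom, dd it onto each exported nbd device with O_DIRECT, then cmp the device contents back against the file. Condensed from the trace (paths assumed; the harness actually writes all devices first and verifies in a second pass, which is equivalent since the reference file is shared):

  ref=$testdir/nbdrandtest                               # $testdir assumed
  dd if=/dev/urandom of=$ref bs=4096 count=256           # 1 MiB of reference data
  for nbd in /dev/nbd0 /dev/nbd1; do
      dd if=$ref of=$nbd bs=4096 count=256 oflag=direct  # write through, bypassing the page cache
      cmp -b -n 1M $ref $nbd                             # byte-for-byte verification
  done
  rm $ref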
00:06:30.880 09:55:15 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:30.880 09:55:15 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:31.446 09:55:16 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:31.446 09:55:16 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:06:31.446 09:55:16 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:32.011 Malloc0 00:06:32.011 09:55:16 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:32.269 Malloc1 00:06:32.269 09:55:17 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:32.269 09:55:17 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:32.269 09:55:17 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:32.269 09:55:17 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:32.269 09:55:17 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:32.269 09:55:17 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:32.269 09:55:17 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:32.269 09:55:17 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:32.269 09:55:17 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:32.269 09:55:17 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:32.269 09:55:17 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:32.269 09:55:17 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:32.269 09:55:17 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:32.269 09:55:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:32.269 09:55:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:32.269 09:55:17 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:32.527 /dev/nbd0 00:06:32.527 09:55:17 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:32.527 09:55:17 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:32.527 09:55:17 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:06:32.527 09:55:17 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:32.527 09:55:17 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:32.527 09:55:17 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:32.527 09:55:17 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:06:32.527 09:55:17 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:32.527 09:55:17 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:32.527 09:55:17 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:32.527 09:55:17 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:06:32.527 1+0 records in 00:06:32.527 1+0 records out 00:06:32.527 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000159826 s, 25.6 MB/s 00:06:32.527 09:55:17 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:32.527 09:55:17 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:32.527 09:55:17 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:32.527 09:55:17 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:32.527 09:55:17 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:32.527 09:55:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:32.527 09:55:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:32.527 09:55:17 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:32.784 /dev/nbd1 00:06:33.042 09:55:17 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:33.042 09:55:17 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:33.042 09:55:17 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:06:33.042 09:55:17 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:33.042 09:55:17 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:33.042 09:55:17 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:33.042 09:55:17 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:06:33.042 09:55:17 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:33.042 09:55:17 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:33.042 09:55:17 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:33.042 09:55:17 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:33.042 1+0 records in 00:06:33.042 1+0 records out 00:06:33.042 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0001817 s, 22.5 MB/s 00:06:33.042 09:55:17 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:33.042 09:55:17 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:33.042 09:55:17 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:33.042 09:55:17 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:33.042 09:55:17 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:33.042 09:55:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:33.042 09:55:17 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:33.042 09:55:17 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:33.042 09:55:17 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:33.042 09:55:17 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:33.300 09:55:18 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:06:33.300 { 00:06:33.300 "nbd_device": "/dev/nbd0", 00:06:33.300 "bdev_name": "Malloc0" 00:06:33.300 }, 00:06:33.300 { 00:06:33.300 "nbd_device": "/dev/nbd1", 00:06:33.300 "bdev_name": "Malloc1" 00:06:33.300 } 00:06:33.300 ]' 00:06:33.300 09:55:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:33.300 { 00:06:33.300 "nbd_device": "/dev/nbd0", 00:06:33.300 "bdev_name": "Malloc0" 00:06:33.300 }, 00:06:33.300 { 00:06:33.300 "nbd_device": "/dev/nbd1", 00:06:33.300 "bdev_name": "Malloc1" 00:06:33.300 } 00:06:33.300 ]' 00:06:33.300 09:55:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:33.300 09:55:18 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:33.300 /dev/nbd1' 00:06:33.300 09:55:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:33.300 /dev/nbd1' 00:06:33.300 09:55:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:33.300 09:55:18 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:33.300 09:55:18 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:33.300 09:55:18 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:33.300 09:55:18 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:33.300 09:55:18 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:33.300 09:55:18 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:33.300 09:55:18 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:33.300 09:55:18 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:33.300 09:55:18 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:33.300 09:55:18 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:33.300 09:55:18 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:33.300 256+0 records in 00:06:33.300 256+0 records out 00:06:33.300 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00852583 s, 123 MB/s 00:06:33.300 09:55:18 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:33.300 09:55:18 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:33.300 256+0 records in 00:06:33.300 256+0 records out 00:06:33.300 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0242227 s, 43.3 MB/s 00:06:33.301 09:55:18 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:33.301 09:55:18 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:33.301 256+0 records in 00:06:33.301 256+0 records out 00:06:33.301 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0277078 s, 37.8 MB/s 00:06:33.301 09:55:18 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:33.301 09:55:18 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:33.301 09:55:18 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:33.301 09:55:18 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:33.301 09:55:18 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:33.301 09:55:18 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:33.301 09:55:18 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:33.301 09:55:18 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:33.301 09:55:18 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:33.301 09:55:18 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:33.301 09:55:18 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:33.301 09:55:18 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:33.301 09:55:18 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:33.301 09:55:18 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:33.301 09:55:18 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:33.301 09:55:18 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:33.301 09:55:18 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:33.301 09:55:18 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:33.301 09:55:18 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:33.866 09:55:18 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:33.866 09:55:18 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:33.866 09:55:18 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:33.866 09:55:18 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:33.866 09:55:18 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:33.866 09:55:18 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:33.866 09:55:18 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:33.867 09:55:18 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:33.867 09:55:18 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:33.867 09:55:18 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:34.124 09:55:19 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:34.124 09:55:19 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:34.124 09:55:19 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:34.124 09:55:19 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:34.124 09:55:19 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:34.124 09:55:19 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:34.124 09:55:19 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:34.124 09:55:19 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:34.124 09:55:19 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:34.124 09:55:19 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:06:34.124 09:55:19 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:34.382 09:55:19 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:34.382 09:55:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:34.382 09:55:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:34.382 09:55:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:34.382 09:55:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:34.382 09:55:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:34.382 09:55:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:34.382 09:55:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:34.382 09:55:19 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:34.382 09:55:19 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:34.382 09:55:19 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:34.382 09:55:19 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:34.382 09:55:19 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:34.948 09:55:19 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:35.206 [2024-07-25 09:55:20.199619] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:35.206 [2024-07-25 09:55:20.320513] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:35.206 [2024-07-25 09:55:20.320517] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.463 [2024-07-25 09:55:20.383871] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:35.463 [2024-07-25 09:55:20.383940] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:37.999 09:55:22 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:37.999 09:55:22 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:37.999 spdk_app_start Round 2 00:06:37.999 09:55:22 event.app_repeat -- event/event.sh@25 -- # waitforlisten 317681 /var/tmp/spdk-nbd.sock 00:06:37.999 09:55:22 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 317681 ']' 00:06:37.999 09:55:22 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:37.999 09:55:22 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:37.999 09:55:22 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:37.999 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
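The nbd_get_count check that closed the previous round derives its count as sketched below ($rootdir and socket path as assumed earlier). Note the `|| true`: grep -c still prints 0 when nothing matches but exits non-zero, so the guard keeps the captured count while swallowing the failure status — which is why the trace shows a bare `true` after the grep:

  nbd_disks_json=$("$rootdir/scripts/rpc.py" -s /var/tmp/spdk-nbd.sock nbd_get_disks)
  nbd_disks_name=$(echo "$nbd_disks_json" | jq -r '.[] | .nbd_device')
  count=$(echo "$nbd_disks_name" | grep -c /dev/nbd || true)
  [ "$count" -eq 0 ] && echo "all nbd devices stopped"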
00:06:37.999 09:55:22 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:37.999 09:55:22 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:38.256 09:55:23 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:38.256 09:55:23 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:06:38.256 09:55:23 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:38.514 Malloc0 00:06:38.514 09:55:23 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:39.079 Malloc1 00:06:39.079 09:55:24 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:39.079 09:55:24 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:39.079 09:55:24 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:39.079 09:55:24 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:39.079 09:55:24 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:39.079 09:55:24 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:39.079 09:55:24 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:39.079 09:55:24 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:39.079 09:55:24 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:39.079 09:55:24 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:39.079 09:55:24 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:39.079 09:55:24 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:39.079 09:55:24 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:39.079 09:55:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:39.079 09:55:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:39.079 09:55:24 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:39.336 /dev/nbd0 00:06:39.594 09:55:24 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:39.594 09:55:24 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:39.594 09:55:24 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:06:39.594 09:55:24 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:39.594 09:55:24 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:39.594 09:55:24 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:39.594 09:55:24 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:06:39.594 09:55:24 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:39.594 09:55:24 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:39.594 09:55:24 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:39.594 09:55:24 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:06:39.594 1+0 records in 00:06:39.594 1+0 records out 00:06:39.594 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000175729 s, 23.3 MB/s 00:06:39.594 09:55:24 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:39.594 09:55:24 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:39.594 09:55:24 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:39.594 09:55:24 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:39.594 09:55:24 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:39.594 09:55:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:39.594 09:55:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:39.594 09:55:24 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:39.852 /dev/nbd1 00:06:39.852 09:55:24 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:39.852 09:55:24 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:39.852 09:55:24 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:06:39.852 09:55:24 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:06:39.852 09:55:24 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:06:39.852 09:55:24 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:06:39.852 09:55:24 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:06:39.852 09:55:24 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:06:39.852 09:55:24 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:06:39.852 09:55:24 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:06:39.852 09:55:24 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:39.852 1+0 records in 00:06:39.852 1+0 records out 00:06:39.852 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00020257 s, 20.2 MB/s 00:06:39.852 09:55:24 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:39.852 09:55:24 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:06:39.852 09:55:24 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:39.852 09:55:24 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:06:39.852 09:55:24 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:06:39.852 09:55:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:39.852 09:55:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:39.852 09:55:24 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:39.852 09:55:24 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:39.852 09:55:24 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:40.417 09:55:25 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:06:40.417 { 00:06:40.417 "nbd_device": "/dev/nbd0", 00:06:40.417 "bdev_name": "Malloc0" 00:06:40.417 }, 00:06:40.417 { 00:06:40.417 "nbd_device": "/dev/nbd1", 00:06:40.417 "bdev_name": "Malloc1" 00:06:40.417 } 00:06:40.417 ]' 00:06:40.417 09:55:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:40.417 { 00:06:40.417 "nbd_device": "/dev/nbd0", 00:06:40.417 "bdev_name": "Malloc0" 00:06:40.417 }, 00:06:40.417 { 00:06:40.417 "nbd_device": "/dev/nbd1", 00:06:40.417 "bdev_name": "Malloc1" 00:06:40.417 } 00:06:40.417 ]' 00:06:40.417 09:55:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:40.417 09:55:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:40.417 /dev/nbd1' 00:06:40.418 09:55:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:40.418 /dev/nbd1' 00:06:40.418 09:55:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:40.418 09:55:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:40.418 09:55:25 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:40.418 09:55:25 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:40.418 09:55:25 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:40.418 09:55:25 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:40.418 09:55:25 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:40.418 09:55:25 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:40.418 09:55:25 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:40.418 09:55:25 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:40.418 09:55:25 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:40.418 09:55:25 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:40.418 256+0 records in 00:06:40.418 256+0 records out 00:06:40.418 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00813385 s, 129 MB/s 00:06:40.418 09:55:25 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:40.418 09:55:25 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:40.418 256+0 records in 00:06:40.418 256+0 records out 00:06:40.418 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0268862 s, 39.0 MB/s 00:06:40.418 09:55:25 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:40.418 09:55:25 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:40.418 256+0 records in 00:06:40.418 256+0 records out 00:06:40.418 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0279723 s, 37.5 MB/s 00:06:40.418 09:55:25 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:40.418 09:55:25 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:40.418 09:55:25 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:40.418 09:55:25 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:40.418 09:55:25 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:40.418 09:55:25 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:40.418 09:55:25 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:40.418 09:55:25 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:40.418 09:55:25 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:40.418 09:55:25 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:40.418 09:55:25 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:40.418 09:55:25 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:40.418 09:55:25 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:40.418 09:55:25 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:40.418 09:55:25 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:40.418 09:55:25 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:40.418 09:55:25 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:40.418 09:55:25 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:40.418 09:55:25 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:40.992 09:55:25 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:40.992 09:55:25 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:40.992 09:55:25 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:40.992 09:55:25 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:40.992 09:55:25 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:40.992 09:55:25 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:40.992 09:55:25 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:40.992 09:55:25 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:40.992 09:55:25 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:40.992 09:55:25 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:41.610 09:55:26 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:41.610 09:55:26 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:41.610 09:55:26 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:41.610 09:55:26 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:41.610 09:55:26 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:41.610 09:55:26 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:41.610 09:55:26 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:41.610 09:55:26 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:41.610 09:55:26 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:41.610 09:55:26 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:06:41.610 09:55:26 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:41.868 09:55:26 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:41.868 09:55:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:41.868 09:55:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:41.868 09:55:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:41.868 09:55:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:41.868 09:55:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:41.868 09:55:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:41.868 09:55:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:41.868 09:55:26 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:41.868 09:55:26 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:41.868 09:55:26 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:41.868 09:55:26 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:41.868 09:55:26 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:42.126 09:55:27 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:42.692 [2024-07-25 09:55:27.555894] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:42.692 [2024-07-25 09:55:27.677073] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:42.692 [2024-07-25 09:55:27.677078] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.692 [2024-07-25 09:55:27.738920] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:42.692 [2024-07-25 09:55:27.739003] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:45.219 09:55:30 event.app_repeat -- event/event.sh@38 -- # waitforlisten 317681 /var/tmp/spdk-nbd.sock 00:06:45.219 09:55:30 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 317681 ']' 00:06:45.219 09:55:30 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:45.219 09:55:30 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:45.219 09:55:30 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:45.219 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
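The repetition visible above is driven by a three-iteration loop in event.sh. Its shape, reconstructed from the 'spdk_app_start Round N' echoes and the spdk_kill_instance/sleep pairs (details assumed; the -t 4 passed to app_repeat gives it time to relaunch the app between rounds):

  for i in {0..2}; do
      echo "spdk_app_start Round $i"
      waitforlisten $repeat_pid /var/tmp/spdk-nbd.sock
      # create Malloc0/Malloc1, start /dev/nbd0 and /dev/nbd1, write/verify, stop disks
      "$rootdir/scripts/rpc.py" -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
      sleep 3                                            # app_repeat restarts the app for the next round
  done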
00:06:45.219 09:55:30 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:45.219 09:55:30 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:45.476 09:55:30 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:45.476 09:55:30 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:06:45.476 09:55:30 event.app_repeat -- event/event.sh@39 -- # killprocess 317681 00:06:45.477 09:55:30 event.app_repeat -- common/autotest_common.sh@950 -- # '[' -z 317681 ']' 00:06:45.477 09:55:30 event.app_repeat -- common/autotest_common.sh@954 -- # kill -0 317681 00:06:45.477 09:55:30 event.app_repeat -- common/autotest_common.sh@955 -- # uname 00:06:45.477 09:55:30 event.app_repeat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:45.477 09:55:30 event.app_repeat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 317681 00:06:45.733 09:55:30 event.app_repeat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:45.733 09:55:30 event.app_repeat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:45.733 09:55:30 event.app_repeat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 317681' 00:06:45.733 killing process with pid 317681 00:06:45.733 09:55:30 event.app_repeat -- common/autotest_common.sh@969 -- # kill 317681 00:06:45.733 09:55:30 event.app_repeat -- common/autotest_common.sh@974 -- # wait 317681 00:06:45.991 spdk_app_start is called in Round 0. 00:06:45.991 Shutdown signal received, stop current app iteration 00:06:45.991 Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 reinitialization... 00:06:45.991 spdk_app_start is called in Round 1. 00:06:45.991 Shutdown signal received, stop current app iteration 00:06:45.991 Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 reinitialization... 00:06:45.991 spdk_app_start is called in Round 2. 00:06:45.991 Shutdown signal received, stop current app iteration 00:06:45.991 Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 reinitialization... 00:06:45.991 spdk_app_start is called in Round 3. 
00:06:45.991 Shutdown signal received, stop current app iteration 00:06:45.991 09:55:30 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:45.991 09:55:30 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:45.991 00:06:45.991 real 0m22.655s 00:06:45.991 user 0m51.363s 00:06:45.991 sys 0m4.578s 00:06:45.991 09:55:30 event.app_repeat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:45.991 09:55:30 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:45.991 ************************************ 00:06:45.991 END TEST app_repeat 00:06:45.991 ************************************ 00:06:45.991 09:55:30 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:45.991 09:55:30 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:45.991 09:55:30 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:45.991 09:55:30 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:45.991 09:55:30 event -- common/autotest_common.sh@10 -- # set +x 00:06:45.991 ************************************ 00:06:45.991 START TEST cpu_locks 00:06:45.991 ************************************ 00:06:45.991 09:55:30 event.cpu_locks -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:45.991 * Looking for test storage... 00:06:45.991 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:06:45.991 09:55:31 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:45.991 09:55:31 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:45.991 09:55:31 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:45.991 09:55:31 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:45.991 09:55:31 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:45.991 09:55:31 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:45.991 09:55:31 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:45.991 ************************************ 00:06:45.991 START TEST default_locks 00:06:45.991 ************************************ 00:06:45.991 09:55:31 event.cpu_locks.default_locks -- common/autotest_common.sh@1125 -- # default_locks 00:06:45.991 09:55:31 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=320517 00:06:45.991 09:55:31 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:45.991 09:55:31 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 320517 00:06:45.992 09:55:31 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 320517 ']' 00:06:45.992 09:55:31 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:45.992 09:55:31 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:45.992 09:55:31 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:45.992 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
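The default_locks test starting here checks that spdk_tgt, launched with core mask 0x1, holds its per-core lock file. The check in the trace below amounts to the sketch that follows (pid taken from the trace); the stray "lslocks: write error" seen below is most likely lslocks hitting a closed pipe once grep -q matches and exits early:

  locks_exist() {
      lslocks -p "$1" | grep -q spdk_cpu_lock            # is the core lock file held by this pid?
  }
  locks_exist 320517 && echo "core locks held by spdk_tgt"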
00:06:45.992 09:55:31 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:45.992 09:55:31 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:45.992 [2024-07-25 09:55:31.129146] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:06:45.992 [2024-07-25 09:55:31.129229] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid320517 ] 00:06:45.992 EAL: No free 2048 kB hugepages reported on node 1 00:06:46.249 [2024-07-25 09:55:31.193708] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.249 [2024-07-25 09:55:31.315838] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.180 09:55:32 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:47.180 09:55:32 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 0 00:06:47.180 09:55:32 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 320517 00:06:47.180 09:55:32 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 320517 00:06:47.180 09:55:32 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:47.746 lslocks: write error 00:06:47.746 09:55:32 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 320517 00:06:47.746 09:55:32 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # '[' -z 320517 ']' 00:06:47.746 09:55:32 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # kill -0 320517 00:06:47.746 09:55:32 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # uname 00:06:47.746 09:55:32 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:47.746 09:55:32 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 320517 00:06:47.746 09:55:32 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:47.746 09:55:32 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:47.746 09:55:32 event.cpu_locks.default_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 320517' 00:06:47.746 killing process with pid 320517 00:06:47.746 09:55:32 event.cpu_locks.default_locks -- common/autotest_common.sh@969 -- # kill 320517 00:06:47.746 09:55:32 event.cpu_locks.default_locks -- common/autotest_common.sh@974 -- # wait 320517 00:06:48.317 09:55:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 320517 00:06:48.317 09:55:33 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:06:48.317 09:55:33 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 320517 00:06:48.317 09:55:33 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:48.317 09:55:33 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:48.317 09:55:33 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:48.317 09:55:33 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:48.317 09:55:33 event.cpu_locks.default_locks -- 
common/autotest_common.sh@653 -- # waitforlisten 320517 00:06:48.317 09:55:33 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 320517 ']' 00:06:48.317 09:55:33 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:48.317 09:55:33 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:48.317 09:55:33 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:48.317 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:48.317 09:55:33 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:48.317 09:55:33 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:48.317 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (320517) - No such process 00:06:48.317 ERROR: process (pid: 320517) is no longer running 00:06:48.317 09:55:33 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:48.317 09:55:33 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 1 00:06:48.317 09:55:33 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:06:48.317 09:55:33 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:48.317 09:55:33 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:48.317 09:55:33 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:48.317 09:55:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:48.317 09:55:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:48.317 09:55:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:48.317 09:55:33 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:48.317 00:06:48.317 real 0m2.208s 00:06:48.317 user 0m2.391s 00:06:48.317 sys 0m0.735s 00:06:48.317 09:55:33 event.cpu_locks.default_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:48.317 09:55:33 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:48.317 ************************************ 00:06:48.317 END TEST default_locks 00:06:48.317 ************************************ 00:06:48.317 09:55:33 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:48.317 09:55:33 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:48.317 09:55:33 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:48.317 09:55:33 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:48.317 ************************************ 00:06:48.317 START TEST default_locks_via_rpc 00:06:48.317 ************************************ 00:06:48.317 09:55:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1125 -- # default_locks_via_rpc 00:06:48.317 09:55:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=320804 00:06:48.317 09:55:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:48.317 09:55:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 320804 
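The lock tests above all hinge on the same probe, traced as lslocks -p <pid> piped through grep -q spdk_cpu_lock: a target started on core mask 0x1 must hold a POSIX lock on its per-core file. A standalone sketch of that probe (the pgrep-based pid lookup is illustrative, not the harness's):

  # Check whether a running spdk_tgt holds its CPU core lock file(s)
  # (/var/tmp/spdk_cpu_lock_NNN, one per claimed core).
  pid=$(pgrep -f spdk_tgt | head -n1)
  if lslocks -p "$pid" | grep -q spdk_cpu_lock; then
      echo "pid $pid holds its core lock(s)"
  else
      echo "no spdk_cpu_lock entries for pid $pid"
  fi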
00:06:48.317 09:55:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 320804 ']' 00:06:48.317 09:55:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:48.317 09:55:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:48.317 09:55:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:48.317 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:48.317 09:55:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:48.317 09:55:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:48.317 [2024-07-25 09:55:33.404660] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:06:48.317 [2024-07-25 09:55:33.404757] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid320804 ] 00:06:48.317 EAL: No free 2048 kB hugepages reported on node 1 00:06:48.317 [2024-07-25 09:55:33.478223] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:48.574 [2024-07-25 09:55:33.600607] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:48.832 09:55:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:48.832 09:55:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:06:48.832 09:55:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:48.832 09:55:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:48.832 09:55:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:48.832 09:55:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:48.832 09:55:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:48.832 09:55:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:48.832 09:55:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:48.832 09:55:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:48.832 09:55:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:48.833 09:55:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:48.833 09:55:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:48.833 09:55:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:48.833 09:55:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 320804 00:06:48.833 09:55:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 320804 00:06:48.833 09:55:33 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:49.091 09:55:34 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 320804 
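default_locks_via_rpc above exercises the runtime toggle rather than a startup flag: framework_disable_cpumask_locks followed by framework_enable_cpumask_locks over /var/tmp/spdk.sock, after which the lock file must exist again. A sketch of the same calls via SPDK's rpc.py (the script path is an assumption; the method names appear verbatim in this log):

  # Toggle CPU core lock claiming on a live target over its RPC socket.
  RPC="./scripts/rpc.py -s /var/tmp/spdk.sock"
  $RPC framework_disable_cpumask_locks   # stop claiming core locks
  $RPC framework_enable_cpumask_locks    # re-claim; lslocks shows them again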
00:06:49.091 09:55:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # '[' -z 320804 ']' 00:06:49.091 09:55:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # kill -0 320804 00:06:49.091 09:55:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # uname 00:06:49.091 09:55:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:49.091 09:55:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 320804 00:06:49.091 09:55:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:49.091 09:55:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:49.091 09:55:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 320804' 00:06:49.091 killing process with pid 320804 00:06:49.091 09:55:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@969 -- # kill 320804 00:06:49.091 09:55:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@974 -- # wait 320804 00:06:49.657 00:06:49.657 real 0m1.337s 00:06:49.657 user 0m1.321s 00:06:49.657 sys 0m0.582s 00:06:49.657 09:55:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:49.657 09:55:34 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:49.657 ************************************ 00:06:49.657 END TEST default_locks_via_rpc 00:06:49.657 ************************************ 00:06:49.657 09:55:34 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:49.657 09:55:34 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:49.657 09:55:34 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:49.657 09:55:34 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:49.657 ************************************ 00:06:49.657 START TEST non_locking_app_on_locked_coremask 00:06:49.657 ************************************ 00:06:49.657 09:55:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # non_locking_app_on_locked_coremask 00:06:49.657 09:55:34 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=321072 00:06:49.657 09:55:34 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:49.657 09:55:34 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 321072 /var/tmp/spdk.sock 00:06:49.657 09:55:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 321072 ']' 00:06:49.657 09:55:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:49.657 09:55:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:49.657 09:55:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:06:49.657 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:49.657 09:55:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:49.657 09:55:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:49.657 [2024-07-25 09:55:34.781446] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:06:49.657 [2024-07-25 09:55:34.781545] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid321072 ] 00:06:49.657 EAL: No free 2048 kB hugepages reported on node 1 00:06:49.915 [2024-07-25 09:55:34.848756] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:49.915 [2024-07-25 09:55:34.974098] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.173 09:55:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:50.173 09:55:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:50.173 09:55:35 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=321092 00:06:50.173 09:55:35 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:50.173 09:55:35 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 321092 /var/tmp/spdk2.sock 00:06:50.173 09:55:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 321092 ']' 00:06:50.173 09:55:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:50.173 09:55:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:50.173 09:55:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:50.173 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:50.173 09:55:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:50.173 09:55:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:50.173 [2024-07-25 09:55:35.287758] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:06:50.173 [2024-07-25 09:55:35.287849] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid321092 ] 00:06:50.173 EAL: No free 2048 kB hugepages reported on node 1 00:06:50.431 [2024-07-25 09:55:35.383599] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:50.431 [2024-07-25 09:55:35.383632] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:50.689 [2024-07-25 09:55:35.630533] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.622 09:55:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:51.622 09:55:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:51.622 09:55:36 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 321072 00:06:51.622 09:55:36 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 321072 00:06:51.622 09:55:36 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:52.554 lslocks: write error 00:06:52.555 09:55:37 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 321072 00:06:52.555 09:55:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 321072 ']' 00:06:52.555 09:55:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 321072 00:06:52.555 09:55:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:52.555 09:55:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:52.555 09:55:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 321072 00:06:52.555 09:55:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:52.555 09:55:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:52.555 09:55:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 321072' 00:06:52.555 killing process with pid 321072 00:06:52.555 09:55:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 321072 00:06:52.555 09:55:37 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 321072 00:06:53.490 09:55:38 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 321092 00:06:53.490 09:55:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 321092 ']' 00:06:53.490 09:55:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 321092 00:06:53.490 09:55:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:53.490 09:55:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:53.490 09:55:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 321092 00:06:53.490 09:55:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:53.490 09:55:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:53.490 09:55:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 321092' 00:06:53.490 killing 
process with pid 321092 00:06:53.490 09:55:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 321092 00:06:53.490 09:55:38 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 321092 00:06:54.056 00:06:54.056 real 0m4.322s 00:06:54.056 user 0m4.757s 00:06:54.056 sys 0m1.459s 00:06:54.056 09:55:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:54.056 09:55:39 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:54.056 ************************************ 00:06:54.056 END TEST non_locking_app_on_locked_coremask 00:06:54.056 ************************************ 00:06:54.056 09:55:39 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:54.056 09:55:39 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:54.056 09:55:39 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:54.056 09:55:39 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:54.056 ************************************ 00:06:54.056 START TEST locking_app_on_unlocked_coremask 00:06:54.056 ************************************ 00:06:54.056 09:55:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_unlocked_coremask 00:06:54.056 09:55:39 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=321530 00:06:54.056 09:55:39 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:54.056 09:55:39 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 321530 /var/tmp/spdk.sock 00:06:54.056 09:55:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 321530 ']' 00:06:54.056 09:55:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:54.056 09:55:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:54.056 09:55:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:54.056 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:54.056 09:55:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:54.056 09:55:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:54.056 [2024-07-25 09:55:39.170848] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:06:54.056 [2024-07-25 09:55:39.170945] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid321530 ] 00:06:54.056 EAL: No free 2048 kB hugepages reported on node 1 00:06:54.314 [2024-07-25 09:55:39.240638] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:54.315 [2024-07-25 09:55:39.240676] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.315 [2024-07-25 09:55:39.366165] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.573 09:55:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:54.573 09:55:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:54.573 09:55:39 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=321650 00:06:54.573 09:55:39 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:54.573 09:55:39 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 321650 /var/tmp/spdk2.sock 00:06:54.573 09:55:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 321650 ']' 00:06:54.573 09:55:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:54.573 09:55:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:54.573 09:55:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:54.573 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:54.573 09:55:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:54.573 09:55:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:54.573 [2024-07-25 09:55:39.710504] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:06:54.573 [2024-07-25 09:55:39.710609] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid321650 ] 00:06:54.831 EAL: No free 2048 kB hugepages reported on node 1 00:06:54.831 [2024-07-25 09:55:39.819293] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:55.089 [2024-07-25 09:55:40.069733] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.654 09:55:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:55.654 09:55:40 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:55.654 09:55:40 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 321650 00:06:55.654 09:55:40 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 321650 00:06:55.654 09:55:40 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:57.049 lslocks: write error 00:06:57.049 09:55:41 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 321530 00:06:57.049 09:55:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 321530 ']' 00:06:57.049 09:55:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 321530 00:06:57.049 09:55:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:57.049 09:55:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:57.049 09:55:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 321530 00:06:57.049 09:55:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:57.049 09:55:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:57.049 09:55:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 321530' 00:06:57.049 killing process with pid 321530 00:06:57.049 09:55:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 321530 00:06:57.049 09:55:41 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 321530 00:06:57.997 09:55:42 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 321650 00:06:57.997 09:55:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 321650 ']' 00:06:57.997 09:55:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 321650 00:06:57.997 09:55:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname 00:06:57.997 09:55:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:57.997 09:55:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 321650 00:06:57.997 09:55:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # 
process_name=reactor_0 00:06:57.997 09:55:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:57.997 09:55:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 321650' 00:06:57.997 killing process with pid 321650 00:06:57.997 09:55:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 321650 00:06:57.997 09:55:42 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 321650 00:06:58.256 00:06:58.256 real 0m4.247s 00:06:58.256 user 0m4.754s 00:06:58.256 sys 0m1.431s 00:06:58.256 09:55:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:58.256 09:55:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:58.256 ************************************ 00:06:58.256 END TEST locking_app_on_unlocked_coremask 00:06:58.256 ************************************ 00:06:58.256 09:55:43 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:58.256 09:55:43 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:06:58.256 09:55:43 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:58.256 09:55:43 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:58.256 ************************************ 00:06:58.256 START TEST locking_app_on_locked_coremask 00:06:58.256 ************************************ 00:06:58.256 09:55:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_locked_coremask 00:06:58.256 09:55:43 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=322096 00:06:58.256 09:55:43 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:58.256 09:55:43 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 322096 /var/tmp/spdk.sock 00:06:58.256 09:55:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 322096 ']' 00:06:58.256 09:55:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:58.257 09:55:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:58.257 09:55:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:58.257 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:58.257 09:55:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:58.257 09:55:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:58.514 [2024-07-25 09:55:43.468743] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:06:58.514 [2024-07-25 09:55:43.468840] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid322096 ] 00:06:58.514 EAL: No free 2048 kB hugepages reported on node 1 00:06:58.514 [2024-07-25 09:55:43.536529] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.514 [2024-07-25 09:55:43.658670] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.772 09:55:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:58.772 09:55:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0 00:06:58.772 09:55:43 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=322217 00:06:58.772 09:55:43 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:58.772 09:55:43 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 322217 /var/tmp/spdk2.sock 00:06:58.772 09:55:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:06:58.773 09:55:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 322217 /var/tmp/spdk2.sock 00:06:58.773 09:55:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:06:58.773 09:55:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:58.773 09:55:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:06:58.773 09:55:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:58.773 09:55:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 322217 /var/tmp/spdk2.sock 00:06:58.773 09:55:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 322217 ']' 00:06:58.773 09:55:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:58.773 09:55:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:58.773 09:55:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:58.773 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:58.773 09:55:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:58.773 09:55:43 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:59.030 [2024-07-25 09:55:43.985257] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:06:59.030 [2024-07-25 09:55:43.985351] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid322217 ] 00:06:59.030 EAL: No free 2048 kB hugepages reported on node 1 00:06:59.030 [2024-07-25 09:55:44.089541] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 322096 has claimed it. 00:06:59.030 [2024-07-25 09:55:44.089600] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:59.963 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (322217) - No such process 00:06:59.963 ERROR: process (pid: 322217) is no longer running 00:06:59.963 09:55:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:59.963 09:55:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 1 00:06:59.963 09:55:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:06:59.963 09:55:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:59.963 09:55:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:59.963 09:55:44 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:59.963 09:55:44 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 322096 00:06:59.963 09:55:44 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 322096 00:06:59.963 09:55:44 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:00.528 lslocks: write error 00:07:00.528 09:55:45 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 322096 00:07:00.528 09:55:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 322096 ']' 00:07:00.528 09:55:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 322096 00:07:00.528 09:55:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname 00:07:00.528 09:55:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:00.528 09:55:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 322096 00:07:00.528 09:55:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:00.528 09:55:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:00.528 09:55:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 322096' 00:07:00.528 killing process with pid 322096 00:07:00.528 09:55:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 322096 00:07:00.528 09:55:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 322096 00:07:00.787 00:07:00.787 real 0m2.500s 00:07:00.787 user 0m2.914s 00:07:00.787 sys 0m0.717s 00:07:00.787 09:55:45 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:07:00.787 09:55:45 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:00.787 ************************************ 00:07:00.787 END TEST locking_app_on_locked_coremask 00:07:00.787 ************************************ 00:07:00.787 09:55:45 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:07:00.787 09:55:45 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:00.787 09:55:45 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:00.787 09:55:45 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:01.045 ************************************ 00:07:01.045 START TEST locking_overlapped_coremask 00:07:01.045 ************************************ 00:07:01.045 09:55:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask 00:07:01.045 09:55:45 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=322395 00:07:01.045 09:55:45 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:07:01.045 09:55:45 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 322395 /var/tmp/spdk.sock 00:07:01.045 09:55:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 322395 ']' 00:07:01.045 09:55:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:01.045 09:55:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:01.045 09:55:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:01.045 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:01.045 09:55:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:01.045 09:55:45 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:01.045 [2024-07-25 09:55:46.079547] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:07:01.045 [2024-07-25 09:55:46.079635] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid322395 ] 00:07:01.045 EAL: No free 2048 kB hugepages reported on node 1 00:07:01.045 [2024-07-25 09:55:46.184550] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:01.304 [2024-07-25 09:55:46.314192] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:01.304 [2024-07-25 09:55:46.314244] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:01.304 [2024-07-25 09:55:46.314248] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.562 09:55:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:01.562 09:55:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 0 00:07:01.562 09:55:46 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=322521 00:07:01.562 09:55:46 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:07:01.562 09:55:46 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 322521 /var/tmp/spdk2.sock 00:07:01.562 09:55:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:07:01.562 09:55:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 322521 /var/tmp/spdk2.sock 00:07:01.562 09:55:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:07:01.562 09:55:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:01.562 09:55:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:07:01.562 09:55:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:01.562 09:55:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 322521 /var/tmp/spdk2.sock 00:07:01.562 09:55:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 322521 ']' 00:07:01.562 09:55:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:01.562 09:55:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:01.562 09:55:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:01.562 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:01.562 09:55:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:01.562 09:55:46 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:01.562 [2024-07-25 09:55:46.661806] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:07:01.562 [2024-07-25 09:55:46.661919] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid322521 ] 00:07:01.562 EAL: No free 2048 kB hugepages reported on node 1 00:07:01.820 [2024-07-25 09:55:46.769749] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 322395 has claimed it. 00:07:01.820 [2024-07-25 09:55:46.769813] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:02.385 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (322521) - No such process 00:07:02.385 ERROR: process (pid: 322521) is no longer running 00:07:02.385 09:55:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:02.385 09:55:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 1 00:07:02.385 09:55:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:07:02.385 09:55:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:02.385 09:55:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:02.385 09:55:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:02.385 09:55:47 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:07:02.385 09:55:47 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:02.385 09:55:47 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:02.385 09:55:47 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:02.385 09:55:47 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 322395 00:07:02.385 09:55:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # '[' -z 322395 ']' 00:07:02.385 09:55:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # kill -0 322395 00:07:02.385 09:55:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # uname 00:07:02.385 09:55:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:02.385 09:55:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 322395 00:07:02.385 09:55:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:02.385 09:55:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:02.385 09:55:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 322395' 00:07:02.385 killing process with pid 322395 00:07:02.385 09:55:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@969 
-- # kill 322395 00:07:02.385 09:55:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@974 -- # wait 322395 00:07:02.951 00:07:02.951 real 0m1.976s 00:07:02.951 user 0m5.307s 00:07:02.951 sys 0m0.623s 00:07:02.951 09:55:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:02.951 09:55:47 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:02.951 ************************************ 00:07:02.951 END TEST locking_overlapped_coremask 00:07:02.951 ************************************ 00:07:02.951 09:55:47 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:07:02.951 09:55:47 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:02.951 09:55:47 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:02.951 09:55:47 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:02.951 ************************************ 00:07:02.951 START TEST locking_overlapped_coremask_via_rpc 00:07:02.951 ************************************ 00:07:02.951 09:55:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask_via_rpc 00:07:02.951 09:55:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=322687 00:07:02.951 09:55:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:07:02.951 09:55:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 322687 /var/tmp/spdk.sock 00:07:02.951 09:55:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 322687 ']' 00:07:02.951 09:55:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:02.951 09:55:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:02.951 09:55:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:02.951 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:02.951 09:55:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:02.951 09:55:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:02.951 [2024-07-25 09:55:48.102595] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:07:02.951 [2024-07-25 09:55:48.102692] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid322687 ] 00:07:03.210 EAL: No free 2048 kB hugepages reported on node 1 00:07:03.210 [2024-07-25 09:55:48.194460] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:03.210 [2024-07-25 09:55:48.194515] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:03.210 [2024-07-25 09:55:48.323230] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:03.210 [2024-07-25 09:55:48.323285] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:03.210 [2024-07-25 09:55:48.323289] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.468 09:55:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:03.468 09:55:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:07:03.468 09:55:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=322821 00:07:03.468 09:55:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 322821 /var/tmp/spdk2.sock 00:07:03.468 09:55:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 322821 ']' 00:07:03.468 09:55:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:03.468 09:55:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:03.468 09:55:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:07:03.468 09:55:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:03.468 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:03.468 09:55:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:03.468 09:55:48 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:03.726 [2024-07-25 09:55:48.656866] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:07:03.726 [2024-07-25 09:55:48.656961] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid322821 ] 00:07:03.726 EAL: No free 2048 kB hugepages reported on node 1 00:07:03.726 [2024-07-25 09:55:48.758469] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:03.726 [2024-07-25 09:55:48.758507] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:03.984 [2024-07-25 09:55:48.983004] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:03.984 [2024-07-25 09:55:48.983065] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:07:03.984 [2024-07-25 09:55:48.983067] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:04.917 09:55:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:04.917 09:55:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:07:04.917 09:55:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:07:04.917 09:55:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:04.917 09:55:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:04.917 09:55:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:04.917 09:55:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:04.917 09:55:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:07:04.917 09:55:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:04.917 09:55:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:07:04.917 09:55:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:04.917 09:55:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:07:04.917 09:55:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:04.917 09:55:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:04.917 09:55:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:04.917 09:55:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:04.917 [2024-07-25 09:55:49.801528] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 322687 has claimed it. 
00:07:04.917 request: 00:07:04.917 { 00:07:04.917 "method": "framework_enable_cpumask_locks", 00:07:04.917 "req_id": 1 00:07:04.917 } 00:07:04.917 Got JSON-RPC error response 00:07:04.917 response: 00:07:04.917 { 00:07:04.917 "code": -32603, 00:07:04.917 "message": "Failed to claim CPU core: 2" 00:07:04.917 } 00:07:04.917 09:55:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:07:04.917 09:55:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:07:04.917 09:55:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:04.917 09:55:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:04.917 09:55:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:04.917 09:55:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 322687 /var/tmp/spdk.sock 00:07:04.917 09:55:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 322687 ']' 00:07:04.917 09:55:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:04.917 09:55:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:04.917 09:55:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:04.917 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:04.917 09:55:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:04.917 09:55:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:05.175 09:55:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:05.175 09:55:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:07:05.175 09:55:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 322821 /var/tmp/spdk2.sock 00:07:05.175 09:55:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 322821 ']' 00:07:05.175 09:55:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:05.175 09:55:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:05.175 09:55:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:05.175 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
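The -32603 failure above is the point of this test: the second target was started with --disable-cpumask-locks, so enabling the locks over RPC collides with the lock files that pid 322687 already holds for core 2. As a minimal sketch (socket path, pid, and lock-file pattern taken from the log; the flock detail is an assumption about how the claim is implemented), the same failure can be reproduced by hand:

# replay the RPC that the NOT wrapper expects to fail
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
# core 2 maps to /var/tmp/spdk_cpu_lock_002; an advisory lock held by pid 322687
# keeps the claim from succeeding (flock-based sketch, not verified against the source):
exec 9>/var/tmp/spdk_cpu_lock_002
flock -n 9 || echo 'core 2 already claimed'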
00:07:05.175 09:55:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:05.175 09:55:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:05.433 09:55:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:05.433 09:55:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:07:05.433 09:55:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:07:05.433 09:55:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:05.433 09:55:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:05.433 09:55:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:05.433 00:07:05.433 real 0m2.403s 00:07:05.433 user 0m1.382s 00:07:05.433 sys 0m0.208s 00:07:05.433 09:55:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:05.433 09:55:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:05.433 ************************************ 00:07:05.433 END TEST locking_overlapped_coremask_via_rpc 00:07:05.433 ************************************ 00:07:05.433 09:55:50 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:07:05.433 09:55:50 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 322687 ]] 00:07:05.433 09:55:50 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 322687 00:07:05.433 09:55:50 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 322687 ']' 00:07:05.433 09:55:50 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 322687 00:07:05.433 09:55:50 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:07:05.433 09:55:50 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:05.433 09:55:50 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 322687 00:07:05.433 09:55:50 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:05.433 09:55:50 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:05.433 09:55:50 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 322687' 00:07:05.433 killing process with pid 322687 00:07:05.433 09:55:50 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 322687 00:07:05.433 09:55:50 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 322687 00:07:05.998 09:55:50 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 322821 ]] 00:07:05.998 09:55:50 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 322821 00:07:05.998 09:55:50 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 322821 ']' 00:07:05.998 09:55:50 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 322821 00:07:05.998 09:55:50 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:07:05.998 09:55:50 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 
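check_remaining_locks above passes because, with the locks re-enabled on the first target, exactly /var/tmp/spdk_cpu_lock_000 through _002 exist for cores 0-2. A standalone version of the same check, plus the kill -0 liveness probe that killprocess uses before escalating (both patterns copied from the trace):

# compare the lock files on disk against the expected set for cores 0-2
locks=(/var/tmp/spdk_cpu_lock_*)
locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
[[ ${locks[*]} == "${locks_expected[*]}" ]] && echo 'no stray core locks'
# kill -0 sends no signal; it only tests that the pid exists and is signalable
kill -0 322687 2>/dev/null && echo 'still running' || echo 'gone'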
00:07:05.998 09:55:50 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 322821 00:07:05.998 09:55:50 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:07:05.998 09:55:50 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:07:05.998 09:55:50 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 322821' 00:07:05.998 killing process with pid 322821 00:07:05.998 09:55:50 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 322821 00:07:05.998 09:55:50 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 322821 00:07:06.564 09:55:51 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:06.564 09:55:51 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:07:06.564 09:55:51 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 322687 ]] 00:07:06.564 09:55:51 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 322687 00:07:06.564 09:55:51 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 322687 ']' 00:07:06.564 09:55:51 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 322687 00:07:06.564 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (322687) - No such process 00:07:06.564 09:55:51 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 322687 is not found' 00:07:06.564 Process with pid 322687 is not found 00:07:06.564 09:55:51 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 322821 ]] 00:07:06.564 09:55:51 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 322821 00:07:06.564 09:55:51 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 322821 ']' 00:07:06.564 09:55:51 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 322821 00:07:06.564 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (322821) - No such process 00:07:06.564 09:55:51 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 322821 is not found' 00:07:06.564 Process with pid 322821 is not found 00:07:06.564 09:55:51 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:06.564 00:07:06.564 real 0m20.463s 00:07:06.564 user 0m35.575s 00:07:06.564 sys 0m6.758s 00:07:06.564 09:55:51 event.cpu_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:06.564 09:55:51 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:06.565 ************************************ 00:07:06.565 END TEST cpu_locks 00:07:06.565 ************************************ 00:07:06.565 00:07:06.565 real 0m51.613s 00:07:06.565 user 1m40.248s 00:07:06.565 sys 0m12.374s 00:07:06.565 09:55:51 event -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:06.565 09:55:51 event -- common/autotest_common.sh@10 -- # set +x 00:07:06.565 ************************************ 00:07:06.565 END TEST event 00:07:06.565 ************************************ 00:07:06.565 09:55:51 -- spdk/autotest.sh@182 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:07:06.565 09:55:51 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:06.565 09:55:51 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:06.565 09:55:51 -- common/autotest_common.sh@10 -- # set +x 00:07:06.565 ************************************ 00:07:06.565 START TEST thread 00:07:06.565 ************************************ 00:07:06.565 09:55:51 thread -- common/autotest_common.sh@1125 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:07:06.565 * Looking for test storage... 00:07:06.565 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:07:06.565 09:55:51 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:06.565 09:55:51 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:07:06.565 09:55:51 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:06.565 09:55:51 thread -- common/autotest_common.sh@10 -- # set +x 00:07:06.565 ************************************ 00:07:06.565 START TEST thread_poller_perf 00:07:06.565 ************************************ 00:07:06.565 09:55:51 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:06.565 [2024-07-25 09:55:51.638208] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:07:06.565 [2024-07-25 09:55:51.638290] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid323190 ] 00:07:06.565 EAL: No free 2048 kB hugepages reported on node 1 00:07:06.565 [2024-07-25 09:55:51.710834] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:06.824 [2024-07-25 09:55:51.836857] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.824 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:07:08.197 ====================================== 00:07:08.197 busy:2714202412 (cyc) 00:07:08.197 total_run_count: 290000 00:07:08.197 tsc_hz: 2700000000 (cyc) 00:07:08.197 ====================================== 00:07:08.197 poller_cost: 9359 (cyc), 3466 (nsec) 00:07:08.197 00:07:08.197 real 0m1.354s 00:07:08.197 user 0m1.259s 00:07:08.197 sys 0m0.089s 00:07:08.197 09:55:52 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:08.197 09:55:52 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:08.197 ************************************ 00:07:08.197 END TEST thread_poller_perf 00:07:08.197 ************************************ 00:07:08.197 09:55:53 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:08.197 09:55:53 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:07:08.197 09:55:53 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:08.197 09:55:53 thread -- common/autotest_common.sh@10 -- # set +x 00:07:08.197 ************************************ 00:07:08.197 START TEST thread_poller_perf 00:07:08.197 ************************************ 00:07:08.197 09:55:53 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:08.197 [2024-07-25 09:55:53.043617] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
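poller_cost in the banner above is derived directly from the counters it prints: busy cycles divided by total_run_count, then scaled by tsc_hz into nanoseconds. A quick sketch reproducing the first run's 9359 cyc / 3466 nsec from the logged values:

# values copied from the ====== block of the -l 1 run above
awk 'BEGIN {
    busy = 2714202412; runs = 290000; tsc_hz = 2700000000
    cyc = busy / runs                       # cycles per poller iteration
    printf "poller_cost: %d (cyc), %d (nsec)\n", cyc, cyc * 1e9 / tsc_hz
}'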
00:07:08.197 [2024-07-25 09:55:53.043680] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid323357 ] 00:07:08.197 EAL: No free 2048 kB hugepages reported on node 1 00:07:08.197 [2024-07-25 09:55:53.112719] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:08.197 [2024-07-25 09:55:53.236468] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.197 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:07:09.567 ====================================== 00:07:09.567 busy:2702843108 (cyc) 00:07:09.567 total_run_count: 3816000 00:07:09.567 tsc_hz: 2700000000 (cyc) 00:07:09.567 ====================================== 00:07:09.567 poller_cost: 708 (cyc), 262 (nsec) 00:07:09.567 00:07:09.567 real 0m1.336s 00:07:09.567 user 0m1.240s 00:07:09.567 sys 0m0.090s 00:07:09.567 09:55:54 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:09.567 09:55:54 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:09.567 ************************************ 00:07:09.567 END TEST thread_poller_perf 00:07:09.567 ************************************ 00:07:09.567 09:55:54 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:07:09.567 00:07:09.567 real 0m2.871s 00:07:09.567 user 0m2.578s 00:07:09.567 sys 0m0.294s 00:07:09.567 09:55:54 thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:09.567 09:55:54 thread -- common/autotest_common.sh@10 -- # set +x 00:07:09.567 ************************************ 00:07:09.567 END TEST thread 00:07:09.567 ************************************ 00:07:09.567 09:55:54 -- spdk/autotest.sh@184 -- # [[ 0 -eq 1 ]] 00:07:09.567 09:55:54 -- spdk/autotest.sh@189 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:07:09.567 09:55:54 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:09.567 09:55:54 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:09.567 09:55:54 -- common/autotest_common.sh@10 -- # set +x 00:07:09.567 ************************************ 00:07:09.567 START TEST app_cmdline 00:07:09.567 ************************************ 00:07:09.567 09:55:54 app_cmdline -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:07:09.567 * Looking for test storage... 00:07:09.567 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:09.567 09:55:54 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:09.567 09:55:54 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=323666 00:07:09.567 09:55:54 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:09.567 09:55:54 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 323666 00:07:09.567 09:55:54 app_cmdline -- common/autotest_common.sh@831 -- # '[' -z 323666 ']' 00:07:09.567 09:55:54 app_cmdline -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:09.567 09:55:54 app_cmdline -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:09.567 09:55:54 app_cmdline -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:07:09.567 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:09.567 09:55:54 app_cmdline -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:09.567 09:55:54 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:09.567 [2024-07-25 09:55:54.617165] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:07:09.567 [2024-07-25 09:55:54.617354] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid323666 ] 00:07:09.567 EAL: No free 2048 kB hugepages reported on node 1 00:07:09.567 [2024-07-25 09:55:54.707440] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:09.825 [2024-07-25 09:55:54.834179] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.756 09:55:55 app_cmdline -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:10.756 09:55:55 app_cmdline -- common/autotest_common.sh@864 -- # return 0 00:07:10.756 09:55:55 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:07:10.756 { 00:07:10.756 "version": "SPDK v24.09-pre git sha1 704257090", 00:07:10.756 "fields": { 00:07:10.756 "major": 24, 00:07:10.756 "minor": 9, 00:07:10.756 "patch": 0, 00:07:10.756 "suffix": "-pre", 00:07:10.756 "commit": "704257090" 00:07:10.756 } 00:07:10.756 } 00:07:10.756 09:55:55 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:07:10.756 09:55:55 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:10.756 09:55:55 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:10.756 09:55:55 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:10.756 09:55:55 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:10.756 09:55:55 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:10.756 09:55:55 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:10.756 09:55:55 app_cmdline -- app/cmdline.sh@26 -- # sort 00:07:10.756 09:55:55 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:10.756 09:55:55 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:11.013 09:55:55 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:11.013 09:55:55 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:11.013 09:55:55 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:11.013 09:55:55 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:07:11.013 09:55:55 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:11.013 09:55:55 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:11.013 09:55:55 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:11.013 09:55:55 app_cmdline -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:11.013 09:55:55 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 
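Because the target above was launched with --rpcs-allowed spdk_get_version,rpc_get_methods, only those two methods answer; anything else is rejected, which is what the env_dpdk_get_mem_stats probe below demonstrates. The two permitted calls can be issued directly (rpc.py path from the log, jq filters as used by the test; the '.version' selector is an assumption based on the payload shown above):

# enumerate the allowed methods and read the version over the default /var/tmp/spdk.sock
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py rpc_get_methods | jq -r '.[]' | sort
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version | jq -r '.version'
# expected: rpc_get_methods, spdk_get_version, and "SPDK v24.09-pre git sha1 704257090"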
00:07:11.013 09:55:55 app_cmdline -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:11.013 09:55:55 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:11.013 09:55:55 app_cmdline -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:11.013 09:55:55 app_cmdline -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:11.013 09:55:55 app_cmdline -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:11.578 request: 00:07:11.578 { 00:07:11.578 "method": "env_dpdk_get_mem_stats", 00:07:11.578 "req_id": 1 00:07:11.578 } 00:07:11.578 Got JSON-RPC error response 00:07:11.578 response: 00:07:11.578 { 00:07:11.578 "code": -32601, 00:07:11.578 "message": "Method not found" 00:07:11.578 } 00:07:11.578 09:55:56 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:07:11.578 09:55:56 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:11.578 09:55:56 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:11.578 09:55:56 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:11.578 09:55:56 app_cmdline -- app/cmdline.sh@1 -- # killprocess 323666 00:07:11.578 09:55:56 app_cmdline -- common/autotest_common.sh@950 -- # '[' -z 323666 ']' 00:07:11.578 09:55:56 app_cmdline -- common/autotest_common.sh@954 -- # kill -0 323666 00:07:11.578 09:55:56 app_cmdline -- common/autotest_common.sh@955 -- # uname 00:07:11.578 09:55:56 app_cmdline -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:11.578 09:55:56 app_cmdline -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 323666 00:07:11.578 09:55:56 app_cmdline -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:11.578 09:55:56 app_cmdline -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:11.578 09:55:56 app_cmdline -- common/autotest_common.sh@968 -- # echo 'killing process with pid 323666' 00:07:11.578 killing process with pid 323666 00:07:11.578 09:55:56 app_cmdline -- common/autotest_common.sh@969 -- # kill 323666 00:07:11.578 09:55:56 app_cmdline -- common/autotest_common.sh@974 -- # wait 323666 00:07:12.143 00:07:12.143 real 0m2.641s 00:07:12.143 user 0m3.580s 00:07:12.143 sys 0m0.609s 00:07:12.143 09:55:57 app_cmdline -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:12.143 09:55:57 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:12.143 ************************************ 00:07:12.143 END TEST app_cmdline 00:07:12.143 ************************************ 00:07:12.143 09:55:57 -- spdk/autotest.sh@190 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:12.143 09:55:57 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:12.143 09:55:57 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:12.143 09:55:57 -- common/autotest_common.sh@10 -- # set +x 00:07:12.143 ************************************ 00:07:12.143 START TEST version 00:07:12.143 ************************************ 00:07:12.143 09:55:57 version -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:12.143 * Looking for test storage... 
00:07:12.143 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:12.143 09:55:57 version -- app/version.sh@17 -- # get_header_version major 00:07:12.143 09:55:57 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:12.143 09:55:57 version -- app/version.sh@14 -- # cut -f2 00:07:12.143 09:55:57 version -- app/version.sh@14 -- # tr -d '"' 00:07:12.143 09:55:57 version -- app/version.sh@17 -- # major=24 00:07:12.143 09:55:57 version -- app/version.sh@18 -- # get_header_version minor 00:07:12.143 09:55:57 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:12.143 09:55:57 version -- app/version.sh@14 -- # cut -f2 00:07:12.143 09:55:57 version -- app/version.sh@14 -- # tr -d '"' 00:07:12.143 09:55:57 version -- app/version.sh@18 -- # minor=9 00:07:12.143 09:55:57 version -- app/version.sh@19 -- # get_header_version patch 00:07:12.143 09:55:57 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:12.143 09:55:57 version -- app/version.sh@14 -- # cut -f2 00:07:12.143 09:55:57 version -- app/version.sh@14 -- # tr -d '"' 00:07:12.143 09:55:57 version -- app/version.sh@19 -- # patch=0 00:07:12.143 09:55:57 version -- app/version.sh@20 -- # get_header_version suffix 00:07:12.143 09:55:57 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:12.143 09:55:57 version -- app/version.sh@14 -- # cut -f2 00:07:12.143 09:55:57 version -- app/version.sh@14 -- # tr -d '"' 00:07:12.143 09:55:57 version -- app/version.sh@20 -- # suffix=-pre 00:07:12.143 09:55:57 version -- app/version.sh@22 -- # version=24.9 00:07:12.143 09:55:57 version -- app/version.sh@25 -- # (( patch != 0 )) 00:07:12.143 09:55:57 version -- app/version.sh@28 -- # version=24.9rc0 00:07:12.143 09:55:57 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:12.143 09:55:57 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:12.143 09:55:57 version -- app/version.sh@30 -- # py_version=24.9rc0 00:07:12.144 09:55:57 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:07:12.144 00:07:12.144 real 0m0.125s 00:07:12.144 user 0m0.068s 00:07:12.144 sys 0m0.081s 00:07:12.144 09:55:57 version -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:12.144 09:55:57 version -- common/autotest_common.sh@10 -- # set +x 00:07:12.144 ************************************ 00:07:12.144 END TEST version 00:07:12.144 ************************************ 00:07:12.144 09:55:57 -- spdk/autotest.sh@192 -- # '[' 0 -eq 1 ']' 00:07:12.144 09:55:57 -- spdk/autotest.sh@202 -- # uname -s 00:07:12.144 09:55:57 -- spdk/autotest.sh@202 -- # [[ Linux == Linux ]] 00:07:12.144 09:55:57 -- spdk/autotest.sh@203 -- # [[ 0 -eq 1 ]] 00:07:12.144 09:55:57 -- spdk/autotest.sh@203 -- # [[ 0 -eq 1 ]] 00:07:12.144 09:55:57 -- spdk/autotest.sh@215 -- # '[' 0 -eq 1 ']' 
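get_header_version above is a plain text extraction: each component is grepped out of the SPDK_VERSION_* defines in include/spdk/version.h, the tab-separated value is cut out, and the quotes are stripped. The same one-liner, standalone (header path from the log):

# extract the major version exactly the way version.sh does
grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h | cut -f2 | tr -d '"'
# prints 24; MINOR, PATCH and SUFFIX follow the same pattern, and since patch is 0
# the -pre suffix becomes rc0, giving the 24.9rc0 string the test compares against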
00:07:12.144 09:55:57 -- spdk/autotest.sh@260 -- # '[' 0 -eq 1 ']' 00:07:12.144 09:55:57 -- spdk/autotest.sh@264 -- # timing_exit lib 00:07:12.144 09:55:57 -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:12.144 09:55:57 -- common/autotest_common.sh@10 -- # set +x 00:07:12.403 09:55:57 -- spdk/autotest.sh@266 -- # '[' 0 -eq 1 ']' 00:07:12.403 09:55:57 -- spdk/autotest.sh@274 -- # '[' 0 -eq 1 ']' 00:07:12.403 09:55:57 -- spdk/autotest.sh@283 -- # '[' 1 -eq 1 ']' 00:07:12.403 09:55:57 -- spdk/autotest.sh@284 -- # export NET_TYPE 00:07:12.403 09:55:57 -- spdk/autotest.sh@287 -- # '[' tcp = rdma ']' 00:07:12.403 09:55:57 -- spdk/autotest.sh@290 -- # '[' tcp = tcp ']' 00:07:12.403 09:55:57 -- spdk/autotest.sh@291 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:12.403 09:55:57 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:12.403 09:55:57 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:12.403 09:55:57 -- common/autotest_common.sh@10 -- # set +x 00:07:12.403 ************************************ 00:07:12.403 START TEST nvmf_tcp 00:07:12.403 ************************************ 00:07:12.403 09:55:57 nvmf_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:12.403 * Looking for test storage... 00:07:12.403 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:07:12.403 09:55:57 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:07:12.403 09:55:57 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:07:12.403 09:55:57 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:07:12.403 09:55:57 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:12.403 09:55:57 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:12.403 09:55:57 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:12.403 ************************************ 00:07:12.403 START TEST nvmf_target_core 00:07:12.403 ************************************ 00:07:12.403 09:55:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:07:12.403 * Looking for test storage... 00:07:12.403 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:07:12.403 09:55:57 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:07:12.403 09:55:57 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:07:12.403 09:55:57 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:12.403 09:55:57 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:07:12.403 09:55:57 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:12.403 09:55:57 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:12.403 09:55:57 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:12.403 09:55:57 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:12.403 09:55:57 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:12.403 09:55:57 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:12.403 09:55:57 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:12.403 09:55:57 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:12.403 09:55:57 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:12.403 09:55:57 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:12.403 09:55:57 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:07:12.403 09:55:57 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:07:12.403 09:55:57 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:12.403 09:55:57 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:12.403 09:55:57 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:12.403 09:55:57 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:12.403 09:55:57 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:12.403 09:55:57 nvmf_tcp.nvmf_target_core -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:12.403 09:55:57 nvmf_tcp.nvmf_target_core -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:12.403 09:55:57 nvmf_tcp.nvmf_target_core -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:12.403 09:55:57 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:12.403 09:55:57 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:12.403 09:55:57 nvmf_tcp.nvmf_target_core -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:12.403 09:55:57 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:07:12.403 09:55:57 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:12.403 09:55:57 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@47 -- # : 0 00:07:12.403 09:55:57 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:12.403 09:55:57 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:12.403 09:55:57 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:12.403 09:55:57 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:12.403 09:55:57 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:12.403 09:55:57 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:12.403 09:55:57 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:12.403 09:55:57 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:12.404 09:55:57 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:07:12.404 09:55:57 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:07:12.404 09:55:57 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:07:12.404 09:55:57 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:07:12.404 09:55:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:12.404 09:55:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:12.404 09:55:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:12.404 ************************************ 00:07:12.404 START TEST nvmf_abort 00:07:12.404 ************************************ 00:07:12.404 09:55:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:07:12.682 * Looking for test storage... 
00:07:12.682 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:12.682 09:55:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:12.682 09:55:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:07:12.682 09:55:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:12.682 09:55:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:12.682 09:55:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:12.682 09:55:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:12.682 09:55:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:12.682 09:55:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:12.682 09:55:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:12.682 09:55:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:12.682 09:55:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:12.682 09:55:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:12.682 09:55:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:07:12.682 09:55:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:07:12.682 09:55:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:12.682 09:55:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:12.682 09:55:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:12.682 09:55:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:12.682 09:55:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:12.682 09:55:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:12.682 09:55:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:12.682 09:55:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:12.682 09:55:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:12.682 09:55:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:12.682 09:55:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:12.682 09:55:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:07:12.682 09:55:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:12.682 09:55:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@47 -- # : 0 00:07:12.682 09:55:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:12.682 09:55:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:12.683 09:55:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:12.683 09:55:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:12.683 09:55:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:12.683 09:55:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:12.683 09:55:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:12.683 09:55:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:12.683 09:55:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:12.683 09:55:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:07:12.683 09:55:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:07:12.683 09:55:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:12.683 09:55:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 
00:07:12.683 09:55:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:12.683 09:55:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:12.683 09:55:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:12.683 09:55:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:12.683 09:55:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:12.683 09:55:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:12.683 09:55:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:12.683 09:55:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:12.683 09:55:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@285 -- # xtrace_disable 00:07:12.683 09:55:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:15.224 09:56:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:15.224 09:56:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # pci_devs=() 00:07:15.224 09:56:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:15.224 09:56:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:15.224 09:56:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:15.224 09:56:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:15.224 09:56:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:15.224 09:56:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@295 -- # net_devs=() 00:07:15.224 09:56:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:15.224 09:56:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@296 -- # e810=() 00:07:15.224 09:56:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@296 -- # local -ga e810 00:07:15.224 09:56:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # x722=() 00:07:15.224 09:56:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # local -ga x722 00:07:15.224 09:56:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # mlx=() 00:07:15.224 09:56:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # local -ga mlx 00:07:15.224 09:56:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:15.224 09:56:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:15.224 09:56:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:15.224 09:56:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:15.224 09:56:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:15.224 09:56:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:15.224 09:56:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 
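The scan that follows matches each supported PCI ID against the e810/x722/mlx tables built above and then resolves the function to its kernel netdev through sysfs, which is where the "Found net devices under 0000:84:00.0" lines come from. The lookup reduces to (bus addresses and names from the log):

# netdev name bound to a PCI function, as used to discover the test interfaces
ls /sys/bus/pci/devices/0000:84:00.0/net/   # -> cvl_0_0 (ice-driven port, device id 0x159b)
ls /sys/bus/pci/devices/0000:84:00.1/net/   # -> cvl_0_1 (the second port found below)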
00:07:15.224 09:56:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:15.224 09:56:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:15.224 09:56:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:15.224 09:56:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:15.224 09:56:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:15.224 09:56:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:15.224 09:56:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:15.224 09:56:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:15.224 09:56:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:15.224 09:56:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:15.224 09:56:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:15.224 09:56:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:07:15.224 Found 0000:84:00.0 (0x8086 - 0x159b) 00:07:15.224 09:56:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:15.224 09:56:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:15.224 09:56:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:15.224 09:56:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:15.224 09:56:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:15.224 09:56:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:15.224 09:56:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:07:15.224 Found 0000:84:00.1 (0x8086 - 0x159b) 00:07:15.224 09:56:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:15.224 09:56:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:15.224 09:56:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:15.224 09:56:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:15.224 09:56:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:15.224 09:56:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:15.224 09:56:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:15.224 09:56:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:15.224 09:56:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:15.224 09:56:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:15.224 09:56:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:15.224 09:56:00 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:15.224 09:56:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:15.224 09:56:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:15.224 09:56:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:15.224 09:56:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:07:15.224 Found net devices under 0000:84:00.0: cvl_0_0 00:07:15.224 09:56:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:15.224 09:56:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:15.224 09:56:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:15.224 09:56:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:15.224 09:56:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:15.224 09:56:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:15.224 09:56:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:15.224 09:56:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:15.224 09:56:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:07:15.224 Found net devices under 0000:84:00.1: cvl_0_1 00:07:15.224 09:56:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:15.224 09:56:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:15.224 09:56:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # is_hw=yes 00:07:15.224 09:56:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:15.224 09:56:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:15.224 09:56:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:15.224 09:56:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:15.224 09:56:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:15.224 09:56:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:15.225 09:56:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:15.225 09:56:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:15.225 09:56:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:15.225 09:56:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:15.225 09:56:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:15.225 09:56:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:15.225 09:56:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:15.225 
09:56:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:15.225 09:56:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:15.225 09:56:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:15.225 09:56:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:15.225 09:56:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:15.225 09:56:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:15.225 09:56:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:15.225 09:56:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:15.225 09:56:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:15.225 09:56:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:15.225 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:15.225 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.296 ms 00:07:15.225 00:07:15.225 --- 10.0.0.2 ping statistics --- 00:07:15.225 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:15.225 rtt min/avg/max/mdev = 0.296/0.296/0.296/0.000 ms 00:07:15.225 09:56:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:15.225 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:15.225 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.182 ms 00:07:15.225 00:07:15.225 --- 10.0.0.1 ping statistics --- 00:07:15.225 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:15.225 rtt min/avg/max/mdev = 0.182/0.182/0.182/0.000 ms 00:07:15.225 09:56:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:15.225 09:56:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # return 0 00:07:15.225 09:56:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:15.225 09:56:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:15.225 09:56:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:15.225 09:56:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:15.225 09:56:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:15.225 09:56:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:15.225 09:56:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:15.225 09:56:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:07:15.225 09:56:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:15.225 09:56:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:15.225 09:56:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:15.225 09:56:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@481 -- # 
00:07:15.225 09:56:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE
00:07:15.225 09:56:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:07:15.225 09:56:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@724 -- # xtrace_disable
00:07:15.225 09:56:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x
00:07:15.225 09:56:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@481 -- # nvmfpid=325868
00:07:15.225 09:56:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # waitforlisten 325868
00:07:15.225 09:56:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:07:15.225 09:56:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@831 -- # '[' -z 325868 ']'
00:07:15.225 09:56:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:15.225 09:56:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@836 -- # local max_retries=100
00:07:15.225 09:56:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:07:15.225 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:07:15.225 09:56:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # xtrace_disable
00:07:15.225 09:56:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x
00:07:15.225 [2024-07-25 09:56:00.347194] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization...
00:07:15.225 [2024-07-25 09:56:00.347302] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:07:15.225 EAL: No free 2048 kB hugepages reported on node 1
00:07:15.483 [2024-07-25 09:56:00.427293] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3
00:07:15.483 [2024-07-25 09:56:00.554350] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:07:15.483 [2024-07-25 09:56:00.554424] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:07:15.483 [2024-07-25 09:56:00.554450] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:07:15.483 [2024-07-25 09:56:00.554464] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:07:15.483 [2024-07-25 09:56:00.554476] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:07:15.483 [2024-07-25 09:56:00.554561] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:07:15.483 [2024-07-25 09:56:00.554616] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3
00:07:15.483 [2024-07-25 09:56:00.554619] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:07:15.740 09:56:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:07:15.740 09:56:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # return 0
00:07:15.740 09:56:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:07:15.740 09:56:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@730 -- # xtrace_disable
00:07:15.740 09:56:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x
00:07:15.740 09:56:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:07:15.740 09:56:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256
00:07:15.740 09:56:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:15.740 09:56:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x
00:07:15.740 [2024-07-25 09:56:00.843003] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:07:15.740 09:56:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:15.740 09:56:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0
00:07:15.740 09:56:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:15.740 09:56:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x
00:07:15.740 Malloc0
00:07:15.740 09:56:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:15.740 09:56:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:07:15.740 09:56:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:15.740 09:56:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x
00:07:15.740 Delay0
00:07:15.740 09:56:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:15.740 09:56:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
00:07:15.740 09:56:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:15.999 09:56:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x
00:07:15.999 09:56:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:15.999 09:56:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
00:07:15.999 09:56:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:15.999 09:56:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x
00:07:15.999 09:56:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
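Condensed, the target bring-up the test just performed is the following sequence; rpc_cmd is the harness's wrapper around scripts/rpc.py, which talks to the default socket /var/tmp/spdk.sock that waitforlisten polls for. Paths are shortened here for readability; everything else is from the trace:

  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -a 256
  ./scripts/rpc.py bdev_malloc_create 64 4096 -b Malloc0    # RAM-backed bdev, 4 KiB blocks
  ./scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 \
      -r 1000000 -t 1000000 -w 1000000 -n 1000000           # injected latency on all I/O paths
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0

The delay bdev is the interesting choice: with I/O parked for on the order of a second (assuming the usual microsecond units for bdev_delay_create), the abort example below gets a realistic window in which its abort requests can catch commands still in flight.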
00:07:15.999 09:56:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
00:07:15.999 09:56:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:15.999 09:56:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x
00:07:15.999 [2024-07-25 09:56:00.921342] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:07:15.999 09:56:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:15.999 09:56:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:07:15.999 09:56:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:15.999 09:56:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x
00:07:15.999 09:56:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:15.999 09:56:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128
00:07:15.999 EAL: No free 2048 kB hugepages reported on node 1
00:07:15.999 [2024-07-25 09:56:01.027741] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral
00:07:18.522 Initializing NVMe Controllers
00:07:18.522 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0
00:07:18.522 controller IO queue size 128 less than required
00:07:18.522 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver.
00:07:18.522 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0
00:07:18.522 Initialization complete. Launching workers.
00:07:18.522 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 30721
00:07:18.522 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 30782, failed to submit 62
00:07:18.522 success 30725, unsuccess 57, failed 0
00:07:18.522 09:56:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:07:18.522 09:56:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable
00:07:18.522 09:56:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x
00:07:18.522 09:56:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:07:18.522 09:56:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT
00:07:18.522 09:56:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini
00:07:18.522 09:56:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@488 -- # nvmfcleanup
00:07:18.522 09:56:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # sync
00:07:18.522 09:56:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:07:18.522 09:56:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@120 -- # set +e
00:07:18.522 09:56:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # for i in {1..20}
00:07:18.522 09:56:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:07:18.522 rmmod nvme_tcp
00:07:18.522 rmmod nvme_fabrics
00:07:18.522 rmmod nvme_keyring
00:07:18.522 09:56:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:07:18.522 09:56:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set -e
00:07:18.522 09:56:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # return 0
00:07:18.522 09:56:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@489 -- # '[' -n 325868 ']'
00:07:18.522 09:56:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@490 -- # killprocess 325868
00:07:18.522 09:56:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@950 -- # '[' -z 325868 ']'
00:07:18.522 09:56:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # kill -0 325868
00:07:18.522 09:56:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@955 -- # uname
00:07:18.522 09:56:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:07:18.522 09:56:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 325868
00:07:18.523 09:56:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:07:18.523 09:56:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:07:18.523 09:56:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@968 -- # echo 'killing process with pid 325868'
00:07:18.523 killing process with pid 325868
00:07:18.523 09:56:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@969 -- # kill 325868
00:07:18.523 09:56:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@974 -- # wait 325868
00:07:18.523 09:56:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:07:18.523 09:56:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:07:18.523 09:56:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:07:18.523 09:56:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:07:18.523 09:56:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # remove_spdk_ns
00:07:18.523 09:56:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:07:18.523 09:56:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:07:18.523 09:56:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:07:20.427 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:07:20.427
00:07:20.427 real 0m8.005s
00:07:20.427 user 0m11.225s
00:07:20.427 sys 0m3.091s
00:07:20.427 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1126 -- # xtrace_disable
00:07:20.427 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x
00:07:20.427 ************************************
00:07:20.427 END TEST nvmf_abort
00:07:20.427 ************************************
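The counters in the abort run's summary are internally consistent, which is the property worth eyeballing: every I/O that did not complete normally was targeted by an abort attempt, and the two views of the run agree. A quick shell-arithmetic check, with all numbers taken from the summary above:

  echo $(( 123 + 30721 ))   # I/Os completed + failed (aborted)       = 30844
  echo $(( 30782 + 62 ))    # aborts submitted + failed to submit     = 30844
  echo $(( 30725 + 57 ))    # abort success + unsuccess               = 30782 submitted

("unsuccess" is the abort example's own wording for aborts that completed without catching their command, presumably because the target I/O finished first.)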
00:07:20.427 09:56:05 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp
00:07:20.427 09:56:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:07:20.427 09:56:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable
00:07:20.427 09:56:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:07:20.686 ************************************
00:07:20.686 START TEST nvmf_ns_hotplug_stress
00:07:20.686 ************************************
00:07:20.686 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp
00:07:20.686 * Looking for test storage...
00:07:20.686 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:07:20.686 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:07:20.686 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s
00:07:20.686 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:07:20.686 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:07:20.686 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:07:20.686 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:07:20.686 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:07:20.686 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:07:20.686 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:07:20.686 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:07:20.686 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:07:20.686 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:07:20.686 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02
00:07:20.686 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02
00:07:20.686 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:07:20.686 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:07:20.686 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:07:20.686 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:07:20.686 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:07:20.686 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:07:20.686 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:07:20.686 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:07:20.686 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:20.686 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:20.686 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:20.686 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH
00:07:20.686 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:07:20.686 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # : 0
00:07:20.686 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:07:20.686 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:07:20.686 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:07:20.686 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:07:20.686 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:07:20.686 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:07:20.686 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:07:20.686 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # have_pci_nics=0
00:07:20.686 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:07:20.686 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit
00:07:20.686 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']'
00:07:20.686 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:07:20.686 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # prepare_net_devs
00:07:20.686 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # local -g is_hw=no
00:07:20.686 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # remove_spdk_ns
00:07:20.686 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:07:20.686 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:07:20.686 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:07:20.686 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]]
00:07:20.686 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs
00:07:20.686 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@285 -- # xtrace_disable
00:07:20.686 09:56:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:07:23.219 09:56:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:07:23.219 09:56:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # pci_devs=()
00:07:23.219 09:56:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # local -a pci_devs
00:07:23.219 09:56:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # pci_net_devs=()
00:07:23.219 09:56:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs
00:07:23.219 09:56:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # pci_drivers=()
00:07:23.219 09:56:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # local -A pci_drivers
00:07:23.219 09:56:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # net_devs=()
00:07:23.219 09:56:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # local -ga net_devs
00:07:23.219 09:56:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # e810=()
00:07:23.219 09:56:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # local -ga e810
00:07:23.219 09:56:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # x722=()
00:07:23.219 09:56:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # local -ga x722
00:07:23.219 09:56:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # mlx=()
00:07:23.219 09:56:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # local -ga mlx
00:07:23.219 09:56:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:07:23.219 09:56:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:07:23.219 09:56:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:07:23.219 09:56:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:07:23.219 09:56:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:07:23.219 09:56:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:07:23.219 09:56:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:07:23.219 09:56:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:07:23.219 09:56:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:07:23.219 09:56:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:07:23.219 09:56:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:07:23.219 09:56:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}")
00:07:23.219 09:56:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]]
00:07:23.219 09:56:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]]
00:07:23.219 09:56:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]]
00:07:23.219 09:56:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}")
00:07:23.220 09:56:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@335 -- # (( 2 == 0 ))
00:07:23.220 09:56:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:07:23.220 09:56:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)'
00:07:23.220 Found 0000:84:00.0 (0x8086 - 0x159b)
00:07:23.220 09:56:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:07:23.220 09:56:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:07:23.220 09:56:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:07:23.220 09:56:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:07:23.220 09:56:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:07:23.220 09:56:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:07:23.220 09:56:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)'
00:07:23.220 Found 0000:84:00.1 (0x8086 - 0x159b)
00:07:23.220 09:56:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:07:23.220 09:56:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:07:23.220 09:56:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:07:23.220 09:56:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:07:23.220 09:56:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:07:23.220 09:56:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # (( 0 > 0 ))
00:07:23.220 09:56:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]]
00:07:23.220 09:56:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]]
00:07:23.220 09:56:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:07:23.220 09:56:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:07:23.220 09:56:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:07:23.220 09:56:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:07:23.220 09:56:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]]
00:07:23.220 09:56:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:07:23.220 09:56:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:07:23.220 09:56:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0'
00:07:23.220 Found net devices under 0000:84:00.0: cvl_0_0
00:07:23.220 09:56:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:07:23.220 09:56:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:07:23.220 09:56:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:07:23.220 09:56:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:07:23.220 09:56:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:07:23.220 09:56:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]]
00:07:23.220 09:56:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:07:23.220 09:56:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:07:23.220 09:56:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1'
00:07:23.220 Found net devices under 0000:84:00.1: cvl_0_1
00:07:23.220 09:56:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:07:23.220 09:56:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@404 -- # (( 2 == 0 ))
00:07:23.220 09:56:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # is_hw=yes
00:07:23.220 09:56:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]]
00:07:23.220 09:56:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]]
00:07:23.220 09:56:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init
00:07:23.220 09:56:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1
00:07:23.220 09:56:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:07:23.220 09:56:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:07:23.220 09:56:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # (( 2 > 1 ))
00:07:23.220 09:56:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:07:23.220 09:56:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:07:23.220 09:56:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:07:23.220 09:56:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:07:23.220 09:56:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:07:23.220 09:56:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:07:23.220 09:56:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:07:23.220 09:56:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:07:23.220 09:56:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:07:23.220 09:56:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:07:23.220 09:56:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:07:23.220 09:56:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:07:23.220 09:56:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:07:23.220 09:56:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:07:23.220 09:56:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:07:23.220 09:56:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:07:23.220 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:07:23.220 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.154 ms
00:07:23.220
00:07:23.220 --- 10.0.0.2 ping statistics ---
00:07:23.220 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:07:23.220 rtt min/avg/max/mdev = 0.154/0.154/0.154/0.000 ms
00:07:23.220 09:56:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:07:23.220 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:07:23.220 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.128 ms
00:07:23.220
00:07:23.220 --- 10.0.0.1 ping statistics ---
00:07:23.220 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:07:23.220 rtt min/avg/max/mdev = 0.128/0.128/0.128/0.000 ms
00:07:23.220 09:56:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:07:23.220 09:56:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # return 0
00:07:23.220 09:56:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:07:23.220 09:56:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:07:23.220 09:56:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:07:23.220 09:56:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:07:23.220 09:56:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:07:23.220 09:56:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:07:23.220 09:56:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:07:23.220 09:56:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE
00:07:23.220 09:56:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:07:23.220 09:56:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@724 -- # xtrace_disable
00:07:23.220 09:56:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:07:23.220 09:56:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # nvmfpid=328228
00:07:23.220 09:56:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # waitforlisten 328228
00:07:23.220 09:56:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:07:23.220 09:56:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@831 -- # '[' -z 328228 ']'
00:07:23.220 09:56:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:07:23.220 09:56:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # local max_retries=100
00:07:23.220 09:56:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:07:23.220 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
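waitforlisten, whose xtrace is visible above, blocks until the freshly forked nvmf_tgt answers on /var/tmp/spdk.sock, giving up after max_retries (100 here). A hand-rolled equivalent, assuming rpc.py and the default socket path; the loop body is illustrative and not the harness's exact code:

  pid=328228 rpc_addr=/var/tmp/spdk.sock
  for ((i = 0; i < 100; i++)); do
      kill -0 "$pid" 2>/dev/null || exit 1    # target died during startup
      if ./scripts/rpc.py -s "$rpc_addr" rpc_get_methods >/dev/null 2>&1; then
          break                               # RPC server is up; safe to configure
      fi
      sleep 0.1
  done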
00:07:23.220 09:56:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # xtrace_disable
00:07:23.220 09:56:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:07:23.220 [2024-07-25 09:56:08.171886] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization...
00:07:23.220 [2024-07-25 09:56:08.171981] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:07:23.220 EAL: No free 2048 kB hugepages reported on node 1
00:07:23.220 [2024-07-25 09:56:08.255696] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3
00:07:23.220 [2024-07-25 09:56:08.381210] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:07:23.220 [2024-07-25 09:56:08.381279] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:07:23.220 [2024-07-25 09:56:08.381296] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:07:23.220 [2024-07-25 09:56:08.381310] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:07:23.220 [2024-07-25 09:56:08.381322] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:07:23.221 [2024-07-25 09:56:08.381392] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:07:23.221 [2024-07-25 09:56:08.381475] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3
00:07:23.221 [2024-07-25 09:56:08.381479] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:07:23.479 09:56:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:07:23.479 09:56:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # return 0
00:07:23.479 09:56:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:07:23.479 09:56:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@730 -- # xtrace_disable
00:07:23.479 09:56:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:07:23.479 09:56:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:07:23.479 09:56:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000
00:07:23.479 09:56:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
00:07:23.737 [2024-07-25 09:56:08.858028] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:07:23.737 09:56:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:07:24.302 09:56:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:07:24.559 [2024-07-25 09:56:09.614111] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:07:24.559 09:56:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:07:25.125 09:56:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0
00:07:25.690 Malloc0
00:07:25.690 09:56:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:07:25.948 Delay0
00:07:25.948 09:56:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:26.513 09:56:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512
00:07:26.770 NULL1
00:07:26.771 09:56:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
00:07:27.028 09:56:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=328665
00:07:27.028 09:56:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 328665
00:07:27.028 09:56:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000
00:07:27.028 09:56:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:27.028 EAL: No free 2048 kB hugepages reported on node 1
00:07:27.286 09:56:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:27.543 09:56:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001
00:07:27.543 09:56:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001
00:07:28.108 true
00:07:28.109 09:56:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 328665
00:07:28.109 09:56:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
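From here to the end of the excerpt the trace is one loop, driven by ns_hotplug_stress.sh lines 44-50: as long as the reader started above (spdk_nvme_perf, PID 328665) stays alive, the test hot-removes namespace 1 out from under it, reattaches it, and grows NULL1 by one unit per pass. Reconstructed as a sketch from the @44-@50 trace lines (the script's literal control flow may differ):

  while kill -0 "$PERF_PID" 2>/dev/null; do
      $rpc_py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1       # hot-remove under live I/O
      $rpc_py nvmf_subsystem_add_ns    nqn.2016-06.io.spdk:cnode1 Delay0  # hot-add it back
      null_size=$((null_size + 1))
      $rpc_py bdev_null_resize NULL1 "$null_size"                         # 1000 -> 1001 -> ...
  done

The "true" printed after each bdev_null_resize is the RPC's output confirming the resize took effect.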
00:07:29.480 Read completed with error (sct=0, sc=11)
00:07:29.480 09:56:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:29.480 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:07:29.738 09:56:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002
00:07:29.738 09:56:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002
00:07:29.995 true
00:07:29.995 09:56:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 328665
00:07:29.995 09:56:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:30.964 09:56:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:30.964 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:07:30.964 09:56:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003
00:07:30.964 09:56:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003
00:07:31.529 true
00:07:31.529 09:56:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 328665
00:07:31.529 09:56:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:32.095 09:56:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:32.095 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:07:32.352 09:56:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004
00:07:32.352 09:56:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004
00:07:32.917 true
00:07:32.917 09:56:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 328665
00:07:32.917 09:56:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:33.482 09:56:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:33.482 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:07:33.740 09:56:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005
00:07:33.740 09:56:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005
00:07:33.997 true
00:07:33.997 09:56:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 328665
00:07:33.997 09:56:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:34.930 09:56:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:34.930 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:07:35.187 09:56:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006
00:07:35.187 09:56:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006
00:07:35.753 true
00:07:35.753 09:56:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 328665
00:07:35.753 09:56:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:37.125 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:07:37.125 09:56:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:37.125 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:07:37.383 09:56:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007
00:07:37.383 09:56:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007
00:07:37.948 true
00:07:37.948 09:56:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 328665
00:07:37.948 09:56:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:38.512 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:07:38.512 09:56:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:38.512 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:07:39.028 09:56:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008
00:07:39.028 09:56:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008
00:07:39.285 true
00:07:39.285 09:56:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 328665
00:07:39.285 09:56:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:40.219 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:07:40.219 09:56:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:40.219 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:07:40.476 09:56:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009
00:07:40.476 09:56:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009
00:07:40.734 true
00:07:40.734 09:56:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 328665
00:07:40.734 09:56:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:41.299 09:56:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:41.864 09:56:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010
00:07:41.864 09:56:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010
00:07:42.429 true
00:07:42.429 09:56:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 328665
00:07:42.429 09:56:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:43.362 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:07:43.362 09:56:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:43.619 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:07:43.876 09:56:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011
00:07:43.876 09:56:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011
00:07:44.134 true
00:07:44.134 09:56:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 328665
00:07:44.134 09:56:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:44.700 09:56:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:44.958 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:07:45.215 09:56:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012
00:07:45.215 09:56:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012
00:07:45.780 true
00:07:45.780 09:56:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 328665
00:07:45.780 09:56:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:46.376 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:07:46.376 09:56:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:46.377 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:07:46.889 09:56:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013
00:07:46.889 09:56:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013
00:07:47.145 true
00:07:47.146 09:56:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 328665
00:07:47.146 09:56:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:47.707 09:56:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:47.963 09:56:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014
00:07:47.963 09:56:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014
00:07:48.220 true
00:07:48.220 09:56:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 328665
00:07:48.220 09:56:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:48.477 09:56:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:07:49.041 09:56:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015
00:07:49.041 09:56:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015
00:07:49.298 true
00:07:49.298 09:56:34
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 328665 00:07:49.298 09:56:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:49.863 09:56:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:49.863 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:49.863 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:49.863 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:49.863 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:49.863 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:49.863 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:50.121 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:50.121 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:50.121 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:50.121 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:50.121 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:50.121 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:50.121 09:56:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:07:50.121 09:56:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:07:50.684 true 00:07:50.684 09:56:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 328665 00:07:50.684 09:56:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:51.248 09:56:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:51.248 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:51.248 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:51.248 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:51.248 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:51.506 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:51.506 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:51.506 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:51.506 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:51.506 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:51.506 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:51.763 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:51.763 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:51.763 09:56:36 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:07:51.763 09:56:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:07:52.019 true 00:07:52.019 09:56:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 328665 00:07:52.019 09:56:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:52.950 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:52.950 09:56:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:52.950 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:52.950 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:52.950 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:52.950 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:52.950 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:52.950 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:52.950 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:53.207 09:56:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:07:53.207 09:56:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:07:53.463 true 00:07:53.463 09:56:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 328665 00:07:53.463 09:56:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:53.721 09:56:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:54.285 09:56:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:07:54.285 09:56:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:07:54.542 true 00:07:54.542 09:56:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 328665 00:07:54.542 09:56:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:54.800 09:56:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:55.363 09:56:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # 
null_size=1020 00:07:55.363 09:56:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:07:55.620 true 00:07:55.620 09:56:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 328665 00:07:55.620 09:56:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:55.877 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:56.442 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:07:56.442 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:07:57.008 true 00:07:57.008 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 328665 00:07:57.008 09:56:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:57.265 09:56:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:57.265 09:56:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:07:57.265 09:56:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:07:57.523 Initializing NVMe Controllers 00:07:57.523 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:57.523 Controller IO queue size 128, less than required. 00:07:57.523 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:57.523 Controller IO queue size 128, less than required. 00:07:57.523 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:57.523 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:07:57.523 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:07:57.523 Initialization complete. Launching workers. 
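For reference, the loop driving the @44-@50 entries above can be sketched in bash. This is a minimal reconstruction read off the trace, not a quote of ns_hotplug_stress.sh; "$rpc", "$perf_pid", and the starting value are illustrative assumptions:

  # $perf_pid is the background I/O generator (PID 328665 in this run);
  # $rpc points at spdk/scripts/rpc.py.
  null_size=1000
  while kill -0 "$perf_pid"; do                                        # line 44: loop while I/O is in flight
      "$rpc" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1     # line 45: hot-remove NSID 1
      "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0   # line 46: hot-add it back
      null_size=$((null_size + 1))                                     # line 49
      "$rpc" bdev_null_resize NULL1 "$null_size"                       # line 50: resize NULL1 to $null_size MB
  done

The kill -0 condition is what eventually terminates the loop: once the I/O generator exits, the check fails (visible below as "kill: (328665) - No such process").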
00:07:57.523 ========================================================
00:07:57.523                                                                             Latency(us)
00:07:57.523 Device Information                                                      :       IOPS      MiB/s    Average        min        max
00:07:57.523 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:    4609.52       2.25   17212.73    1682.59 1076245.37
00:07:57.523 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0:   12936.56       6.32    9864.85    1603.20  364695.52
00:07:57.523 ========================================================
00:07:57.523 Total                                                                   :   17546.09       8.57   11795.21    1603.20 1076245.37
00:07:57.523
00:07:57.523 true
00:07:57.523 09:56:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 328665
00:07:57.523 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (328665) - No such process
00:07:57.523 09:56:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 328665
00:07:57.523 09:56:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:07:57.781 09:56:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
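As a quick consistency check on the summary table above: the Total row's IOPS is the sum of the two namespaces (4609.52 + 12936.56 = 17546.08, matching the reported 17546.09 up to rounding), and its Average latency is the IOPS-weighted mean of the per-namespace averages, (4609.52 * 17212.73 + 12936.56 * 9864.85) / 17546.09, which works out to about 11795 us, matching the reported 11795.21. The Total min and max columns are likewise the element-wise minimum (1603.20) and maximum (1076245.37) across the two rows.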
00:07:58.038 09:56:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:07:58.038 09:56:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:07:58.038 09:56:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096
00:07:58.602 null0
[the line 59-60 loop creates the remaining backing devices the same way, each confirmed by name on completion: null1 (00:07:58.859), null2 (00:07:59.423), null3 (00:07:59.682), null4 (00:07:59.940), null5 (00:08:00.198), null6 (00:08:00.750), null7 (00:08:01.316); the interleaved line 59 counter checks are omitted here]
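The eight backing devices created above are SPDK null bdevs; bdev_null_create takes a name, a total size in MB, and a block size in bytes, so each of null0 through null7 is a 100 MB device with 4096-byte blocks. A standalone reproduction of this setup step (path as in this job, loop bound read off the trace):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  for ((i = 0; i < 8; i++)); do
      "$rpc" bdev_null_create "null$i" 100 4096   # name, size (MB), block size (bytes)
  done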
00:08:01.316 09:56:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:01.316 09:56:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:01.316 09:56:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:08:01.316 09:56:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:01.316 09:56:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:08:01.316 09:56:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:01.316 09:56:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:01.316 09:56:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:08:01.316 09:56:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:01.316 09:56:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:01.316 09:56:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:08:01.316 09:56:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:01.316 09:56:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:01.317 09:56:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:01.317 09:56:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:01.317 09:56:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:08:01.317 09:56:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:01.317 09:56:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:01.317 09:56:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:01.317 09:56:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:08:01.317 09:56:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:01.317 09:56:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:08:01.317 09:56:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:01.317 09:56:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:08:01.317 09:56:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:01.317 09:56:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:01.317 09:56:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:01.317 09:56:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:08:01.317 09:56:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:01.317 09:56:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:01.317 09:56:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:08:01.317 09:56:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:01.317 09:56:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:01.317 09:56:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:01.317 09:56:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:01.317 09:56:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:01.317 09:56:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:08:01.317 09:56:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:08:01.317 09:56:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:01.317 09:56:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:01.317 09:56:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:01.317 09:56:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:01.317 09:56:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:01.317 09:56:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:08:01.317 09:56:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:08:01.317 09:56:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:01.317 09:56:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:08:01.317 09:56:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:01.317 09:56:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:01.317 09:56:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:01.317 09:56:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:01.317 09:56:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:01.317 09:56:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:08:01.317 09:56:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:01.317 09:56:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:08:01.317 09:56:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:01.317 09:56:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:01.317 09:56:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 332712 332713 332714 332716 332719 332721 332723 332725 00:08:01.317 09:56:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:01.317 09:56:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:01.575 09:56:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:01.575 09:56:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:01.575 09:56:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:01.575 09:56:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:01.575 09:56:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:01.575 09:56:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
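The interleaved worker output that follows comes from those eight backgrounded invocations of the script's add_remove helper. A minimal bash reconstruction from the @14-@18 and @58-@66 trace lines (function body and launcher are inferred from the trace, not quoted from the script; "$rpc" as above):

  add_remove() {
      local nsid=$1 bdev=$2                                                           # line 14
      for ((i = 0; i < 10; i++)); do                                                  # line 16
          "$rpc" nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"  # line 17: hot-add
          "$rpc" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"          # line 18: hot-remove
      done
  }

  nthreads=8
  pids=()                                # line 58
  for ((i = 0; i < nthreads; i++)); do   # line 62
      add_remove $((i + 1)) "null$i" &   # line 63: one worker per namespace ID
      pids+=($!)                         # line 64
  done
  wait "${pids[@]}"                      # line 66: PIDs 332712..332725 in this run

Because the workers run concurrently against the same subsystem, their add/remove calls land in arbitrary order, which is exactly the namespace hot-plug race this stress test exercises.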
00:08:01.575 09:56:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:01.575 09:56:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:08:01.575 09:56:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
[the eight workers then race through their remaining rounds: in each round every worker's counter check (line 16) is followed by nvmf_subsystem_add_ns -n <nsid> nqn.2016-06.io.spdk:cnode1 null<nsid-1> (line 17) and nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 <nsid> (line 18), the eight workers' entries interleaving in varying order; the trace repeats this pattern from 00:08:01.834 (09:56:46) through 00:08:04.935 (09:56:49)]
00:08:04.935 09:56:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:08:04.935 09:56:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- #
(( ++i )) 00:08:04.935 09:56:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:04.935 09:56:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:04.935 09:56:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:04.935 09:56:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:04.935 09:56:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:04.935 09:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:05.193 09:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:05.193 09:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:05.193 09:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:05.193 09:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:05.193 09:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:05.193 09:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:05.193 09:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:05.193 09:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:05.193 09:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:05.193 09:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:05.451 09:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:05.451 09:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:05.451 09:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:05.451 09:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:05.451 09:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:05.451 09:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:05.451 09:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:05.451 09:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:05.451 09:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:05.451 09:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:05.451 09:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:05.451 09:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:05.451 09:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:05.451 09:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:05.451 09:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:05.451 09:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:05.451 09:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:05.451 09:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:05.451 09:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:05.452 09:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:05.452 09:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:05.452 09:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:05.709 09:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:05.709 09:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:05.709 09:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:05.709 09:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:05.709 09:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:05.709 09:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:05.709 09:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:05.709 09:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:05.967 09:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:05.967 09:56:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:05.967 09:56:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:05.967 09:56:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:05.967 09:56:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:05.967 09:56:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:05.967 09:56:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:05.967 09:56:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:05.967 09:56:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:05.967 09:56:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:05.967 09:56:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:05.967 09:56:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:05.967 09:56:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:05.967 09:56:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:05.967 09:56:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:05.967 09:56:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:05.967 09:56:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:05.967 09:56:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:06.225 09:56:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:06.225 09:56:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:06.225 09:56:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:06.225 09:56:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:06.225 09:56:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:06.225 09:56:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:06.225 09:56:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:06.225 09:56:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:06.225 09:56:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:06.483 09:56:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:06.483 09:56:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:06.483 09:56:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:06.483 09:56:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:06.484 09:56:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:06.484 09:56:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:06.484 09:56:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:06.484 09:56:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:06.484 09:56:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:06.484 09:56:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:06.484 09:56:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:06.484 09:56:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:06.484 09:56:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:06.741 09:56:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:06.741 09:56:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:06.741 09:56:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:06.741 09:56:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:06.741 09:56:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:06.741 09:56:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:06.741 09:56:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:06.741 09:56:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:06.741 09:56:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:06.741 09:56:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:06.741 09:56:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:06.741 09:56:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:06.741 09:56:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:06.741 09:56:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- 
# (( ++i )) 00:08:06.741 09:56:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:06.741 09:56:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:06.741 09:56:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:06.998 09:56:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:06.999 09:56:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:06.999 09:56:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:06.999 09:56:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:06.999 09:56:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:06.999 09:56:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:06.999 09:56:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:06.999 09:56:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:06.999 09:56:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:06.999 09:56:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:07.256 09:56:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:07.256 09:56:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:07.256 09:56:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:07.256 09:56:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:07.256 09:56:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:07.256 09:56:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:07.256 09:56:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:07.256 09:56:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:07.256 09:56:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:08:07.256 09:56:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:07.256 09:56:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:07.256 09:56:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:07.256 09:56:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:08:07.256 09:56:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:08:07.256 09:56:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:07.256 09:56:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # sync 00:08:07.256 09:56:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:07.256 09:56:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@120 -- # set +e 00:08:07.256 09:56:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:07.256 09:56:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:07.256 rmmod nvme_tcp 00:08:07.256 rmmod nvme_fabrics 00:08:07.256 rmmod nvme_keyring 00:08:07.256 09:56:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:07.256 09:56:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set -e 00:08:07.256 09:56:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # return 0 00:08:07.256 09:56:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # '[' -n 328228 ']' 00:08:07.256 09:56:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # killprocess 328228 00:08:07.256 09:56:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@950 -- # '[' -z 328228 ']' 00:08:07.256 09:56:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # kill -0 328228 00:08:07.256 09:56:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # uname 00:08:07.256 09:56:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:07.256 09:56:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 328228 00:08:07.513 09:56:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:08:07.513 09:56:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:08:07.513 09:56:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@968 -- # echo 'killing process with pid 328228' 00:08:07.513 killing process with pid 328228 00:08:07.513 09:56:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@969 -- # kill 328228 00:08:07.513 09:56:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@974 -- # wait 328228 00:08:07.771 09:56:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:07.771 09:56:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:07.771 
09:56:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:07.771 09:56:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:07.771 09:56:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:07.771 09:56:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:07.771 09:56:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:07.771 09:56:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:09.684 09:56:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:09.684 00:08:09.684 real 0m49.162s 00:08:09.684 user 3m45.407s 00:08:09.684 sys 0m18.613s 00:08:09.684 09:56:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:09.684 09:56:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:09.684 ************************************ 00:08:09.684 END TEST nvmf_ns_hotplug_stress 00:08:09.684 ************************************ 00:08:09.684 09:56:54 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:08:09.684 09:56:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:09.684 09:56:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:09.684 09:56:54 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:09.684 ************************************ 00:08:09.684 START TEST nvmf_delete_subsystem 00:08:09.684 ************************************ 00:08:09.684 09:56:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:08:09.942 * Looking for test storage... 
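Just before the END TEST banner above, nvmftestfini unwound the stress test's environment: the nvme-tcp and nvme-fabrics kernel modules come out (the rmmod lines for nvme_tcp, nvme_fabrics, and nvme_keyring are modprobe's verbose output), the nvmf_tgt application (pid 328228 here) is killed, and the namespace plumbing is flushed. A condensed sketch of those steps; the body of _remove_spdk_ns runs with tracing suppressed in this log, so the netns deletion line is an assumption:

    sync
    for i in {1..20}; do                     # retry until module users drain
        modprobe -v -r nvme-tcp && break     # also drops nvme_fabrics/nvme_keyring
    done
    modprobe -v -r nvme-fabrics || true
    kill 328228 && wait 328228               # killprocess: stop the nvmf_tgt app
    ip netns delete cvl_0_0_ns_spdk          # assumed body of _remove_spdk_ns
    ip -4 addr flush cvl_0_1

run_test then launches the next suite, nvmf_delete_subsystem, whose test-storage probe continues below.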
00:08:09.942 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:09.942 09:56:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:09.942 09:56:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:08:09.942 09:56:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:09.942 09:56:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:09.942 09:56:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:09.942 09:56:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:09.942 09:56:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:09.942 09:56:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:09.942 09:56:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:09.942 09:56:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:09.942 09:56:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:09.942 09:56:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:09.942 09:56:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:08:09.942 09:56:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:08:09.942 09:56:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:09.942 09:56:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:09.942 09:56:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:09.942 09:56:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:09.942 09:56:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:09.942 09:56:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:09.942 09:56:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:09.942 09:56:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:09.942 09:56:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:09.942 09:56:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:09.943 09:56:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:09.943 09:56:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:08:09.943 09:56:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:09.943 09:56:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # : 0 00:08:09.943 09:56:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:09.943 09:56:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:09.943 09:56:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:09.943 09:56:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:09.943 09:56:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:09.943 09:56:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:09.943 09:56:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:09.943 09:56:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:09.943 09:56:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:08:09.943 09:56:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:09.943 09:56:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:09.943 09:56:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:09.943 09:56:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:09.943 09:56:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:09.943 09:56:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:09.943 09:56:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:09.943 09:56:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:09.943 09:56:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:09.943 09:56:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:09.943 09:56:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@285 -- # xtrace_disable 00:08:09.943 09:56:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:12.477 09:56:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:12.477 09:56:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # pci_devs=() 00:08:12.477 09:56:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:12.477 09:56:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:12.477 09:56:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:12.477 09:56:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:12.477 09:56:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:12.477 09:56:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # net_devs=() 00:08:12.477 09:56:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:12.477 09:56:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # e810=() 00:08:12.477 09:56:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # local -ga e810 00:08:12.477 09:56:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # x722=() 00:08:12.477 09:56:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # local -ga x722 00:08:12.477 09:56:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # mlx=() 00:08:12.477 09:56:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # local -ga mlx 
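The empty e810/x722/mlx arrays declared just above are filled in the trace that follows from a PCI cache keyed by vendor:device ID; the two 0x8086:0x159b ports found at 0000:84:00.0/1 put this run on E810 hardware. A trimmed sketch of that bucketing, assuming pci_bus_cache maps "vendor:device" strings to bus addresses as the trace suggests; the model names in the comments are my reading of the device IDs, not something the log states:

    intel=0x8086 mellanox=0x15b3
    e810+=(${pci_bus_cache["$intel:0x1592"]})    # E810-C
    e810+=(${pci_bus_cache["$intel:0x159b"]})    # E810-XXV, the ports matched here
    x722+=(${pci_bus_cache["$intel:0x37d2"]})    # X722
    mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})  # plus several more ConnectX/
    mlx+=(${pci_bus_cache["$mellanox:0x101d"]})  # BlueField IDs in the full list
    pci_devs+=("${e810[@]}")                     # TCP run: the E810 devices are kept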
00:08:12.477 09:56:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:12.477 09:56:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:12.477 09:56:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:12.477 09:56:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:12.477 09:56:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:12.477 09:56:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:12.477 09:56:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:12.477 09:56:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:12.477 09:56:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:12.477 09:56:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:12.477 09:56:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:12.477 09:56:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:12.477 09:56:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:12.477 09:56:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:12.477 09:56:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:12.477 09:56:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:12.477 09:56:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:12.477 09:56:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:12.477 09:56:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:08:12.477 Found 0000:84:00.0 (0x8086 - 0x159b) 00:08:12.477 09:56:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:12.477 09:56:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:12.477 09:56:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:12.477 09:56:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:12.477 09:56:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:12.477 09:56:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:12.477 09:56:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:08:12.477 Found 0000:84:00.1 (0x8086 - 0x159b) 00:08:12.477 09:56:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:12.477 09:56:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:12.477 09:56:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:12.477 09:56:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:12.477 09:56:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:12.477 09:56:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:12.477 09:56:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:12.477 09:56:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:12.477 09:56:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:12.477 09:56:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:12.477 09:56:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:12.477 09:56:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:12.477 09:56:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:12.477 09:56:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:12.477 09:56:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:12.477 09:56:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:08:12.477 Found net devices under 0000:84:00.0: cvl_0_0 00:08:12.477 09:56:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:12.477 09:56:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:12.477 09:56:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:12.477 09:56:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:12.477 09:56:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:12.477 09:56:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:12.477 09:56:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:12.477 09:56:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:12.477 09:56:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:08:12.477 Found net devices under 0000:84:00.1: cvl_0_1 00:08:12.477 09:56:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:12.477 09:56:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:12.477 09:56:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # is_hw=yes 00:08:12.477 09:56:57 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:12.477 09:56:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:12.477 09:56:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:12.477 09:56:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:12.477 09:56:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:12.477 09:56:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:12.477 09:56:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:12.477 09:56:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:12.477 09:56:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:12.477 09:56:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:12.477 09:56:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:12.477 09:56:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:12.477 09:56:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:12.477 09:56:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:12.477 09:56:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:12.477 09:56:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:12.477 09:56:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:12.477 09:56:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:12.478 09:56:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:12.478 09:56:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:12.478 09:56:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:12.478 09:56:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:12.478 09:56:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:12.478 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:12.478 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.184 ms 00:08:12.478 00:08:12.478 --- 10.0.0.2 ping statistics --- 00:08:12.478 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:12.478 rtt min/avg/max/mdev = 0.184/0.184/0.184/0.000 ms 00:08:12.478 09:56:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:12.478 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:12.478 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.120 ms 00:08:12.478 00:08:12.478 --- 10.0.0.1 ping statistics --- 00:08:12.478 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:12.478 rtt min/avg/max/mdev = 0.120/0.120/0.120/0.000 ms 00:08:12.478 09:56:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:12.478 09:56:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # return 0 00:08:12.478 09:56:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:12.478 09:56:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:12.478 09:56:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:12.478 09:56:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:12.478 09:56:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:12.478 09:56:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:12.478 09:56:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:12.478 09:56:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:08:12.478 09:56:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:12.478 09:56:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:12.478 09:56:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:12.478 09:56:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # nvmfpid=335615 00:08:12.478 09:56:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:08:12.478 09:56:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # waitforlisten 335615 00:08:12.478 09:56:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@831 -- # '[' -z 335615 ']' 00:08:12.478 09:56:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:12.478 09:56:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:12.478 09:56:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:12.478 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:12.478 09:56:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:12.478 09:56:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:12.478 [2024-07-25 09:56:57.497715] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
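The one-packet pings above close out nvmf_tcp_init: one physical port (cvl_0_0) was moved into a private network namespace to act as the target at 10.0.0.2, while the other (cvl_0_1) stayed in the root namespace as the initiator at 10.0.0.1, so both ends of the NVMe/TCP connection run over real hardware on the same host. Condensed from the commands traced above:

    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # target port leaves the root ns
    ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                  # root ns -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target ns -> initiator

nvmf_tgt is then started inside that namespace (ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt -i 0 -e 0xFFFF -m 0x3), so it can bind 10.0.0.2 directly.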
00:08:12.478 [2024-07-25 09:56:57.497810] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:12.478 EAL: No free 2048 kB hugepages reported on node 1 00:08:12.478 [2024-07-25 09:56:57.577997] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:12.735 [2024-07-25 09:56:57.702897] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:12.735 [2024-07-25 09:56:57.702963] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:12.735 [2024-07-25 09:56:57.702980] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:12.735 [2024-07-25 09:56:57.702994] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:12.736 [2024-07-25 09:56:57.703005] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:12.736 [2024-07-25 09:56:57.703065] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:12.736 [2024-07-25 09:56:57.703071] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:12.736 09:56:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:12.736 09:56:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # return 0 00:08:12.736 09:56:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:12.736 09:56:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:12.736 09:56:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:12.736 09:56:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:12.736 09:56:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:12.736 09:56:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.736 09:56:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:12.736 [2024-07-25 09:56:57.857786] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:12.736 09:56:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.736 09:56:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:12.736 09:56:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.736 09:56:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:12.736 09:56:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.736 09:56:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:12.736 09:56:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.736 09:56:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:12.736 [2024-07-25 09:56:57.874025] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:12.736 09:56:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.736 09:56:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:08:12.736 09:56:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.736 09:56:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:12.736 NULL1 00:08:12.736 09:56:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.736 09:56:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:08:12.736 09:56:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.736 09:56:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:12.736 Delay0 00:08:12.736 09:56:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.736 09:56:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:12.736 09:56:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:12.736 09:56:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:12.736 09:56:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:12.993 09:56:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=335658 00:08:12.993 09:56:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:08:12.993 09:56:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:08:12.993 EAL: No free 2048 kB hugepages reported on node 1 00:08:12.993 [2024-07-25 09:56:57.958784] subsystem.c:1572:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
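At this point the trace has built the entire fixture: the data path lives in the cvl_0_0_ns_spdk network namespace, nvmf_tgt listens on 10.0.0.2:4420, and the namespace backing nqn.2016-06.io.spdk:cnode1 is a delay bdev (Delay0) layered on a null bdev (NULL1) with roughly one second of injected latency per I/O, so the spdk_nvme_perf workload just launched is guaranteed to still have commands in flight when the subsystem is torn down. A minimal standalone sketch of that sequence, assuming a running nvmf_tgt and SPDK's scripts/rpc.py on the default RPC socket (the netns wrapper and the absolute harness paths are deliberately left out):

  #!/usr/bin/env bash
  # Sketch: provision a TCP subsystem whose namespace is a delay bdev,
  # start a perf workload against it, then delete the subsystem mid-I/O.
  RPC=./scripts/rpc.py                    # assumed checkout-relative path
  NQN=nqn.2016-06.io.spdk:cnode1

  $RPC nvmf_create_transport -t tcp -o -u 8192
  $RPC nvmf_create_subsystem "$NQN" -a -s SPDK00000000000001 -m 10
  $RPC nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
  $RPC bdev_null_create NULL1 1000 512    # 1000 MB backing, 512-byte blocks
  $RPC bdev_delay_create -b NULL1 -d Delay0 \
      -r 1000000 -t 1000000 -w 1000000 -n 1000000   # latencies in usec, ~1 s
  $RPC nvmf_subsystem_add_ns "$NQN" Delay0

  ./build/bin/spdk_nvme_perf -c 0xC \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
      -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
  perf_pid=$!

  sleep 2                                 # let queue depth 128 fill behind Delay0
  $RPC nvmf_delete_subsystem "$NQN"       # delete while I/O is still queued

In the flood of completions that follows, sct=0/sc=8 matches the NVMe generic status "Command Aborted due to SQ Deletion" (08h), and the -6 reported by the submission path is -ENXIO as the qpairs disappear, which is exactly the behavior this test is after.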
00:08:14.890 09:56:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:14.890 09:56:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:14.890 09:56:59 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 
00:08:15.148 Write completed with error (sct=0, sc=8) 
00:08:15.148 starting I/O failed: -6 
00:08:15.148 Read completed with error (sct=0, sc=8) 
00:08:15.148 [the Read/Write 'completed with error (sct=0, sc=8)' completions and 'starting I/O failed: -6' submissions above repeat for every other command still queued on the deleted subsystem] 
00:08:15.148 [2024-07-25 09:57:00.104758] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1b8f0 is same with the state(5) to be set 
00:08:15.149 [2024-07-25 09:57:00.106103] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f96e0000c00 is same with the state(5) to be set 
00:08:16.081 [2024-07-25 09:57:01.056699] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1cac0 is same with the state(5) to be set 
00:08:16.082 [2024-07-25 09:57:01.103239] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f96e000d660 is same with the state(5) to be set 
00:08:16.082 [2024-07-25 09:57:01.105551] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1bc20 is same with the state(5) to be set 
00:08:16.082 [2024-07-25 09:57:01.106914] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc1b5c0 is same with the state(5) to be set 
00:08:16.082 [2024-07-25 09:57:01.108085] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f96e000d000 is same with the state(5) to be set 
00:08:16.082 Initializing NVMe Controllers 00:08:16.082 
Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:16.082 Controller IO queue size 128, less than required. 00:08:16.082 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:16.082 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:08:16.082 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:08:16.082 Initialization complete. Launching workers. 00:08:16.082 ======================================================== 00:08:16.082 Latency(us) 00:08:16.082 Device Information : IOPS MiB/s Average min max 00:08:16.082 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 168.19 0.08 899609.13 572.29 1013024.00 00:08:16.082 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 185.55 0.09 908732.67 653.64 1013096.66 00:08:16.082 ======================================================== 00:08:16.082 Total : 353.74 0.17 904394.83 572.29 1013096.66 00:08:16.082 00:08:16.082 [2024-07-25 09:57:01.108617] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc1cac0 (9): Bad file descriptor 00:08:16.082 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:08:16.082 09:57:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:16.082 09:57:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:08:16.082 09:57:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 335658 00:08:16.082 09:57:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:08:16.665 09:57:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:08:16.665 09:57:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 335658 00:08:16.665 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (335658) - No such process 00:08:16.665 09:57:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 335658 00:08:16.665 09:57:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # local es=0 00:08:16.665 09:57:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # valid_exec_arg wait 335658 00:08:16.665 09:57:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@638 -- # local arg=wait 00:08:16.665 09:57:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:16.665 09:57:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # type -t wait 00:08:16.665 09:57:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:16.665 09:57:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # wait 335658 00:08:16.665 09:57:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # es=1 00:08:16.665 09:57:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:16.665 09:57:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:16.665 09:57:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:16.665 09:57:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:16.666 09:57:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:16.666 09:57:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:16.666 09:57:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:16.666 09:57:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:16.666 09:57:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:16.666 09:57:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:16.666 [2024-07-25 09:57:01.632099] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:16.666 09:57:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:16.666 09:57:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:16.666 09:57:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:16.666 09:57:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:16.666 09:57:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:16.666 09:57:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=336248 00:08:16.666 09:57:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:08:16.666 09:57:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:08:16.666 09:57:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 336248 00:08:16.666 09:57:01 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:16.666 EAL: No free 2048 kB hugepages reported on node 1 00:08:16.666 [2024-07-25 09:57:01.705298] subsystem.c:1572:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
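The trace now enters the second phase of the test: the subsystem has been re-created, perf has been relaunched with -t 3, and this time the workload is left to run its course (the summary further below shows every I/O completing, with the ~1.0 s latency injected by Delay0) while the harness polls until the perf process exits. Reconstructed from the delete_subsystem.sh trace lines that follow (kill -0, sleep 0.5, give up past 20 iterations), the bounded-wait idiom looks roughly like this sketch, where $perf_pid and the failure handling are illustrative rather than copied from the script:

  # kill -0 delivers no signal; it only reports whether the PID still exists.
  delay=0
  while kill -0 "$perf_pid" 2>/dev/null; do
      if (( delay++ > 20 )); then   # ~10 s budget at 0.5 s per poll
          echo "spdk_nvme_perf did not exit in time" >&2
          exit 1
      fi
      sleep 0.5
  done

Once the PID is gone, kill -0 fails and the loop falls through to teardown; the "No such process" message later in the log is that probe hitting the already-exited PID, printed because the harness does not redirect the shell's kill error output.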
00:08:17.230 09:57:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:17.230 09:57:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 336248 00:08:17.230 09:57:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:17.489 09:57:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:17.489 09:57:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 336248 00:08:17.489 09:57:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:18.053 09:57:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:18.053 09:57:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 336248 00:08:18.053 09:57:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:18.618 09:57:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:18.618 09:57:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 336248 00:08:18.618 09:57:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:19.183 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:19.183 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 336248 00:08:19.183 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:19.748 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:19.748 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 336248 00:08:19.748 09:57:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:19.748 Initializing NVMe Controllers 00:08:19.748 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:19.748 Controller IO queue size 128, less than required. 00:08:19.748 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:19.748 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:08:19.748 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:08:19.748 Initialization complete. Launching workers. 
00:08:19.748 ======================================================== 00:08:19.748 Latency(us) 00:08:19.748 Device Information : IOPS MiB/s Average min max 00:08:19.748 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1004532.19 1000204.14 1012661.87 00:08:19.748 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1004318.85 1000232.42 1011347.33 00:08:19.748 ======================================================== 00:08:19.748 Total : 256.00 0.12 1004425.52 1000204.14 1012661.87 00:08:19.748 00:08:20.006 09:57:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:20.006 09:57:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 336248 00:08:20.006 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (336248) - No such process 00:08:20.006 09:57:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 336248 00:08:20.006 09:57:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:08:20.006 09:57:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:08:20.006 09:57:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:20.006 09:57:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # sync 00:08:20.006 09:57:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:20.006 09:57:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@120 -- # set +e 00:08:20.006 09:57:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:20.006 09:57:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:20.006 rmmod nvme_tcp 00:08:20.264 rmmod nvme_fabrics 00:08:20.264 rmmod nvme_keyring 00:08:20.264 09:57:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:20.264 09:57:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set -e 00:08:20.264 09:57:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # return 0 00:08:20.264 09:57:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # '[' -n 335615 ']' 00:08:20.264 09:57:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # killprocess 335615 00:08:20.264 09:57:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@950 -- # '[' -z 335615 ']' 00:08:20.264 09:57:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # kill -0 335615 00:08:20.264 09:57:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # uname 00:08:20.264 09:57:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:20.264 09:57:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 335615 00:08:20.264 09:57:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:20.264 09:57:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo 
']' 00:08:20.264 09:57:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@968 -- # echo 'killing process with pid 335615' 00:08:20.264 killing process with pid 335615 00:08:20.264 09:57:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@969 -- # kill 335615 00:08:20.264 09:57:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@974 -- # wait 335615 00:08:20.522 09:57:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:20.522 09:57:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:20.522 09:57:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:20.522 09:57:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:20.522 09:57:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:20.522 09:57:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:20.522 09:57:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:20.522 09:57:05 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:22.427 09:57:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:22.686 00:08:22.686 real 0m12.757s 00:08:22.686 user 0m27.867s 00:08:22.686 sys 0m3.379s 00:08:22.686 09:57:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:22.686 09:57:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:22.686 ************************************ 00:08:22.686 END TEST nvmf_delete_subsystem 00:08:22.686 ************************************ 00:08:22.686 09:57:07 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:08:22.686 09:57:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:22.686 09:57:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:22.686 09:57:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:22.686 ************************************ 00:08:22.686 START TEST nvmf_host_management 00:08:22.686 ************************************ 00:08:22.686 09:57:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:08:22.686 * Looking for test storage... 
00:08:22.686 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:22.686 09:57:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:22.686 09:57:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:08:22.686 09:57:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:22.686 09:57:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:22.686 09:57:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:22.686 09:57:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:22.686 09:57:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:22.686 09:57:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:22.686 09:57:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:22.686 09:57:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:22.686 09:57:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:22.686 09:57:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:22.686 09:57:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:08:22.686 09:57:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:08:22.686 09:57:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:22.686 09:57:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:22.687 09:57:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:22.687 09:57:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:22.687 09:57:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:22.687 09:57:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:22.687 09:57:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:22.687 09:57:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:22.687 09:57:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:22.687 09:57:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:22.687 09:57:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:22.687 09:57:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:08:22.687 09:57:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:22.687 09:57:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:08:22.687 09:57:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:22.687 09:57:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:22.687 09:57:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:22.687 09:57:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:22.687 09:57:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:22.687 09:57:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 
-- # '[' -n '' ']' 00:08:22.687 09:57:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:22.687 09:57:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:22.687 09:57:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:22.687 09:57:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:22.687 09:57:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:08:22.687 09:57:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:22.687 09:57:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:22.687 09:57:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:22.687 09:57:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:22.687 09:57:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:22.687 09:57:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:22.687 09:57:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:22.687 09:57:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:22.687 09:57:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:22.687 09:57:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:22.687 09:57:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@285 -- # xtrace_disable 00:08:22.687 09:57:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:25.217 09:57:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:25.217 09:57:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # pci_devs=() 00:08:25.217 09:57:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:25.217 09:57:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:25.217 09:57:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:25.217 09:57:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:25.217 09:57:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:25.217 09:57:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@295 -- # net_devs=() 00:08:25.217 09:57:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:25.217 09:57:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@296 -- # e810=() 00:08:25.217 09:57:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@296 -- # local -ga e810 00:08:25.217 09:57:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # x722=() 00:08:25.217 09:57:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # local -ga x722 00:08:25.217 
09:57:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # mlx=() 00:08:25.217 09:57:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # local -ga mlx 00:08:25.217 09:57:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:25.217 09:57:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:25.217 09:57:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:25.217 09:57:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:25.217 09:57:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:25.217 09:57:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:25.217 09:57:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:25.217 09:57:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:25.217 09:57:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:25.217 09:57:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:25.217 09:57:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:25.217 09:57:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:25.217 09:57:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:25.217 09:57:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:25.217 09:57:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:25.217 09:57:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:25.217 09:57:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:25.217 09:57:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:25.217 09:57:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:08:25.217 Found 0000:84:00.0 (0x8086 - 0x159b) 00:08:25.217 09:57:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:25.217 09:57:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:25.217 09:57:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:25.217 09:57:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:25.217 09:57:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:25.217 09:57:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:25.217 09:57:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 
-- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:08:25.217 Found 0000:84:00.1 (0x8086 - 0x159b) 00:08:25.217 09:57:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:25.217 09:57:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:25.217 09:57:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:25.217 09:57:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:25.217 09:57:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:25.217 09:57:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:25.217 09:57:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:25.217 09:57:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:25.217 09:57:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:25.217 09:57:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:25.217 09:57:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:25.217 09:57:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:25.217 09:57:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:25.217 09:57:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:25.217 09:57:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:25.217 09:57:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:08:25.217 Found net devices under 0000:84:00.0: cvl_0_0 00:08:25.217 09:57:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:25.217 09:57:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:25.217 09:57:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:25.217 09:57:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:25.217 09:57:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:25.217 09:57:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:25.217 09:57:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:25.217 09:57:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:25.217 09:57:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:08:25.217 Found net devices under 0000:84:00.1: cvl_0_1 00:08:25.217 09:57:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:25.217 09:57:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@404 -- # (( 2 == 
0 )) 00:08:25.217 09:57:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # is_hw=yes 00:08:25.217 09:57:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:25.217 09:57:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:25.217 09:57:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:25.217 09:57:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:25.217 09:57:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:25.217 09:57:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:25.217 09:57:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:25.217 09:57:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:25.217 09:57:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:25.217 09:57:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:25.217 09:57:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:25.217 09:57:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:25.218 09:57:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:25.218 09:57:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:25.218 09:57:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:25.218 09:57:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:25.218 09:57:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:25.218 09:57:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:25.218 09:57:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:25.218 09:57:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:25.218 09:57:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:25.218 09:57:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:25.218 09:57:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:25.218 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:25.218 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.200 ms 00:08:25.218 00:08:25.218 --- 10.0.0.2 ping statistics --- 00:08:25.218 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:25.218 rtt min/avg/max/mdev = 0.200/0.200/0.200/0.000 ms 00:08:25.218 09:57:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:25.218 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:25.218 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.122 ms 00:08:25.218 00:08:25.218 --- 10.0.0.1 ping statistics --- 00:08:25.218 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:25.218 rtt min/avg/max/mdev = 0.122/0.122/0.122/0.000 ms 00:08:25.218 09:57:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:25.218 09:57:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # return 0 00:08:25.218 09:57:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:25.218 09:57:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:25.218 09:57:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:25.218 09:57:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:25.218 09:57:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:25.218 09:57:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:25.218 09:57:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:25.218 09:57:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:08:25.218 09:57:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:08:25.218 09:57:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:08:25.218 09:57:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:25.218 09:57:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:25.218 09:57:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:25.218 09:57:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=339152 00:08:25.218 09:57:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:08:25.218 09:57:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 339152 00:08:25.218 09:57:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 339152 ']' 00:08:25.218 09:57:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:25.218 09:57:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:25.218 09:57:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on 
UNIX domain socket /var/tmp/spdk.sock...' 00:08:25.218 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:25.218 09:57:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:25.218 09:57:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:25.218 [2024-07-25 09:57:10.311450] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:08:25.218 [2024-07-25 09:57:10.311549] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:25.218 EAL: No free 2048 kB hugepages reported on node 1 00:08:25.476 [2024-07-25 09:57:10.395509] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:25.476 [2024-07-25 09:57:10.522711] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:25.476 [2024-07-25 09:57:10.522782] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:25.476 [2024-07-25 09:57:10.522799] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:25.477 [2024-07-25 09:57:10.522813] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:25.477 [2024-07-25 09:57:10.522824] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:25.477 [2024-07-25 09:57:10.522919] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:25.477 [2024-07-25 09:57:10.522974] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:25.477 [2024-07-25 09:57:10.523024] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:08:25.477 [2024-07-25 09:57:10.523027] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:25.735 09:57:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:25.735 09:57:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:08:25.735 09:57:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:25.735 09:57:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:25.735 09:57:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:25.735 09:57:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:25.735 09:57:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:25.735 09:57:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:25.735 09:57:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:25.735 [2024-07-25 09:57:10.688097] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:25.735 09:57:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:25.735 09:57:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter 
create_subsystem 00:08:25.735 09:57:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:25.735 09:57:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:25.735 09:57:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:08:25.735 09:57:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:08:25.735 09:57:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:08:25.736 09:57:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:25.736 09:57:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:25.736 Malloc0 00:08:25.736 [2024-07-25 09:57:10.754644] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:25.736 09:57:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:25.736 09:57:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:08:25.736 09:57:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:25.736 09:57:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:25.736 09:57:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=339313 00:08:25.736 09:57:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 339313 /var/tmp/bdevperf.sock 00:08:25.736 09:57:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 339313 ']' 00:08:25.736 09:57:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:08:25.736 09:57:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:08:25.736 09:57:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:25.736 09:57:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:25.736 09:57:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:08:25.736 09:57:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:25.736 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
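The cat at host_management.sh@23 above pipes a pre-generated batch of RPCs into rpc_cmd to build the target stack; the batch itself is never echoed to this log. A representative reconstruction in rpc.py form (illustrative only, not the verbatim rpcs.txt; the Malloc0 sizing is an assumption) that would produce the Malloc0 bdev and the 10.0.0.2:4420 listener reported below:

  # back the namespace with a 64 MiB, 512 B-block RAM disk (sizes illustrative)
  bdev_malloc_create 64 512 -b Malloc0
  # create the subsystem and attach the bdev as a namespace
  nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -s SPDK0
  nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
  # listen on the namespaced target interface and admit only the test host NQN
  nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0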
00:08:25.736 09:57:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:08:25.736 09:57:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:25.736 09:57:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:08:25.736 09:57:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:25.736 09:57:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:08:25.736 { 00:08:25.736 "params": { 00:08:25.736 "name": "Nvme$subsystem", 00:08:25.736 "trtype": "$TEST_TRANSPORT", 00:08:25.736 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:25.736 "adrfam": "ipv4", 00:08:25.736 "trsvcid": "$NVMF_PORT", 00:08:25.736 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:25.736 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:25.736 "hdgst": ${hdgst:-false}, 00:08:25.736 "ddgst": ${ddgst:-false} 00:08:25.736 }, 00:08:25.736 "method": "bdev_nvme_attach_controller" 00:08:25.736 } 00:08:25.736 EOF 00:08:25.736 )") 00:08:25.736 09:57:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:08:25.736 09:57:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:08:25.736 09:57:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:08:25.736 09:57:10 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:08:25.736 "params": { 00:08:25.736 "name": "Nvme0", 00:08:25.736 "trtype": "tcp", 00:08:25.736 "traddr": "10.0.0.2", 00:08:25.736 "adrfam": "ipv4", 00:08:25.736 "trsvcid": "4420", 00:08:25.736 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:25.736 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:08:25.736 "hdgst": false, 00:08:25.736 "ddgst": false 00:08:25.736 }, 00:08:25.736 "method": "bdev_nvme_attach_controller" 00:08:25.736 }' 00:08:25.736 [2024-07-25 09:57:10.844396] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:08:25.736 [2024-07-25 09:57:10.844502] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid339313 ] 00:08:25.736 EAL: No free 2048 kB hugepages reported on node 1 00:08:25.994 [2024-07-25 09:57:10.917750] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:25.994 [2024-07-25 09:57:11.032152] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:26.252 Running I/O for 10 seconds... 
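The xtrace that follows is host_management.sh's waitforio helper polling bdevperf over its RPC socket until I/O is observed. Condensed into a standalone bash sketch (same logic and thresholds as the trace below; rpc_cmd and jq assumed available):

  # Poll bdev_get_iostat until the bdev has completed at least 100 reads,
  # retrying every 0.25 s for at most 10 attempts; returns 1 on timeout.
  waitforio() {
    local rpc_sock=$1 bdev=$2 ret=1 i read_io_count
    for ((i = 10; i != 0; i--)); do
      read_io_count=$(rpc_cmd -s "$rpc_sock" bdev_get_iostat -b "$bdev" |
        jq -r '.bdevs[0].num_read_ops')
      if [ "$read_io_count" -ge 100 ]; then
        ret=0
        break
      fi
      sleep 0.25
    done
    return $ret
  }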
00:08:26.252 09:57:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:26.252 09:57:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:08:26.252 09:57:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:08:26.252 09:57:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.252 09:57:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:26.252 09:57:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.252 09:57:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:26.252 09:57:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:08:26.252 09:57:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:08:26.252 09:57:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:08:26.252 09:57:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:08:26.252 09:57:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:08:26.252 09:57:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:08:26.252 09:57:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:08:26.252 09:57:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:08:26.252 09:57:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.252 09:57:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:26.252 09:57:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:08:26.252 09:57:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.252 09:57:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:08:26.252 09:57:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:08:26.252 09:57:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:08:26.512 09:57:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:08:26.512 09:57:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:08:26.512 09:57:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:08:26.512 09:57:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:08:26.512 09:57:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.512 09:57:11 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:26.512 09:57:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.512 09:57:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=526 00:08:26.512 09:57:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 526 -ge 100 ']' 00:08:26.512 09:57:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:08:26.513 09:57:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:08:26.513 09:57:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:08:26.513 09:57:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:08:26.513 09:57:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.513 09:57:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:26.513 [2024-07-25 09:57:11.637557] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbef3c0 is same with the state(5) to be set 00:08:26.513 [2024-07-25 09:57:11.637659] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbef3c0 is same with the state(5) to be set 00:08:26.513 [2024-07-25 09:57:11.637676] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbef3c0 is same with the state(5) to be set 00:08:26.513 [2024-07-25 09:57:11.637688] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbef3c0 is same with the state(5) to be set 00:08:26.513 [2024-07-25 09:57:11.637701] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbef3c0 is same with the state(5) to be set 00:08:26.513 [2024-07-25 09:57:11.637714] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbef3c0 is same with the state(5) to be set 00:08:26.513 [2024-07-25 09:57:11.637725] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbef3c0 is same with the state(5) to be set 00:08:26.513 [2024-07-25 09:57:11.637737] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbef3c0 is same with the state(5) to be set 00:08:26.513 [2024-07-25 09:57:11.637749] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbef3c0 is same with the state(5) to be set 00:08:26.513 [2024-07-25 09:57:11.637761] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbef3c0 is same with the state(5) to be set 00:08:26.513 [2024-07-25 09:57:11.637774] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbef3c0 is same with the state(5) to be set 00:08:26.513 [2024-07-25 09:57:11.637796] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbef3c0 is same with the state(5) to be set 00:08:26.513 [2024-07-25 09:57:11.637808] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbef3c0 is same with the state(5) to be set 00:08:26.513 [2024-07-25 09:57:11.637820] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbef3c0 is same with the state(5) to be set 00:08:26.513 [2024-07-25 09:57:11.637832] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbef3c0 is same with the state(5) to be set 00:08:26.513 [... the identical tcp.c:1653 *ERROR* line repeats once per timestamp from 09:57:11.637844 through 09:57:11.638068; duplicates omitted ...] [2024-07-25 09:57:11.638080] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbef3c0 is same with the state(5) to be set
00:08:26.513 [2024-07-25 09:57:11.638092] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbef3c0 is same with the state(5) to be set 00:08:26.513 [... the identical *ERROR* line repeats once per timestamp from 09:57:11.638104 through 09:57:11.638258; duplicates omitted ...] [2024-07-25 09:57:11.638270] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbef3c0 is same with the state(5) to be set 00:08:26.513 09:57:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.513 09:57:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:08:26.513 09:57:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:26.513 09:57:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:26.513 [2024-07-25 09:57:11.642897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:76032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:26.513 [2024-07-25 09:57:11.642938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:26.513 [2024-07-25 09:57:11.642966]
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:76160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:26.513 [2024-07-25 09:57:11.642982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:26.513 [... the same nvme_io_qpair_print_command / spdk_nvme_print_completion pair repeats for every remaining outstanding I/O on the deleted queue: READ cid:20-63 (lba 76288-81792) and WRITE cid:0-14 (lba 81920-83712), each ABORTED - SQ DELETION; duplicates omitted ...] [2024-07-25 09:57:11.644798] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:83840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:26.515 [2024-07-25 09:57:11.644812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:26.515 [2024-07-25 09:57:11.644827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:83968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:26.515 [2024-07-25 09:57:11.644840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:26.515 [2024-07-25 09:57:11.644855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:84096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:08:26.515 [2024-07-25 09:57:11.644868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:26.515 [2024-07-25 09:57:11.644968] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1bafd70 was disconnected and freed. reset controller. 00:08:26.515 [2024-07-25 09:57:11.646133] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:08:26.515 task offset: 76032 on job bdev=Nvme0n1 fails 00:08:26.515 00:08:26.515 Latency(us) 00:08:26.515 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:26.515 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:08:26.515 Job: Nvme0n1 ended in about 0.43 seconds with error 00:08:26.515 Verification LBA range: start 0x0 length 0x400 00:08:26.515 Nvme0n1 : 0.43 1396.97 87.31 150.52 0.00 40260.25 2451.53 34175.81 00:08:26.515 =================================================================================================================== 00:08:26.515 Total : 1396.97 87.31 150.52 0.00 40260.25 2451.53 34175.81 00:08:26.515 [2024-07-25 09:57:11.648232] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:26.515 [2024-07-25 09:57:11.648262] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x179f540 (9): Bad file descriptor 00:08:26.515 09:57:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:26.515 09:57:11 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:08:26.773 [2024-07-25 09:57:11.790616] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
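Reduced to its RPC skeleton, the host-management scenario the trace above just exercised is simply (both commands appear verbatim in the xtrace; comments added):

  # Revoke the host's access: the target tears down its queue pairs, every
  # outstanding bdevperf I/O completes as ABORTED - SQ DELETION, and the NVMe
  # driver begins a controller reset.
  rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
  # Re-admit the host: the pending reset can now reconnect, which is what
  # "Resetting controller successful." above confirms.
  rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0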
00:08:27.704 09:57:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 339313 00:08:27.704 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (339313) - No such process 00:08:27.704 09:57:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:08:27.704 09:57:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:08:27.704 09:57:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:08:27.704 09:57:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:08:27.704 09:57:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:08:27.704 09:57:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:08:27.704 09:57:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:08:27.704 09:57:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:08:27.704 { 00:08:27.704 "params": { 00:08:27.704 "name": "Nvme$subsystem", 00:08:27.704 "trtype": "$TEST_TRANSPORT", 00:08:27.704 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:27.704 "adrfam": "ipv4", 00:08:27.704 "trsvcid": "$NVMF_PORT", 00:08:27.704 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:27.704 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:27.704 "hdgst": ${hdgst:-false}, 00:08:27.704 "ddgst": ${ddgst:-false} 00:08:27.704 }, 00:08:27.704 "method": "bdev_nvme_attach_controller" 00:08:27.704 } 00:08:27.704 EOF 00:08:27.704 )") 00:08:27.704 09:57:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:08:27.704 09:57:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:08:27.704 09:57:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:08:27.704 09:57:12 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:08:27.704 "params": { 00:08:27.704 "name": "Nvme0", 00:08:27.704 "trtype": "tcp", 00:08:27.704 "traddr": "10.0.0.2", 00:08:27.704 "adrfam": "ipv4", 00:08:27.704 "trsvcid": "4420", 00:08:27.704 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:27.704 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:08:27.704 "hdgst": false, 00:08:27.704 "ddgst": false 00:08:27.704 }, 00:08:27.704 "method": "bdev_nvme_attach_controller" 00:08:27.704 }' 00:08:27.704 [2024-07-25 09:57:12.702360] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:08:27.704 [2024-07-25 09:57:12.702464] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid339475 ] 00:08:27.704 EAL: No free 2048 kB hugepages reported on node 1 00:08:27.704 [2024-07-25 09:57:12.762217] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:27.960 [2024-07-25 09:57:12.871347] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:28.216 Running I/O for 1 seconds... 00:08:29.146 00:08:29.146 Latency(us) 00:08:29.146 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:29.146 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:08:29.146 Verification LBA range: start 0x0 length 0x400 00:08:29.146 Nvme0n1 : 1.05 1403.16 87.70 0.00 0.00 43136.18 1759.76 43690.67 00:08:29.146 =================================================================================================================== 00:08:29.146 Total : 1403.16 87.70 0.00 0.00 43136.18 1759.76 43690.67 00:08:29.403 09:57:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:08:29.403 09:57:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:08:29.404 09:57:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:08:29.404 09:57:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:08:29.404 09:57:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:08:29.404 09:57:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:29.404 09:57:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # sync 00:08:29.404 09:57:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:29.404 09:57:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@120 -- # set +e 00:08:29.404 09:57:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:29.404 09:57:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:29.404 rmmod nvme_tcp 00:08:29.404 rmmod nvme_fabrics 00:08:29.404 rmmod nvme_keyring 00:08:29.404 09:57:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:29.404 09:57:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set -e 00:08:29.404 09:57:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 00:08:29.404 09:57:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 339152 ']' 00:08:29.404 09:57:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 339152 00:08:29.404 09:57:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@950 -- # '[' -z 339152 ']' 00:08:29.404 09:57:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # kill -0 339152 00:08:29.404 09:57:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
common/autotest_common.sh@955 -- # uname 00:08:29.661 09:57:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:29.661 09:57:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 339152 00:08:29.661 09:57:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:08:29.661 09:57:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:08:29.661 09:57:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@968 -- # echo 'killing process with pid 339152' 00:08:29.661 killing process with pid 339152 00:08:29.661 09:57:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@969 -- # kill 339152 00:08:29.661 09:57:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@974 -- # wait 339152 00:08:29.920 [2024-07-25 09:57:14.898074] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:08:29.920 09:57:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:29.920 09:57:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:29.920 09:57:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:29.920 09:57:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:29.920 09:57:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:29.920 09:57:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:29.920 09:57:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:29.920 09:57:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:31.825 09:57:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:31.825 09:57:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:08:31.825 00:08:31.825 real 0m9.332s 00:08:31.825 user 0m20.896s 00:08:31.825 sys 0m3.202s 00:08:31.825 09:57:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:31.825 09:57:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:31.825 ************************************ 00:08:31.825 END TEST nvmf_host_management 00:08:31.825 ************************************ 00:08:32.085 09:57:17 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:08:32.085 09:57:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:32.085 09:57:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:32.085 09:57:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:32.085 ************************************ 00:08:32.085 START TEST nvmf_lvol 00:08:32.085 ************************************ 00:08:32.085 09:57:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:08:32.085 * Looking for test storage... 00:08:32.085 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:32.085 09:57:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:32.085 09:57:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:08:32.085 09:57:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:32.085 09:57:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:32.085 09:57:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:32.085 09:57:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:32.085 09:57:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:32.085 09:57:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:32.085 09:57:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:32.085 09:57:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:32.085 09:57:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:32.085 09:57:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:32.085 09:57:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:08:32.085 09:57:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:08:32.085 09:57:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:32.085 09:57:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:32.085 09:57:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:32.085 09:57:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:32.085 09:57:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:32.085 09:57:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:32.085 09:57:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:32.085 09:57:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:32.085 09:57:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:32.085 09:57:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:08:32.086 09:57:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:08:32.086 09:57:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:32.086 09:57:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:32.086 09:57:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:32.086 09:57:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:32.086 09:57:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:32.086 09:57:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:32.086 09:57:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 
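The build_nvmf_app_args helper traced above accumulates the target's command line in the NVMF_APP array before nvmfappstart launches it. A minimal sketch of that pattern, with the binary path, shm id, and masks taken from the nvmf_tgt invocation later in this log:

  # assemble the nvmf_tgt command line the way nvmf/common.sh does above
  NVMF_APP=(/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt)
  NVMF_APP_SHM_ID=0
  NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)  # shared-memory id, tracepoint group mask
  "${NVMF_APP[@]}" -m 0x7 &                    # -m 0x7: reactors on cores 0-2
  nvmfpid=$!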
00:08:32.086 09:57:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:32.086 09:57:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:32.086 09:57:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:32.086 09:57:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:08:32.086 09:57:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:08:32.086 09:57:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:32.086 09:57:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:08:32.086 09:57:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:32.086 09:57:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:32.086 09:57:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:32.086 09:57:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:32.086 09:57:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:32.086 09:57:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:32.086 09:57:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:32.086 09:57:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:32.086 09:57:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:32.086 09:57:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:32.086 09:57:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@285 -- # xtrace_disable 00:08:32.086 09:57:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:34.668 09:57:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:34.668 09:57:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # pci_devs=() 00:08:34.668 09:57:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:34.668 09:57:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:34.668 09:57:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:34.668 09:57:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:34.668 09:57:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:34.668 09:57:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@295 -- # net_devs=() 00:08:34.668 09:57:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:34.668 09:57:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@296 -- # e810=() 00:08:34.668 09:57:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@296 -- # local -ga e810 00:08:34.668 09:57:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # x722=() 00:08:34.668 09:57:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # local -ga x722 00:08:34.668 09:57:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # mlx=() 
00:08:34.668 09:57:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # local -ga mlx 00:08:34.668 09:57:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:34.668 09:57:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:34.668 09:57:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:34.668 09:57:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:34.668 09:57:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:34.668 09:57:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:34.668 09:57:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:34.668 09:57:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:34.668 09:57:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:34.669 09:57:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:34.669 09:57:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:34.669 09:57:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:34.669 09:57:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:34.669 09:57:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:34.669 09:57:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:34.669 09:57:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:34.669 09:57:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:34.669 09:57:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:34.669 09:57:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:08:34.669 Found 0000:84:00.0 (0x8086 - 0x159b) 00:08:34.669 09:57:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:34.669 09:57:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:34.669 09:57:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:34.669 09:57:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:34.669 09:57:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:34.669 09:57:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:34.669 09:57:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:08:34.669 Found 0000:84:00.1 (0x8086 - 0x159b) 00:08:34.669 09:57:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:34.669 09:57:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:34.669 09:57:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:34.669 09:57:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:34.669 09:57:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:34.669 09:57:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:34.669 09:57:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:34.669 09:57:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:34.669 09:57:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:34.669 09:57:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:34.669 09:57:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:34.669 09:57:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:34.669 09:57:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:34.669 09:57:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:34.669 09:57:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:34.669 09:57:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:08:34.669 Found net devices under 0000:84:00.0: cvl_0_0 00:08:34.669 09:57:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:34.669 09:57:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:34.669 09:57:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:34.669 09:57:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:34.669 09:57:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:34.669 09:57:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:34.669 09:57:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:34.669 09:57:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:34.669 09:57:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:08:34.669 Found net devices under 0000:84:00.1: cvl_0_1 00:08:34.669 09:57:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:34.669 09:57:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:34.669 09:57:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # is_hw=yes 00:08:34.669 09:57:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:34.669 09:57:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:34.669 09:57:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:34.669 09:57:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:34.669 09:57:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:34.669 09:57:19 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:34.669 09:57:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:34.669 09:57:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:34.669 09:57:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:34.669 09:57:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:34.669 09:57:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:34.669 09:57:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:34.669 09:57:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:34.669 09:57:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:34.669 09:57:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:34.669 09:57:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:34.669 09:57:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:34.669 09:57:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:34.669 09:57:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:34.669 09:57:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:34.669 09:57:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:34.669 09:57:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:34.669 09:57:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:34.669 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:34.669 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.257 ms 00:08:34.669 00:08:34.669 --- 10.0.0.2 ping statistics --- 00:08:34.669 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:34.669 rtt min/avg/max/mdev = 0.257/0.257/0.257/0.000 ms 00:08:34.669 09:57:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:34.669 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:34.669 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.139 ms 00:08:34.669 00:08:34.669 --- 10.0.0.1 ping statistics --- 00:08:34.669 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:34.669 rtt min/avg/max/mdev = 0.139/0.139/0.139/0.000 ms 00:08:34.669 09:57:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:34.669 09:57:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # return 0 00:08:34.669 09:57:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:34.669 09:57:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:34.669 09:57:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:34.669 09:57:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:34.669 09:57:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:34.669 09:57:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:34.669 09:57:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:34.669 09:57:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:08:34.669 09:57:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:34.669 09:57:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:34.669 09:57:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:34.669 09:57:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=341766 00:08:34.669 09:57:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:08:34.669 09:57:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 341766 00:08:34.669 09:57:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@831 -- # '[' -z 341766 ']' 00:08:34.669 09:57:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:34.669 09:57:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:34.669 09:57:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:34.669 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:34.669 09:57:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:34.669 09:57:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:34.928 [2024-07-25 09:57:19.839334] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:08:34.928 [2024-07-25 09:57:19.839490] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:34.928 EAL: No free 2048 kB hugepages reported on node 1 00:08:34.928 [2024-07-25 09:57:19.932953] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:34.928 [2024-07-25 09:57:20.069799] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:34.928 [2024-07-25 09:57:20.069866] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:34.928 [2024-07-25 09:57:20.069884] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:34.928 [2024-07-25 09:57:20.069899] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:34.928 [2024-07-25 09:57:20.069911] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:34.928 [2024-07-25 09:57:20.070009] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:34.928 [2024-07-25 09:57:20.070067] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:34.928 [2024-07-25 09:57:20.070070] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:35.186 09:57:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:35.186 09:57:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # return 0 00:08:35.186 09:57:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:35.186 09:57:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:35.186 09:57:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:35.444 09:57:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:35.444 09:57:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:35.701 [2024-07-25 09:57:20.637180] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:35.701 09:57:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:35.960 09:57:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:08:35.960 09:57:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:36.218 09:57:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:08:36.218 09:57:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:08:36.475 09:57:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:08:37.040 09:57:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=8129083e-9d65-4ec0-bfa8-29f98c32a841 
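The RPC calls above assemble the test's storage stack bottom-up: two 64 MiB malloc bdevs, a RAID0 across them, and a logical-volume store on top. Condensed into one runnable sketch (rpc.py path as used throughout this log; the returned lvstore UUID differs per run):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc bdev_malloc_create 64 512                                  # -> Malloc0 (64 MiB, 512 B blocks)
  $rpc bdev_malloc_create 64 512                                  # -> Malloc1
  $rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'  # 64 KiB strips, RAID level 0
  lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)                  # prints the new lvstore UUID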
00:08:37.040 09:57:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 8129083e-9d65-4ec0-bfa8-29f98c32a841 lvol 20 00:08:37.298 09:57:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=75751883-9c1d-41bc-bbc6-cc23cdc641c5 00:08:37.298 09:57:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:37.556 09:57:22 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 75751883-9c1d-41bc-bbc6-cc23cdc641c5 00:08:38.121 09:57:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:38.377 [2024-07-25 09:57:23.409895] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:38.377 09:57:23 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:38.941 09:57:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=342256 00:08:38.941 09:57:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:08:38.941 09:57:24 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:08:38.941 EAL: No free 2048 kB hugepages reported on node 1 00:08:40.314 09:57:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 75751883-9c1d-41bc-bbc6-cc23cdc641c5 MY_SNAPSHOT 00:08:40.314 09:57:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=66d5cbf1-4bd9-4e21-869a-4af09d5aa27b 00:08:40.314 09:57:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 75751883-9c1d-41bc-bbc6-cc23cdc641c5 30 00:08:40.880 09:57:25 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 66d5cbf1-4bd9-4e21-869a-4af09d5aa27b MY_CLONE 00:08:41.137 09:57:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=c3676a2d-1d69-4fb9-a701-4a182a1636be 00:08:41.137 09:57:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate c3676a2d-1d69-4fb9-a701-4a182a1636be 00:08:42.070 09:57:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 342256 00:08:50.176 Initializing NVMe Controllers 00:08:50.176 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:08:50.176 Controller IO queue size 128, less than required. 00:08:50.176 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
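While bdevperf (pid 342256 here) runs random writes against the volume, the script walks the lvol lifecycle seen above: snapshot the live volume, grow it from its initial 20 MiB to 30 MiB, clone the snapshot, then inflate the clone so it no longer depends on the snapshot. The same sequence as a sketch, using this run's UUIDs:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc bdev_lvol_snapshot 75751883-9c1d-41bc-bbc6-cc23cdc641c5 MY_SNAPSHOT  # read-only snapshot of the lvol
  $rpc bdev_lvol_resize 75751883-9c1d-41bc-bbc6-cc23cdc641c5 30             # grow lvol 20 MiB -> 30 MiB
  $rpc bdev_lvol_clone 66d5cbf1-4bd9-4e21-869a-4af09d5aa27b MY_CLONE        # thin clone of the snapshot
  $rpc bdev_lvol_inflate c3676a2d-1d69-4fb9-a701-4a182a1636be               # allocate all clusters, drop the snapshot dependency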
00:08:50.176 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:08:50.176 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:08:50.176 Initialization complete. Launching workers. 00:08:50.176 ======================================================== 00:08:50.176 Latency(us) 00:08:50.176 Device Information : IOPS MiB/s Average min max 00:08:50.176 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10705.70 41.82 11965.98 1536.57 62592.17 00:08:50.176 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10578.40 41.32 12107.41 2125.76 77004.05 00:08:50.176 ======================================================== 00:08:50.176 Total : 21284.10 83.14 12036.27 1536.57 77004.05 00:08:50.176 00:08:50.176 09:57:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:50.176 09:57:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 75751883-9c1d-41bc-bbc6-cc23cdc641c5 00:08:50.176 09:57:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 8129083e-9d65-4ec0-bfa8-29f98c32a841 00:08:50.742 09:57:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:08:50.742 09:57:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:08:50.742 09:57:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:08:50.742 09:57:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:50.742 09:57:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:08:50.742 09:57:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:50.742 09:57:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:08:50.742 09:57:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:50.742 09:57:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:50.742 rmmod nvme_tcp 00:08:50.742 rmmod nvme_fabrics 00:08:50.742 rmmod nvme_keyring 00:08:50.742 09:57:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:50.742 09:57:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:08:50.742 09:57:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:08:50.742 09:57:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 341766 ']' 00:08:50.742 09:57:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 341766 00:08:50.742 09:57:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@950 -- # '[' -z 341766 ']' 00:08:50.742 09:57:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # kill -0 341766 00:08:50.742 09:57:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@955 -- # uname 00:08:50.742 09:57:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:50.742 09:57:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 341766 00:08:50.742 09:57:35 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:50.742 09:57:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:50.742 09:57:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@968 -- # echo 'killing process with pid 341766' 00:08:50.742 killing process with pid 341766 00:08:50.742 09:57:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@969 -- # kill 341766 00:08:50.742 09:57:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@974 -- # wait 341766 00:08:51.001 09:57:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:51.001 09:57:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:51.001 09:57:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:51.001 09:57:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:51.001 09:57:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:51.001 09:57:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:51.001 09:57:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:51.001 09:57:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:53.534 09:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:53.534 00:08:53.534 real 0m21.135s 00:08:53.534 user 1m11.663s 00:08:53.534 sys 0m6.249s 00:08:53.534 09:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:53.534 09:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:53.534 ************************************ 00:08:53.534 END TEST nvmf_lvol 00:08:53.534 ************************************ 00:08:53.534 09:57:38 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:08:53.534 09:57:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:53.534 09:57:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:53.534 09:57:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:53.534 ************************************ 00:08:53.534 START TEST nvmf_lvs_grow 00:08:53.534 ************************************ 00:08:53.534 09:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:08:53.534 * Looking for test storage... 
00:08:53.534 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:53.534 09:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:53.534 09:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:08:53.534 09:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:53.534 09:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:53.534 09:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:53.534 09:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:53.534 09:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:53.534 09:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:53.534 09:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:53.534 09:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:53.534 09:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:53.534 09:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:53.534 09:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:08:53.534 09:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:08:53.534 09:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:53.534 09:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:53.534 09:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:53.534 09:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:53.534 09:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:53.534 09:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:53.534 09:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:53.534 09:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:53.534 09:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:53.534 09:57:38 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:08:53.534 09:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:08:53.534 09:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:53.534 09:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:53.534 09:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:53.534 09:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:53.534 09:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:53.534 09:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:53.534 09:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:53.534 09:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:53.534 09:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:53.534 09:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:08:53.534 09:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:08:53.535 09:57:38 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:53.535 09:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:53.535 09:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:53.535 09:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:53.535 09:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:53.535 09:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:53.535 09:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:53.535 09:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:53.535 09:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:53.535 09:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:53.535 09:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@285 -- # xtrace_disable 00:08:53.535 09:57:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:56.128 09:57:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:56.128 09:57:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # pci_devs=() 00:08:56.128 09:57:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:56.128 09:57:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:56.128 09:57:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:56.128 09:57:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:56.128 09:57:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:56.128 09:57:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@295 -- # net_devs=() 00:08:56.128 09:57:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:56.128 09:57:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@296 -- # e810=() 00:08:56.128 09:57:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@296 -- # local -ga e810 00:08:56.128 09:57:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # x722=() 00:08:56.128 09:57:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # local -ga x722 00:08:56.128 09:57:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # mlx=() 00:08:56.128 09:57:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # local -ga mlx 00:08:56.128 09:57:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:56.128 09:57:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:56.128 09:57:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:56.128 09:57:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:56.128 09:57:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:56.128 09:57:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:56.128 09:57:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:56.128 09:57:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:56.128 09:57:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:56.128 09:57:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:56.128 09:57:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:56.128 09:57:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:56.128 09:57:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:56.128 09:57:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:56.128 09:57:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:56.128 09:57:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:56.128 09:57:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:56.128 09:57:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:56.128 09:57:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:08:56.128 Found 0000:84:00.0 (0x8086 - 0x159b) 00:08:56.128 09:57:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:56.128 09:57:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:56.128 09:57:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:56.128 09:57:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:56.128 09:57:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:56.128 09:57:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:56.128 09:57:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:08:56.128 Found 0000:84:00.1 (0x8086 - 0x159b) 00:08:56.128 09:57:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:56.128 09:57:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:56.128 09:57:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:56.128 09:57:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:56.128 09:57:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:56.129 09:57:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:56.129 09:57:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:56.129 09:57:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:56.129 
09:57:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:56.129 09:57:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:56.129 09:57:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:56.129 09:57:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:56.129 09:57:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:56.129 09:57:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:56.129 09:57:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:56.129 09:57:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:08:56.129 Found net devices under 0000:84:00.0: cvl_0_0 00:08:56.129 09:57:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:56.129 09:57:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:56.129 09:57:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:56.129 09:57:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:56.129 09:57:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:56.129 09:57:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:56.129 09:57:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:56.129 09:57:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:56.129 09:57:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:08:56.129 Found net devices under 0000:84:00.1: cvl_0_1 00:08:56.129 09:57:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:56.129 09:57:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:56.129 09:57:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # is_hw=yes 00:08:56.129 09:57:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:56.129 09:57:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:56.129 09:57:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:56.129 09:57:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:56.129 09:57:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:56.129 09:57:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:56.129 09:57:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:56.129 09:57:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:56.129 09:57:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:56.129 09:57:40 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:56.129 09:57:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:56.129 09:57:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:56.129 09:57:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:56.129 09:57:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:56.129 09:57:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:56.129 09:57:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:56.129 09:57:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:56.129 09:57:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:56.129 09:57:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:56.129 09:57:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:56.129 09:57:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:56.129 09:57:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:56.129 09:57:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:56.129 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:56.129 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.201 ms 00:08:56.129 00:08:56.129 --- 10.0.0.2 ping statistics --- 00:08:56.129 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:56.129 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms 00:08:56.129 09:57:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:56.129 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:56.129 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.141 ms 00:08:56.129 00:08:56.129 --- 10.0.0.1 ping statistics --- 00:08:56.129 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:56.129 rtt min/avg/max/mdev = 0.141/0.141/0.141/0.000 ms 00:08:56.129 09:57:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:56.129 09:57:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # return 0 00:08:56.129 09:57:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:56.129 09:57:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:56.129 09:57:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:56.129 09:57:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:56.129 09:57:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:56.129 09:57:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:56.129 09:57:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:56.129 09:57:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:08:56.129 09:57:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:56.129 09:57:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:56.129 09:57:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:56.129 09:57:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=345667 00:08:56.129 09:57:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:56.129 09:57:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 345667 00:08:56.129 09:57:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@831 -- # '[' -z 345667 ']' 00:08:56.129 09:57:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:56.129 09:57:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:56.129 09:57:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:56.129 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:56.129 09:57:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:56.129 09:57:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:56.129 [2024-07-25 09:57:40.885566] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
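The nvmf_tcp_init sequence traced above reduces to a small two-port topology: one port of the ice NIC is moved into a private network namespace to act as the target, while its sibling stays in the root namespace as the initiator. A condensed sketch, using the exact commands from this run (the names cvl_0_0 and cvl_0_1 belong to the NICs found at 0000:84:00.0 and 0000:84:00.1 on this host and will differ on other machines):

    # target port goes into its own namespace and gets 10.0.0.2
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    # initiator port stays in the root namespace on 10.0.0.1
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # open the NVMe/TCP port and prove reachability in both directions
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

Both pings answered above, so nvmf_tgt is started inside the namespace (ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt -i 0 -e 0xFFFF -m 0x1), which is the SPDK/EAL startup being logged here.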
00:08:56.129 [2024-07-25 09:57:40.885662] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:56.129 EAL: No free 2048 kB hugepages reported on node 1 00:08:56.129 [2024-07-25 09:57:40.964662] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:56.129 [2024-07-25 09:57:41.089969] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:56.129 [2024-07-25 09:57:41.090033] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:56.129 [2024-07-25 09:57:41.090049] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:56.129 [2024-07-25 09:57:41.090063] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:56.129 [2024-07-25 09:57:41.090074] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:56.129 [2024-07-25 09:57:41.090107] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:56.129 09:57:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:56.129 09:57:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # return 0 00:08:56.129 09:57:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:56.129 09:57:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:56.129 09:57:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:56.129 09:57:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:56.129 09:57:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:56.695 [2024-07-25 09:57:41.729465] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:56.695 09:57:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:08:56.695 09:57:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:08:56.695 09:57:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:56.695 09:57:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:56.695 ************************************ 00:08:56.695 START TEST lvs_grow_clean 00:08:56.695 ************************************ 00:08:56.695 09:57:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1125 -- # lvs_grow 00:08:56.695 09:57:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:08:56.695 09:57:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:56.695 09:57:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:56.695 09:57:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local 
aio_init_size_mb=200 00:08:56.695 09:57:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:56.695 09:57:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:56.695 09:57:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:56.695 09:57:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:56.695 09:57:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:57.260 09:57:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:57.260 09:57:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:57.518 09:57:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=4bdf8bb3-9f17-4a5d-934a-506fb49df4cc 00:08:57.518 09:57:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4bdf8bb3-9f17-4a5d-934a-506fb49df4cc 00:08:57.518 09:57:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:57.776 09:57:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:57.776 09:57:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:57.776 09:57:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 4bdf8bb3-9f17-4a5d-934a-506fb49df4cc lvol 150 00:08:58.350 09:57:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=0c5148f1-86a6-47bc-9c7d-89e763678e51 00:08:58.350 09:57:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:58.350 09:57:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:58.610 [2024-07-25 09:57:43.600926] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:08:58.610 [2024-07-25 09:57:43.601022] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:58.610 true 00:08:58.610 09:57:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4bdf8bb3-9f17-4a5d-934a-506fb49df4cc 00:08:58.610 09:57:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:58.868 09:57:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:58.868 09:57:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:59.126 09:57:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 0c5148f1-86a6-47bc-9c7d-89e763678e51 00:08:59.384 09:57:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:59.949 [2024-07-25 09:57:44.812693] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:59.949 09:57:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:00.206 09:57:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=346232 00:09:00.206 09:57:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:09:00.206 09:57:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:00.206 09:57:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 346232 /var/tmp/bdevperf.sock 00:09:00.206 09:57:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@831 -- # '[' -z 346232 ']' 00:09:00.206 09:57:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:00.206 09:57:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:00.206 09:57:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:00.206 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:00.206 09:57:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:00.206 09:57:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:09:00.206 [2024-07-25 09:57:45.170268] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
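bdevperf was just launched with -z, so it sits idle until the test is kicked off over its RPC socket; a good moment to restate the fixture that lvs_grow assembled in the trace above. The sequence below is condensed from this run, with rpc.py standing in for scripts/rpc.py, paths shortened, and <lvs-uuid> denoting the store UUID printed above (4bdf8bb3-9f17-4a5d-934a-506fb49df4cc):

    truncate -s 200M .../test/nvmf/target/aio_bdev       # 200 MiB backing file
    rpc.py bdev_aio_create .../aio_bdev aio_bdev 4096    # AIO bdev with 4 KiB blocks
    rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 \
        --md-pages-per-cluster-ratio 300 aio_bdev lvs    # prints <lvs-uuid>
    rpc.py bdev_lvol_get_lvstores -u <lvs-uuid> \
        | jq -r '.[0].total_data_clusters'               # expect 49
    rpc.py bdev_lvol_create -u <lvs-uuid> lvol 150       # 150 MiB lvol
    truncate -s 400M .../aio_bdev                        # grow the file under the bdev
    rpc.py bdev_aio_rescan aio_bdev                      # 51200 -> 102400 blocks, store still 49

The arithmetic behind the assertions: 200 MiB at the 4 MiB cluster size is 50 clusters, one of which is consumed by lvstore metadata in this configuration, hence total_data_clusters == 49; after bdev_lvol_grow_lvstore is issued mid-run the same query is expected to report 99 (100 minus the same metadata cluster). The 150 MiB lvol request is likewise rounded up to whole clusters, which is why the exported bdev reports num_blocks 38912 (38 clusters of 4 MiB at a 4096-byte block size).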
00:09:00.206 [2024-07-25 09:57:45.170372] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid346232 ] 00:09:00.206 EAL: No free 2048 kB hugepages reported on node 1 00:09:00.206 [2024-07-25 09:57:45.245772] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:00.206 [2024-07-25 09:57:45.370915] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:00.464 09:57:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:00.464 09:57:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # return 0 00:09:00.464 09:57:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:09:01.397 Nvme0n1 00:09:01.397 09:57:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:09:01.397 [ 00:09:01.397 { 00:09:01.397 "name": "Nvme0n1", 00:09:01.397 "aliases": [ 00:09:01.397 "0c5148f1-86a6-47bc-9c7d-89e763678e51" 00:09:01.397 ], 00:09:01.397 "product_name": "NVMe disk", 00:09:01.397 "block_size": 4096, 00:09:01.397 "num_blocks": 38912, 00:09:01.397 "uuid": "0c5148f1-86a6-47bc-9c7d-89e763678e51", 00:09:01.397 "assigned_rate_limits": { 00:09:01.397 "rw_ios_per_sec": 0, 00:09:01.397 "rw_mbytes_per_sec": 0, 00:09:01.397 "r_mbytes_per_sec": 0, 00:09:01.397 "w_mbytes_per_sec": 0 00:09:01.397 }, 00:09:01.397 "claimed": false, 00:09:01.397 "zoned": false, 00:09:01.397 "supported_io_types": { 00:09:01.397 "read": true, 00:09:01.397 "write": true, 00:09:01.397 "unmap": true, 00:09:01.397 "flush": true, 00:09:01.397 "reset": true, 00:09:01.397 "nvme_admin": true, 00:09:01.397 "nvme_io": true, 00:09:01.397 "nvme_io_md": false, 00:09:01.397 "write_zeroes": true, 00:09:01.397 "zcopy": false, 00:09:01.397 "get_zone_info": false, 00:09:01.397 "zone_management": false, 00:09:01.397 "zone_append": false, 00:09:01.397 "compare": true, 00:09:01.397 "compare_and_write": true, 00:09:01.397 "abort": true, 00:09:01.397 "seek_hole": false, 00:09:01.397 "seek_data": false, 00:09:01.397 "copy": true, 00:09:01.397 "nvme_iov_md": false 00:09:01.397 }, 00:09:01.397 "memory_domains": [ 00:09:01.397 { 00:09:01.397 "dma_device_id": "system", 00:09:01.397 "dma_device_type": 1 00:09:01.397 } 00:09:01.397 ], 00:09:01.397 "driver_specific": { 00:09:01.397 "nvme": [ 00:09:01.397 { 00:09:01.397 "trid": { 00:09:01.397 "trtype": "TCP", 00:09:01.397 "adrfam": "IPv4", 00:09:01.397 "traddr": "10.0.0.2", 00:09:01.397 "trsvcid": "4420", 00:09:01.397 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:09:01.397 }, 00:09:01.397 "ctrlr_data": { 00:09:01.397 "cntlid": 1, 00:09:01.397 "vendor_id": "0x8086", 00:09:01.397 "model_number": "SPDK bdev Controller", 00:09:01.397 "serial_number": "SPDK0", 00:09:01.397 "firmware_revision": "24.09", 00:09:01.397 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:01.397 "oacs": { 00:09:01.397 "security": 0, 00:09:01.397 "format": 0, 00:09:01.397 "firmware": 0, 00:09:01.397 "ns_manage": 0 00:09:01.397 }, 00:09:01.397 
"multi_ctrlr": true, 00:09:01.397 "ana_reporting": false 00:09:01.397 }, 00:09:01.397 "vs": { 00:09:01.397 "nvme_version": "1.3" 00:09:01.397 }, 00:09:01.397 "ns_data": { 00:09:01.397 "id": 1, 00:09:01.397 "can_share": true 00:09:01.397 } 00:09:01.397 } 00:09:01.397 ], 00:09:01.397 "mp_policy": "active_passive" 00:09:01.397 } 00:09:01.397 } 00:09:01.397 ] 00:09:01.397 09:57:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=346377 00:09:01.397 09:57:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:09:01.397 09:57:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:01.655 Running I/O for 10 seconds... 00:09:02.587 Latency(us) 00:09:02.587 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:02.587 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:02.587 Nvme0n1 : 1.00 14054.00 54.90 0.00 0.00 0.00 0.00 0.00 00:09:02.587 =================================================================================================================== 00:09:02.587 Total : 14054.00 54.90 0.00 0.00 0.00 0.00 0.00 00:09:02.587 00:09:03.520 09:57:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 4bdf8bb3-9f17-4a5d-934a-506fb49df4cc 00:09:03.778 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:03.778 Nvme0n1 : 2.00 14176.50 55.38 0.00 0.00 0.00 0.00 0.00 00:09:03.778 =================================================================================================================== 00:09:03.778 Total : 14176.50 55.38 0.00 0.00 0.00 0.00 0.00 00:09:03.778 00:09:04.035 true 00:09:04.035 09:57:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4bdf8bb3-9f17-4a5d-934a-506fb49df4cc 00:09:04.035 09:57:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:09:04.294 09:57:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:09:04.294 09:57:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:09:04.294 09:57:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 346377 00:09:04.859 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:04.859 Nvme0n1 : 3.00 14320.33 55.94 0.00 0.00 0.00 0.00 0.00 00:09:04.859 =================================================================================================================== 00:09:04.859 Total : 14320.33 55.94 0.00 0.00 0.00 0.00 0.00 00:09:04.859 00:09:05.790 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:05.790 Nvme0n1 : 4.00 14377.50 56.16 0.00 0.00 0.00 0.00 0.00 00:09:05.790 =================================================================================================================== 00:09:05.790 Total : 14377.50 56.16 0.00 0.00 0.00 0.00 0.00 00:09:05.790 00:09:06.722 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO 
size: 4096) 00:09:06.722 Nvme0n1 : 5.00 14436.00 56.39 0.00 0.00 0.00 0.00 0.00 00:09:06.722 =================================================================================================================== 00:09:06.722 Total : 14436.00 56.39 0.00 0.00 0.00 0.00 0.00 00:09:06.722 00:09:07.652 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:07.652 Nvme0n1 : 6.00 14489.83 56.60 0.00 0.00 0.00 0.00 0.00 00:09:07.652 =================================================================================================================== 00:09:07.652 Total : 14489.83 56.60 0.00 0.00 0.00 0.00 0.00 00:09:07.653 00:09:09.024 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:09.024 Nvme0n1 : 7.00 14534.14 56.77 0.00 0.00 0.00 0.00 0.00 00:09:09.024 =================================================================================================================== 00:09:09.024 Total : 14534.14 56.77 0.00 0.00 0.00 0.00 0.00 00:09:09.024 00:09:09.957 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:09.957 Nvme0n1 : 8.00 14561.00 56.88 0.00 0.00 0.00 0.00 0.00 00:09:09.957 =================================================================================================================== 00:09:09.957 Total : 14561.00 56.88 0.00 0.00 0.00 0.00 0.00 00:09:09.957 00:09:10.891 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:10.891 Nvme0n1 : 9.00 14595.22 57.01 0.00 0.00 0.00 0.00 0.00 00:09:10.891 =================================================================================================================== 00:09:10.891 Total : 14595.22 57.01 0.00 0.00 0.00 0.00 0.00 00:09:10.891 00:09:11.872 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:11.872 Nvme0n1 : 10.00 14611.20 57.08 0.00 0.00 0.00 0.00 0.00 00:09:11.872 =================================================================================================================== 00:09:11.872 Total : 14611.20 57.08 0.00 0.00 0.00 0.00 0.00 00:09:11.872 00:09:11.872 00:09:11.872 Latency(us) 00:09:11.872 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:11.872 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:11.872 Nvme0n1 : 10.01 14615.02 57.09 0.00 0.00 8753.14 4660.34 17379.18 00:09:11.872 =================================================================================================================== 00:09:11.872 Total : 14615.02 57.09 0.00 0.00 8753.14 4660.34 17379.18 00:09:11.872 0 00:09:11.872 09:57:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 346232 00:09:11.872 09:57:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@950 -- # '[' -z 346232 ']' 00:09:11.872 09:57:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # kill -0 346232 00:09:11.872 09:57:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # uname 00:09:11.872 09:57:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:11.872 09:57:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 346232 00:09:11.872 09:57:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:09:11.872 09:57:56 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:09:11.872 09:57:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 346232' 00:09:11.872 killing process with pid 346232 00:09:11.872 09:57:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@969 -- # kill 346232 00:09:11.872 Received shutdown signal, test time was about 10.000000 seconds 00:09:11.872 00:09:11.872 Latency(us) 00:09:11.872 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:11.872 =================================================================================================================== 00:09:11.872 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:11.872 09:57:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@974 -- # wait 346232 00:09:12.131 09:57:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:12.389 09:57:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:12.954 09:57:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4bdf8bb3-9f17-4a5d-934a-506fb49df4cc 00:09:12.954 09:57:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:09:13.212 09:57:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:09:13.212 09:57:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:09:13.212 09:57:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:13.470 [2024-07-25 09:57:58.553869] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:09:13.470 09:57:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4bdf8bb3-9f17-4a5d-934a-506fb49df4cc 00:09:13.470 09:57:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # local es=0 00:09:13.470 09:57:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4bdf8bb3-9f17-4a5d-934a-506fb49df4cc 00:09:13.470 09:57:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:13.470 09:57:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:13.470 09:57:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -t 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:13.470 09:57:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:13.471 09:57:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:13.471 09:57:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:13.471 09:57:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:13.471 09:57:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:09:13.471 09:57:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4bdf8bb3-9f17-4a5d-934a-506fb49df4cc 00:09:13.728 request: 00:09:13.728 { 00:09:13.728 "uuid": "4bdf8bb3-9f17-4a5d-934a-506fb49df4cc", 00:09:13.728 "method": "bdev_lvol_get_lvstores", 00:09:13.728 "req_id": 1 00:09:13.728 } 00:09:13.728 Got JSON-RPC error response 00:09:13.728 response: 00:09:13.728 { 00:09:13.728 "code": -19, 00:09:13.728 "message": "No such device" 00:09:13.728 } 00:09:13.728 09:57:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # es=1 00:09:13.728 09:57:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:13.728 09:57:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:13.728 09:57:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:13.986 09:57:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:14.244 aio_bdev 00:09:14.244 09:57:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 0c5148f1-86a6-47bc-9c7d-89e763678e51 00:09:14.244 09:57:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local bdev_name=0c5148f1-86a6-47bc-9c7d-89e763678e51 00:09:14.244 09:57:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:14.244 09:57:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # local i 00:09:14.244 09:57:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:14.244 09:57:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:14.244 09:57:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:14.502 09:57:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_get_bdevs -b 0c5148f1-86a6-47bc-9c7d-89e763678e51 -t 2000 00:09:14.760 [ 00:09:14.760 { 00:09:14.760 "name": "0c5148f1-86a6-47bc-9c7d-89e763678e51", 00:09:14.760 "aliases": [ 00:09:14.760 "lvs/lvol" 00:09:14.760 ], 00:09:14.760 "product_name": "Logical Volume", 00:09:14.760 "block_size": 4096, 00:09:14.760 "num_blocks": 38912, 00:09:14.760 "uuid": "0c5148f1-86a6-47bc-9c7d-89e763678e51", 00:09:14.760 "assigned_rate_limits": { 00:09:14.760 "rw_ios_per_sec": 0, 00:09:14.760 "rw_mbytes_per_sec": 0, 00:09:14.760 "r_mbytes_per_sec": 0, 00:09:14.760 "w_mbytes_per_sec": 0 00:09:14.760 }, 00:09:14.760 "claimed": false, 00:09:14.760 "zoned": false, 00:09:14.760 "supported_io_types": { 00:09:14.760 "read": true, 00:09:14.760 "write": true, 00:09:14.760 "unmap": true, 00:09:14.760 "flush": false, 00:09:14.760 "reset": true, 00:09:14.760 "nvme_admin": false, 00:09:14.760 "nvme_io": false, 00:09:14.760 "nvme_io_md": false, 00:09:14.760 "write_zeroes": true, 00:09:14.760 "zcopy": false, 00:09:14.760 "get_zone_info": false, 00:09:14.760 "zone_management": false, 00:09:14.760 "zone_append": false, 00:09:14.760 "compare": false, 00:09:14.760 "compare_and_write": false, 00:09:14.760 "abort": false, 00:09:14.760 "seek_hole": true, 00:09:14.760 "seek_data": true, 00:09:14.760 "copy": false, 00:09:14.760 "nvme_iov_md": false 00:09:14.760 }, 00:09:14.760 "driver_specific": { 00:09:14.760 "lvol": { 00:09:14.760 "lvol_store_uuid": "4bdf8bb3-9f17-4a5d-934a-506fb49df4cc", 00:09:14.760 "base_bdev": "aio_bdev", 00:09:14.760 "thin_provision": false, 00:09:14.760 "num_allocated_clusters": 38, 00:09:14.760 "snapshot": false, 00:09:14.760 "clone": false, 00:09:14.760 "esnap_clone": false 00:09:14.760 } 00:09:14.760 } 00:09:14.760 } 00:09:14.760 ] 00:09:14.760 09:57:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@907 -- # return 0 00:09:14.760 09:57:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:09:14.760 09:57:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4bdf8bb3-9f17-4a5d-934a-506fb49df4cc 00:09:15.018 09:58:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:09:15.018 09:58:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 4bdf8bb3-9f17-4a5d-934a-506fb49df4cc 00:09:15.018 09:58:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:09:15.276 09:58:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:09:15.276 09:58:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 0c5148f1-86a6-47bc-9c7d-89e763678e51 00:09:15.534 09:58:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 4bdf8bb3-9f17-4a5d-934a-506fb49df4cc 00:09:16.100 09:58:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:16.665 09:58:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:16.665 00:09:16.665 real 0m19.816s 00:09:16.665 user 0m19.763s 00:09:16.665 sys 0m2.202s 00:09:16.665 09:58:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:16.665 09:58:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:09:16.665 ************************************ 00:09:16.665 END TEST lvs_grow_clean 00:09:16.665 ************************************ 00:09:16.665 09:58:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:09:16.665 09:58:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:16.665 09:58:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:16.665 09:58:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:16.665 ************************************ 00:09:16.665 START TEST lvs_grow_dirty 00:09:16.665 ************************************ 00:09:16.665 09:58:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1125 -- # lvs_grow dirty 00:09:16.665 09:58:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:09:16.665 09:58:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:09:16.665 09:58:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:09:16.665 09:58:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:09:16.665 09:58:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:09:16.665 09:58:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:09:16.665 09:58:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:16.665 09:58:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:16.665 09:58:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:16.922 09:58:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:09:16.922 09:58:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:09:17.180 09:58:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # 
lvs=1d5fb325-f025-436a-8121-dc5ffa72cace 00:09:17.180 09:58:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1d5fb325-f025-436a-8121-dc5ffa72cace 00:09:17.180 09:58:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:09:17.746 09:58:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:09:17.746 09:58:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:09:17.746 09:58:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 1d5fb325-f025-436a-8121-dc5ffa72cace lvol 150 00:09:17.746 09:58:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=dcef51f9-16c6-4431-922e-17bc687fc67d 00:09:17.746 09:58:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:17.746 09:58:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:09:18.312 [2024-07-25 09:58:03.177077] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:09:18.312 [2024-07-25 09:58:03.177179] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:09:18.312 true 00:09:18.312 09:58:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1d5fb325-f025-436a-8121-dc5ffa72cace 00:09:18.312 09:58:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:09:18.569 09:58:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:09:18.569 09:58:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:18.827 09:58:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 dcef51f9-16c6-4431-922e-17bc687fc67d 00:09:19.085 09:58:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:09:19.343 [2024-07-25 09:58:04.348649] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:19.344 09:58:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 
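The dirty variant now repeats the bdevperf phase: a second SPDK process acts as the NVMe/TCP initiator while the target keeps running inside the namespace. Condensed from the commands in this run; the flag glosses are inferred from bdevperf usage rather than stated anywhere in the log:

    # -z holds bdevperf idle until perform_tests arrives on its RPC socket;
    # -m 0x2 keeps it on core 1, away from the target on core 0
    .../build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 \
        -o 4096 -q 128 -w randwrite -t 10 -S 1 -z
    # connect to the exported subsystem; this surfaces bdev Nvme0n1
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
    # start the 10 s randwrite run: 4 KiB I/Os at queue depth 128, stats every second
    .../examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests

As in the clean run, bdev_lvol_grow_lvstore is sent to the target about two seconds into the ten-second run, so the steady ~14k IOPS in the per-second table doubles as a check that growing the lvstore does not disturb in-flight I/O.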
00:09:19.602 09:58:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=348550 00:09:19.602 09:58:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:09:19.602 09:58:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:19.602 09:58:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 348550 /var/tmp/bdevperf.sock 00:09:19.602 09:58:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 348550 ']' 00:09:19.602 09:58:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:19.602 09:58:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:19.602 09:58:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:19.602 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:19.602 09:58:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:19.602 09:58:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:19.602 [2024-07-25 09:58:04.700198] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:09:19.602 [2024-07-25 09:58:04.700293] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid348550 ] 00:09:19.602 EAL: No free 2048 kB hugepages reported on node 1 00:09:19.602 [2024-07-25 09:58:04.768029] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:19.860 [2024-07-25 09:58:04.892188] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:20.118 09:58:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:20.118 09:58:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:09:20.118 09:58:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:09:20.375 Nvme0n1 00:09:20.375 09:58:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:09:20.942 [ 00:09:20.942 { 00:09:20.942 "name": "Nvme0n1", 00:09:20.942 "aliases": [ 00:09:20.942 "dcef51f9-16c6-4431-922e-17bc687fc67d" 00:09:20.942 ], 00:09:20.942 "product_name": "NVMe disk", 00:09:20.942 "block_size": 4096, 00:09:20.942 "num_blocks": 38912, 00:09:20.942 "uuid": "dcef51f9-16c6-4431-922e-17bc687fc67d", 00:09:20.942 "assigned_rate_limits": { 00:09:20.942 "rw_ios_per_sec": 0, 00:09:20.942 "rw_mbytes_per_sec": 0, 00:09:20.942 "r_mbytes_per_sec": 0, 00:09:20.942 "w_mbytes_per_sec": 0 00:09:20.942 }, 00:09:20.942 "claimed": false, 00:09:20.942 "zoned": false, 00:09:20.942 "supported_io_types": { 00:09:20.942 "read": true, 00:09:20.942 "write": true, 00:09:20.942 "unmap": true, 00:09:20.942 "flush": true, 00:09:20.942 "reset": true, 00:09:20.942 "nvme_admin": true, 00:09:20.942 "nvme_io": true, 00:09:20.942 "nvme_io_md": false, 00:09:20.942 "write_zeroes": true, 00:09:20.942 "zcopy": false, 00:09:20.942 "get_zone_info": false, 00:09:20.942 "zone_management": false, 00:09:20.942 "zone_append": false, 00:09:20.942 "compare": true, 00:09:20.942 "compare_and_write": true, 00:09:20.942 "abort": true, 00:09:20.942 "seek_hole": false, 00:09:20.942 "seek_data": false, 00:09:20.942 "copy": true, 00:09:20.942 "nvme_iov_md": false 00:09:20.942 }, 00:09:20.942 "memory_domains": [ 00:09:20.942 { 00:09:20.942 "dma_device_id": "system", 00:09:20.942 "dma_device_type": 1 00:09:20.942 } 00:09:20.942 ], 00:09:20.942 "driver_specific": { 00:09:20.942 "nvme": [ 00:09:20.942 { 00:09:20.942 "trid": { 00:09:20.942 "trtype": "TCP", 00:09:20.942 "adrfam": "IPv4", 00:09:20.942 "traddr": "10.0.0.2", 00:09:20.942 "trsvcid": "4420", 00:09:20.942 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:09:20.942 }, 00:09:20.942 "ctrlr_data": { 00:09:20.942 "cntlid": 1, 00:09:20.942 "vendor_id": "0x8086", 00:09:20.942 "model_number": "SPDK bdev Controller", 00:09:20.942 "serial_number": "SPDK0", 00:09:20.942 "firmware_revision": "24.09", 00:09:20.942 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:20.942 "oacs": { 00:09:20.942 "security": 0, 00:09:20.942 "format": 0, 00:09:20.942 "firmware": 0, 00:09:20.942 "ns_manage": 0 00:09:20.942 }, 00:09:20.942 
"multi_ctrlr": true, 00:09:20.942 "ana_reporting": false 00:09:20.942 }, 00:09:20.942 "vs": { 00:09:20.942 "nvme_version": "1.3" 00:09:20.942 }, 00:09:20.942 "ns_data": { 00:09:20.942 "id": 1, 00:09:20.942 "can_share": true 00:09:20.942 } 00:09:20.942 } 00:09:20.942 ], 00:09:20.942 "mp_policy": "active_passive" 00:09:20.942 } 00:09:20.942 } 00:09:20.942 ] 00:09:20.942 09:58:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=348694 00:09:20.942 09:58:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:09:20.942 09:58:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:20.942 Running I/O for 10 seconds... 00:09:21.876 Latency(us) 00:09:21.876 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:21.876 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:21.876 Nvme0n1 : 1.00 13922.00 54.38 0.00 0.00 0.00 0.00 0.00 00:09:21.876 =================================================================================================================== 00:09:21.876 Total : 13922.00 54.38 0.00 0.00 0.00 0.00 0.00 00:09:21.876 00:09:22.809 09:58:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 1d5fb325-f025-436a-8121-dc5ffa72cace 00:09:22.809 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:22.809 Nvme0n1 : 2.00 14076.00 54.98 0.00 0.00 0.00 0.00 0.00 00:09:22.809 =================================================================================================================== 00:09:22.809 Total : 14076.00 54.98 0.00 0.00 0.00 0.00 0.00 00:09:22.809 00:09:23.067 true 00:09:23.067 09:58:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1d5fb325-f025-436a-8121-dc5ffa72cace 00:09:23.067 09:58:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:09:23.633 09:58:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:09:23.633 09:58:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:09:23.633 09:58:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 348694 00:09:23.891 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:23.891 Nvme0n1 : 3.00 14201.00 55.47 0.00 0.00 0.00 0.00 0.00 00:09:23.891 =================================================================================================================== 00:09:23.891 Total : 14201.00 55.47 0.00 0.00 0.00 0.00 0.00 00:09:23.891 00:09:24.826 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:24.826 Nvme0n1 : 4.00 14230.25 55.59 0.00 0.00 0.00 0.00 0.00 00:09:24.826 =================================================================================================================== 00:09:24.826 Total : 14230.25 55.59 0.00 0.00 0.00 0.00 0.00 00:09:24.826 00:09:26.200 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO 
size: 4096) 00:09:26.200 Nvme0n1 : 5.00 14281.20 55.79 0.00 0.00 0.00 0.00 0.00 00:09:26.200 =================================================================================================================== 00:09:26.200 Total : 14281.20 55.79 0.00 0.00 0.00 0.00 0.00 00:09:26.200 00:09:27.142 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:27.142 Nvme0n1 : 6.00 14319.33 55.93 0.00 0.00 0.00 0.00 0.00 00:09:27.142 =================================================================================================================== 00:09:27.142 Total : 14319.33 55.93 0.00 0.00 0.00 0.00 0.00 00:09:27.142 00:09:28.079 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:28.079 Nvme0n1 : 7.00 14363.00 56.11 0.00 0.00 0.00 0.00 0.00 00:09:28.079 =================================================================================================================== 00:09:28.079 Total : 14363.00 56.11 0.00 0.00 0.00 0.00 0.00 00:09:28.079 00:09:29.019 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:29.019 Nvme0n1 : 8.00 14403.62 56.26 0.00 0.00 0.00 0.00 0.00 00:09:29.019 =================================================================================================================== 00:09:29.019 Total : 14403.62 56.26 0.00 0.00 0.00 0.00 0.00 00:09:29.019 00:09:29.954 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:29.954 Nvme0n1 : 9.00 14421.44 56.33 0.00 0.00 0.00 0.00 0.00 00:09:29.954 =================================================================================================================== 00:09:29.954 Total : 14421.44 56.33 0.00 0.00 0.00 0.00 0.00 00:09:29.954 00:09:30.890 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:30.890 Nvme0n1 : 10.00 14436.80 56.39 0.00 0.00 0.00 0.00 0.00 00:09:30.890 =================================================================================================================== 00:09:30.890 Total : 14436.80 56.39 0.00 0.00 0.00 0.00 0.00 00:09:30.890 00:09:30.890 00:09:30.890 Latency(us) 00:09:30.890 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:30.890 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:30.890 Nvme0n1 : 10.01 14439.06 56.40 0.00 0.00 8859.82 5291.43 19903.53 00:09:30.890 =================================================================================================================== 00:09:30.890 Total : 14439.06 56.40 0.00 0.00 8859.82 5291.43 19903.53 00:09:30.890 0 00:09:30.890 09:58:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 348550 00:09:30.890 09:58:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@950 -- # '[' -z 348550 ']' 00:09:30.890 09:58:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # kill -0 348550 00:09:30.890 09:58:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # uname 00:09:30.890 09:58:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:30.890 09:58:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 348550 00:09:30.890 09:58:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:09:30.890 09:58:16 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:09:30.890 09:58:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@968 -- # echo 'killing process with pid 348550' 00:09:30.890 killing process with pid 348550 00:09:30.890 09:58:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@969 -- # kill 348550 00:09:30.890 Received shutdown signal, test time was about 10.000000 seconds 00:09:30.890 00:09:30.890 Latency(us) 00:09:30.890 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:30.890 =================================================================================================================== 00:09:30.890 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:30.890 09:58:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@974 -- # wait 348550 00:09:31.457 09:58:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:31.715 09:58:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:32.282 09:58:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1d5fb325-f025-436a-8121-dc5ffa72cace 00:09:32.282 09:58:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:09:32.541 09:58:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:09:32.541 09:58:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:09:32.541 09:58:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 345667 00:09:32.541 09:58:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 345667 00:09:32.541 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 345667 Killed "${NVMF_APP[@]}" "$@" 00:09:32.541 09:58:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:09:32.541 09:58:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:09:32.541 09:58:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:32.541 09:58:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:32.541 09:58:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:32.541 09:58:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=350031 00:09:32.541 09:58:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:09:32.541 09:58:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # 
waitforlisten 350031 00:09:32.541 09:58:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 350031 ']' 00:09:32.541 09:58:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:32.541 09:58:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:32.541 09:58:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:32.541 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:32.541 09:58:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:32.541 09:58:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:32.800 [2024-07-25 09:58:17.741525] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:09:32.800 [2024-07-25 09:58:17.741621] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:32.800 EAL: No free 2048 kB hugepages reported on node 1 00:09:32.800 [2024-07-25 09:58:17.826601] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:32.800 [2024-07-25 09:58:17.952739] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:32.800 [2024-07-25 09:58:17.952798] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:32.800 [2024-07-25 09:58:17.952815] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:32.800 [2024-07-25 09:58:17.952829] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:32.800 [2024-07-25 09:58:17.952841] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:09:32.800 [2024-07-25 09:58:17.952882] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:33.059 09:58:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:33.059 09:58:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:09:33.059 09:58:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:33.059 09:58:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:33.059 09:58:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:33.059 09:58:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:33.059 09:58:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:33.625 [2024-07-25 09:58:18.581224] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:09:33.625 [2024-07-25 09:58:18.581378] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:09:33.625 [2024-07-25 09:58:18.581447] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:09:33.625 09:58:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:09:33.625 09:58:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev dcef51f9-16c6-4431-922e-17bc687fc67d 00:09:33.625 09:58:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=dcef51f9-16c6-4431-922e-17bc687fc67d 00:09:33.625 09:58:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:33.625 09:58:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:09:33.625 09:58:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:33.625 09:58:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:33.625 09:58:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:33.883 09:58:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b dcef51f9-16c6-4431-922e-17bc687fc67d -t 2000 00:09:34.141 [ 00:09:34.141 { 00:09:34.141 "name": "dcef51f9-16c6-4431-922e-17bc687fc67d", 00:09:34.141 "aliases": [ 00:09:34.141 "lvs/lvol" 00:09:34.141 ], 00:09:34.141 "product_name": "Logical Volume", 00:09:34.141 "block_size": 4096, 00:09:34.141 "num_blocks": 38912, 00:09:34.141 "uuid": "dcef51f9-16c6-4431-922e-17bc687fc67d", 00:09:34.141 "assigned_rate_limits": { 00:09:34.141 "rw_ios_per_sec": 0, 00:09:34.141 "rw_mbytes_per_sec": 0, 00:09:34.141 "r_mbytes_per_sec": 0, 00:09:34.141 "w_mbytes_per_sec": 0 00:09:34.141 }, 00:09:34.141 "claimed": false, 00:09:34.141 "zoned": false, 
00:09:34.141 "supported_io_types": { 00:09:34.141 "read": true, 00:09:34.141 "write": true, 00:09:34.141 "unmap": true, 00:09:34.141 "flush": false, 00:09:34.141 "reset": true, 00:09:34.141 "nvme_admin": false, 00:09:34.141 "nvme_io": false, 00:09:34.141 "nvme_io_md": false, 00:09:34.141 "write_zeroes": true, 00:09:34.141 "zcopy": false, 00:09:34.141 "get_zone_info": false, 00:09:34.141 "zone_management": false, 00:09:34.141 "zone_append": false, 00:09:34.142 "compare": false, 00:09:34.142 "compare_and_write": false, 00:09:34.142 "abort": false, 00:09:34.142 "seek_hole": true, 00:09:34.142 "seek_data": true, 00:09:34.142 "copy": false, 00:09:34.142 "nvme_iov_md": false 00:09:34.142 }, 00:09:34.142 "driver_specific": { 00:09:34.142 "lvol": { 00:09:34.142 "lvol_store_uuid": "1d5fb325-f025-436a-8121-dc5ffa72cace", 00:09:34.142 "base_bdev": "aio_bdev", 00:09:34.142 "thin_provision": false, 00:09:34.142 "num_allocated_clusters": 38, 00:09:34.142 "snapshot": false, 00:09:34.142 "clone": false, 00:09:34.142 "esnap_clone": false 00:09:34.142 } 00:09:34.142 } 00:09:34.142 } 00:09:34.142 ] 00:09:34.142 09:58:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:09:34.142 09:58:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1d5fb325-f025-436a-8121-dc5ffa72cace 00:09:34.142 09:58:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:09:34.400 09:58:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:09:34.400 09:58:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1d5fb325-f025-436a-8121-dc5ffa72cace 00:09:34.400 09:58:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:09:34.658 09:58:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:09:34.658 09:58:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:34.916 [2024-07-25 09:58:20.046499] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:09:34.916 09:58:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1d5fb325-f025-436a-8121-dc5ffa72cace 00:09:34.916 09:58:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # local es=0 00:09:34.916 09:58:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1d5fb325-f025-436a-8121-dc5ffa72cace 00:09:34.916 09:58:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:34.916 09:58:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t 
"$arg")" in 00:09:34.916 09:58:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:35.174 09:58:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:35.174 09:58:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:35.174 09:58:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:35.174 09:58:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:35.174 09:58:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:09:35.174 09:58:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1d5fb325-f025-436a-8121-dc5ffa72cace 00:09:35.432 request: 00:09:35.432 { 00:09:35.432 "uuid": "1d5fb325-f025-436a-8121-dc5ffa72cace", 00:09:35.432 "method": "bdev_lvol_get_lvstores", 00:09:35.432 "req_id": 1 00:09:35.432 } 00:09:35.432 Got JSON-RPC error response 00:09:35.432 response: 00:09:35.432 { 00:09:35.432 "code": -19, 00:09:35.432 "message": "No such device" 00:09:35.432 } 00:09:35.432 09:58:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # es=1 00:09:35.432 09:58:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:35.432 09:58:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:35.432 09:58:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:35.432 09:58:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:35.690 aio_bdev 00:09:35.690 09:58:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev dcef51f9-16c6-4431-922e-17bc687fc67d 00:09:35.690 09:58:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=dcef51f9-16c6-4431-922e-17bc687fc67d 00:09:35.690 09:58:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:09:35.690 09:58:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:09:35.690 09:58:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:09:35.690 09:58:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:09:35.690 09:58:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:36.254 09:58:21 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b dcef51f9-16c6-4431-922e-17bc687fc67d -t 2000 00:09:36.820 [ 00:09:36.820 { 00:09:36.820 "name": "dcef51f9-16c6-4431-922e-17bc687fc67d", 00:09:36.820 "aliases": [ 00:09:36.820 "lvs/lvol" 00:09:36.820 ], 00:09:36.820 "product_name": "Logical Volume", 00:09:36.820 "block_size": 4096, 00:09:36.820 "num_blocks": 38912, 00:09:36.820 "uuid": "dcef51f9-16c6-4431-922e-17bc687fc67d", 00:09:36.820 "assigned_rate_limits": { 00:09:36.820 "rw_ios_per_sec": 0, 00:09:36.820 "rw_mbytes_per_sec": 0, 00:09:36.820 "r_mbytes_per_sec": 0, 00:09:36.820 "w_mbytes_per_sec": 0 00:09:36.820 }, 00:09:36.820 "claimed": false, 00:09:36.820 "zoned": false, 00:09:36.820 "supported_io_types": { 00:09:36.820 "read": true, 00:09:36.820 "write": true, 00:09:36.820 "unmap": true, 00:09:36.820 "flush": false, 00:09:36.820 "reset": true, 00:09:36.820 "nvme_admin": false, 00:09:36.820 "nvme_io": false, 00:09:36.820 "nvme_io_md": false, 00:09:36.820 "write_zeroes": true, 00:09:36.820 "zcopy": false, 00:09:36.820 "get_zone_info": false, 00:09:36.820 "zone_management": false, 00:09:36.820 "zone_append": false, 00:09:36.820 "compare": false, 00:09:36.821 "compare_and_write": false, 00:09:36.821 "abort": false, 00:09:36.821 "seek_hole": true, 00:09:36.821 "seek_data": true, 00:09:36.821 "copy": false, 00:09:36.821 "nvme_iov_md": false 00:09:36.821 }, 00:09:36.821 "driver_specific": { 00:09:36.821 "lvol": { 00:09:36.821 "lvol_store_uuid": "1d5fb325-f025-436a-8121-dc5ffa72cace", 00:09:36.821 "base_bdev": "aio_bdev", 00:09:36.821 "thin_provision": false, 00:09:36.821 "num_allocated_clusters": 38, 00:09:36.821 "snapshot": false, 00:09:36.821 "clone": false, 00:09:36.821 "esnap_clone": false 00:09:36.821 } 00:09:36.821 } 00:09:36.821 } 00:09:36.821 ] 00:09:36.821 09:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:09:36.821 09:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1d5fb325-f025-436a-8121-dc5ffa72cace 00:09:36.821 09:58:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:09:37.079 09:58:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:09:37.079 09:58:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 1d5fb325-f025-436a-8121-dc5ffa72cace 00:09:37.079 09:58:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:09:37.644 09:58:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:09:37.644 09:58:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete dcef51f9-16c6-4431-922e-17bc687fc67d 00:09:37.902 09:58:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 1d5fb325-f025-436a-8121-dc5ffa72cace 
00:09:38.160 09:58:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:38.417 09:58:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:38.417 00:09:38.417 real 0m21.783s 00:09:38.417 user 0m54.601s 00:09:38.417 sys 0m5.471s 00:09:38.417 09:58:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:38.417 09:58:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:38.417 ************************************ 00:09:38.417 END TEST lvs_grow_dirty 00:09:38.417 ************************************ 00:09:38.417 09:58:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:09:38.417 09:58:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # type=--id 00:09:38.417 09:58:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@809 -- # id=0 00:09:38.417 09:58:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:09:38.417 09:58:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:09:38.417 09:58:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:09:38.417 09:58:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:09:38.417 09:58:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # for n in $shm_files 00:09:38.417 09:58:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:09:38.417 nvmf_trace.0 00:09:38.417 09:58:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@823 -- # return 0 00:09:38.417 09:58:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:09:38.417 09:58:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:38.417 09:58:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:09:38.417 09:58:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:38.417 09:58:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:09:38.417 09:58:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:38.417 09:58:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:38.417 rmmod nvme_tcp 00:09:38.417 rmmod nvme_fabrics 00:09:38.417 rmmod nvme_keyring 00:09:38.674 09:58:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:38.674 09:58:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:09:38.674 09:58:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:09:38.674 09:58:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 350031 ']' 00:09:38.674 09:58:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 350031 00:09:38.674 
09:58:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@950 -- # '[' -z 350031 ']' 00:09:38.675 09:58:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # kill -0 350031 00:09:38.675 09:58:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # uname 00:09:38.675 09:58:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:38.675 09:58:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 350031 00:09:38.675 09:58:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:38.675 09:58:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:38.675 09:58:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@968 -- # echo 'killing process with pid 350031' 00:09:38.675 killing process with pid 350031 00:09:38.675 09:58:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@969 -- # kill 350031 00:09:38.675 09:58:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@974 -- # wait 350031 00:09:38.933 09:58:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:38.933 09:58:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:38.933 09:58:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:38.933 09:58:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:38.933 09:58:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:38.933 09:58:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:38.933 09:58:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:38.933 09:58:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:40.866 09:58:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:40.866 00:09:40.866 real 0m47.752s 00:09:40.866 user 1m21.935s 00:09:40.866 sys 0m10.050s 00:09:40.866 09:58:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:40.866 09:58:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:40.866 ************************************ 00:09:40.866 END TEST nvmf_lvs_grow 00:09:40.866 ************************************ 00:09:40.866 09:58:26 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:09:40.866 09:58:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:40.866 09:58:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:40.866 09:58:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:41.125 ************************************ 00:09:41.125 START TEST nvmf_bdev_io_wait 00:09:41.125 ************************************ 00:09:41.125 09:58:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1125 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:09:41.125 * Looking for test storage... 00:09:41.125 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:41.125 09:58:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:41.125 09:58:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:09:41.125 09:58:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:41.126 09:58:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:41.126 09:58:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:41.126 09:58:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:41.126 09:58:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:41.126 09:58:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:41.126 09:58:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:41.126 09:58:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:41.126 09:58:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:41.126 09:58:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:41.126 09:58:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:09:41.126 09:58:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:09:41.126 09:58:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:41.126 09:58:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:41.126 09:58:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:41.126 09:58:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:41.126 09:58:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:41.126 09:58:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:41.126 09:58:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:41.126 09:58:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:41.126 09:58:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:41.126 09:58:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:41.126 09:58:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:41.126 09:58:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:09:41.126 09:58:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:41.126 09:58:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:09:41.126 09:58:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:41.126 09:58:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:41.126 09:58:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:41.126 09:58:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:41.126 09:58:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:41.126 09:58:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:41.126 
09:58:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:41.126 09:58:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:41.126 09:58:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:41.126 09:58:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:41.126 09:58:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:09:41.126 09:58:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:41.126 09:58:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:41.126 09:58:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:41.126 09:58:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:41.126 09:58:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:41.126 09:58:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:41.126 09:58:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:41.126 09:58:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:41.126 09:58:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:41.126 09:58:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:41.126 09:58:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@285 -- # xtrace_disable 00:09:41.126 09:58:26 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:43.658 09:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:43.658 09:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # pci_devs=() 00:09:43.658 09:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:43.658 09:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:43.658 09:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:43.658 09:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:43.658 09:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:43.658 09:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # net_devs=() 00:09:43.658 09:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:43.658 09:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # e810=() 00:09:43.658 09:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # local -ga e810 00:09:43.658 09:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # x722=() 00:09:43.658 09:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # local -ga x722 00:09:43.658 09:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # mlx=() 00:09:43.658 09:58:28 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # local -ga mlx 00:09:43.658 09:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:43.658 09:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:43.658 09:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:43.658 09:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:43.658 09:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:43.658 09:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:43.658 09:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:43.658 09:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:43.658 09:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:43.658 09:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:43.658 09:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:43.658 09:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:43.658 09:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:43.658 09:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:43.658 09:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:43.658 09:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:43.659 09:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:43.659 09:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:43.659 09:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:09:43.659 Found 0000:84:00.0 (0x8086 - 0x159b) 00:09:43.659 09:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:43.659 09:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:43.659 09:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:43.659 09:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:43.659 09:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:43.659 09:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:43.659 09:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:09:43.659 Found 0000:84:00.1 (0x8086 - 0x159b) 00:09:43.659 09:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # 
[[ ice == unknown ]] 00:09:43.659 09:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:43.659 09:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:43.659 09:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:43.659 09:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:43.659 09:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:43.659 09:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:43.659 09:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:43.659 09:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:43.659 09:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:43.659 09:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:43.659 09:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:43.659 09:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:43.659 09:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:43.659 09:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:43.659 09:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:09:43.659 Found net devices under 0000:84:00.0: cvl_0_0 00:09:43.659 09:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:43.659 09:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:43.659 09:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:43.659 09:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:43.659 09:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:43.659 09:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:43.659 09:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:43.659 09:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:43.659 09:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:09:43.659 Found net devices under 0000:84:00.1: cvl_0_1 00:09:43.659 09:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:43.659 09:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:43.659 09:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # is_hw=yes 00:09:43.659 09:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:43.659 09:58:28 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:43.659 09:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:43.659 09:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:43.659 09:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:43.659 09:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:43.659 09:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:43.659 09:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:43.659 09:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:43.659 09:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:43.659 09:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:43.659 09:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:43.659 09:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:43.659 09:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:43.659 09:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:43.659 09:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:43.659 09:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:43.659 09:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:43.659 09:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:43.659 09:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:43.659 09:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:43.659 09:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:43.659 09:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:43.659 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:43.659 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.247 ms 00:09:43.659 00:09:43.659 --- 10.0.0.2 ping statistics --- 00:09:43.659 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:43.659 rtt min/avg/max/mdev = 0.247/0.247/0.247/0.000 ms 00:09:43.659 09:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:43.659 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:43.659 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.181 ms 00:09:43.659 00:09:43.659 --- 10.0.0.1 ping statistics --- 00:09:43.659 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:43.659 rtt min/avg/max/mdev = 0.181/0.181/0.181/0.000 ms 00:09:43.659 09:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:43.659 09:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # return 0 00:09:43.659 09:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:43.659 09:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:43.659 09:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:43.659 09:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:43.659 09:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:43.659 09:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:43.659 09:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:43.918 09:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:09:43.918 09:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:43.918 09:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:43.918 09:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:43.918 09:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=352831 00:09:43.918 09:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 352831 00:09:43.918 09:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:09:43.918 09:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@831 -- # '[' -z 352831 ']' 00:09:43.918 09:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:43.918 09:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:43.918 09:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:43.918 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:43.918 09:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:43.918 09:58:28 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:43.918 [2024-07-25 09:58:28.911142] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:09:43.918 [2024-07-25 09:58:28.911248] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:43.918 EAL: No free 2048 kB hugepages reported on node 1 00:09:43.918 [2024-07-25 09:58:28.996093] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:44.176 [2024-07-25 09:58:29.124087] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:44.176 [2024-07-25 09:58:29.124148] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:44.176 [2024-07-25 09:58:29.124165] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:44.176 [2024-07-25 09:58:29.124178] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:44.176 [2024-07-25 09:58:29.124189] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:44.176 [2024-07-25 09:58:29.124272] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:44.176 [2024-07-25 09:58:29.124309] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:44.176 [2024-07-25 09:58:29.124360] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:44.176 [2024-07-25 09:58:29.124362] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:44.176 09:58:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:44.176 09:58:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # return 0 00:09:44.176 09:58:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:44.176 09:58:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:44.176 09:58:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:44.176 09:58:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:44.176 09:58:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:09:44.176 09:58:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.176 09:58:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:44.436 09:58:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.436 09:58:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:09:44.436 09:58:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.436 09:58:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:44.436 09:58:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.436 09:58:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:44.436 09:58:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.436 09:58:29 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:44.436 [2024-07-25 09:58:29.426448] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:44.436 09:58:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.436 09:58:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:44.436 09:58:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.436 09:58:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:44.436 Malloc0 00:09:44.436 09:58:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.436 09:58:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:44.436 09:58:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.436 09:58:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:44.436 09:58:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.436 09:58:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:44.436 09:58:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.436 09:58:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:44.436 09:58:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.436 09:58:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:44.436 09:58:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:44.436 09:58:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:44.436 [2024-07-25 09:58:29.492255] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:44.436 09:58:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:44.436 09:58:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=352867 00:09:44.436 09:58:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=352869 00:09:44.436 09:58:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:09:44.436 09:58:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:09:44.436 09:58:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:09:44.436 09:58:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:09:44.436 09:58:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:09:44.436 09:58:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:09:44.436 09:58:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:09:44.436 09:58:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=352871 00:09:44.436 09:58:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:09:44.436 { 00:09:44.436 "params": { 00:09:44.436 "name": "Nvme$subsystem", 00:09:44.436 "trtype": "$TEST_TRANSPORT", 00:09:44.436 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:44.436 "adrfam": "ipv4", 00:09:44.436 "trsvcid": "$NVMF_PORT", 00:09:44.436 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:44.436 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:44.436 "hdgst": ${hdgst:-false}, 00:09:44.436 "ddgst": ${ddgst:-false} 00:09:44.436 }, 00:09:44.436 "method": "bdev_nvme_attach_controller" 00:09:44.436 } 00:09:44.436 EOF 00:09:44.436 )") 00:09:44.436 09:58:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:09:44.436 09:58:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:09:44.436 09:58:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:09:44.436 09:58:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:09:44.436 { 00:09:44.436 "params": { 00:09:44.436 "name": "Nvme$subsystem", 00:09:44.436 "trtype": "$TEST_TRANSPORT", 00:09:44.436 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:44.436 "adrfam": "ipv4", 00:09:44.436 "trsvcid": "$NVMF_PORT", 00:09:44.436 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:44.436 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:44.436 "hdgst": ${hdgst:-false}, 00:09:44.436 "ddgst": ${ddgst:-false} 00:09:44.436 }, 00:09:44.436 "method": "bdev_nvme_attach_controller" 00:09:44.436 } 00:09:44.436 EOF 00:09:44.436 )") 00:09:44.436 09:58:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:09:44.436 09:58:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:09:44.436 09:58:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=352873 00:09:44.436 09:58:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:09:44.436 09:58:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:09:44.436 09:58:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:09:44.436 09:58:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:09:44.436 09:58:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:09:44.436 09:58:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:09:44.436 { 00:09:44.436 "params": { 00:09:44.436 "name": "Nvme$subsystem", 00:09:44.436 "trtype": "$TEST_TRANSPORT", 00:09:44.436 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:44.436 "adrfam": "ipv4", 00:09:44.436 "trsvcid": "$NVMF_PORT", 00:09:44.436 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:44.436 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:09:44.436 "hdgst": ${hdgst:-false}, 00:09:44.436 "ddgst": ${ddgst:-false} 00:09:44.436 }, 00:09:44.436 "method": "bdev_nvme_attach_controller" 00:09:44.436 } 00:09:44.436 EOF 00:09:44.436 )") 00:09:44.436 09:58:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:09:44.436 09:58:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:09:44.436 09:58:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:09:44.436 09:58:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:09:44.436 09:58:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:09:44.436 09:58:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:09:44.436 09:58:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:09:44.436 { 00:09:44.436 "params": { 00:09:44.436 "name": "Nvme$subsystem", 00:09:44.436 "trtype": "$TEST_TRANSPORT", 00:09:44.436 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:44.436 "adrfam": "ipv4", 00:09:44.436 "trsvcid": "$NVMF_PORT", 00:09:44.436 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:44.436 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:44.436 "hdgst": ${hdgst:-false}, 00:09:44.436 "ddgst": ${ddgst:-false} 00:09:44.436 }, 00:09:44.436 "method": "bdev_nvme_attach_controller" 00:09:44.436 } 00:09:44.436 EOF 00:09:44.436 )") 00:09:44.436 09:58:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:09:44.436 09:58:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 352867 00:09:44.436 09:58:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:09:44.436 09:58:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:09:44.437 09:58:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:09:44.437 09:58:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:09:44.437 09:58:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:09:44.437 09:58:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:09:44.437 "params": { 00:09:44.437 "name": "Nvme1", 00:09:44.437 "trtype": "tcp", 00:09:44.437 "traddr": "10.0.0.2", 00:09:44.437 "adrfam": "ipv4", 00:09:44.437 "trsvcid": "4420", 00:09:44.437 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:44.437 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:44.437 "hdgst": false, 00:09:44.437 "ddgst": false 00:09:44.437 }, 00:09:44.437 "method": "bdev_nvme_attach_controller" 00:09:44.437 }' 00:09:44.437 09:58:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
00:09:44.437 09:58:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:09:44.437 09:58:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:09:44.437 "params": { 00:09:44.437 "name": "Nvme1", 00:09:44.437 "trtype": "tcp", 00:09:44.437 "traddr": "10.0.0.2", 00:09:44.437 "adrfam": "ipv4", 00:09:44.437 "trsvcid": "4420", 00:09:44.437 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:44.437 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:44.437 "hdgst": false, 00:09:44.437 "ddgst": false 00:09:44.437 }, 00:09:44.437 "method": "bdev_nvme_attach_controller" 00:09:44.437 }'
00:09:44.437 09:58:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:09:44.437 09:58:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:09:44.437 "params": { 00:09:44.437 "name": "Nvme1", 00:09:44.437 "trtype": "tcp", 00:09:44.437 "traddr": "10.0.0.2", 00:09:44.437 "adrfam": "ipv4", 00:09:44.437 "trsvcid": "4420", 00:09:44.437 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:44.437 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:44.437 "hdgst": false, 00:09:44.437 "ddgst": false 00:09:44.437 }, 00:09:44.437 "method": "bdev_nvme_attach_controller" 00:09:44.437 }'
00:09:44.437 09:58:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:09:44.437 09:58:29 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:09:44.437 "params": { 00:09:44.437 "name": "Nvme1", 00:09:44.437 "trtype": "tcp", 00:09:44.437 "traddr": "10.0.0.2", 00:09:44.437 "adrfam": "ipv4", 00:09:44.437 "trsvcid": "4420", 00:09:44.437 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:44.437 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:44.437 "hdgst": false, 00:09:44.437 "ddgst": false 00:09:44.437 }, 00:09:44.437 "method": "bdev_nvme_attach_controller" 00:09:44.437 }'
[2024-07-25 09:58:29.545374] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:09:44.437 [2024-07-25 09:58:29.545374] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:09:44.437 [2024-07-25 09:58:29.545374] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization...
00:09:44.437 [2024-07-25 09:58:29.545482] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ]
00:09:44.437 [2024-07-25 09:58:29.545482] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ]
00:09:44.437 [2024-07-25 09:58:29.545483] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ]
00:09:44.437 [2024-07-25 09:58:29.545583] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization...
00:09:44.437 [2024-07-25 09:58:29.545657] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:09:44.695 EAL: No free 2048 kB hugepages reported on node 1 00:09:44.695 EAL: No free 2048 kB hugepages reported on node 1 00:09:44.695 [2024-07-25 09:58:29.744513] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:44.695 EAL: No free 2048 kB hugepages reported on node 1 00:09:44.695 [2024-07-25 09:58:29.845875] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:09:44.695 [2024-07-25 09:58:29.849374] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:44.953 EAL: No free 2048 kB hugepages reported on node 1 00:09:44.953 [2024-07-25 09:58:29.949817] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:09:44.953 [2024-07-25 09:58:29.956501] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:44.953 [2024-07-25 09:58:30.035706] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:44.953 [2024-07-25 09:58:30.058689] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:09:45.212 [2024-07-25 09:58:30.130274] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:09:45.212 Running I/O for 1 seconds... 00:09:45.212 Running I/O for 1 seconds... 00:09:45.212 Running I/O for 1 seconds... 00:09:45.469 Running I/O for 1 seconds... 00:09:46.034 00:09:46.034 Latency(us) 00:09:46.034 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:46.034 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:09:46.034 Nvme1n1 : 1.00 198573.48 775.68 0.00 0.00 642.11 263.96 867.75 00:09:46.034 =================================================================================================================== 00:09:46.034 Total : 198573.48 775.68 0.00 0.00 642.11 263.96 867.75 00:09:46.290 00:09:46.290 Latency(us) 00:09:46.290 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:46.290 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:09:46.290 Nvme1n1 : 1.01 9262.77 36.18 0.00 0.00 13754.31 6310.87 23301.69 00:09:46.290 =================================================================================================================== 00:09:46.290 Total : 9262.77 36.18 0.00 0.00 13754.31 6310.87 23301.69 00:09:46.290 00:09:46.290 Latency(us) 00:09:46.290 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:46.290 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:09:46.290 Nvme1n1 : 1.06 6012.75 23.49 0.00 0.00 20241.69 9417.77 60584.39 00:09:46.290 =================================================================================================================== 00:09:46.290 Total : 6012.75 23.49 0.00 0.00 20241.69 9417.77 60584.39 00:09:46.290 00:09:46.290 Latency(us) 00:09:46.290 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:46.290 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:09:46.290 Nvme1n1 : 1.01 6365.07 24.86 0.00 0.00 20047.72 4975.88 45438.29 00:09:46.290 =================================================================================================================== 00:09:46.290 Total : 6365.07 24.86 0.00 0.00 20047.72 4975.88 45438.29 00:09:46.547 09:58:31 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 352869 00:09:46.803 09:58:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 352871 00:09:46.803 09:58:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 352873 00:09:46.803 09:58:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:46.803 09:58:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:46.803 09:58:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:46.803 09:58:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:46.803 09:58:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:09:46.803 09:58:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:09:46.803 09:58:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:46.803 09:58:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:09:46.803 09:58:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:46.803 09:58:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:09:46.803 09:58:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:46.803 09:58:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:46.803 rmmod nvme_tcp 00:09:46.803 rmmod nvme_fabrics 00:09:46.803 rmmod nvme_keyring 00:09:46.803 09:58:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:46.803 09:58:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:09:46.803 09:58:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:09:46.803 09:58:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 352831 ']' 00:09:46.803 09:58:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 352831 00:09:46.803 09:58:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@950 -- # '[' -z 352831 ']' 00:09:46.803 09:58:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # kill -0 352831 00:09:46.803 09:58:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # uname 00:09:46.803 09:58:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:46.803 09:58:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 352831 00:09:46.803 09:58:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:46.803 09:58:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:46.804 09:58:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@968 -- # echo 'killing process with pid 352831' 00:09:46.804 killing process with pid 352831 00:09:46.804 09:58:31 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@969 -- # kill 352831 00:09:46.804 09:58:31 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@974 -- # wait 352831 00:09:47.061 09:58:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:47.061 09:58:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:47.061 09:58:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:47.061 09:58:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:47.061 09:58:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:47.061 09:58:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:47.061 09:58:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:47.061 09:58:32 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:49.587 09:58:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:49.587 00:09:49.587 real 0m8.146s 00:09:49.587 user 0m18.618s 00:09:49.587 sys 0m4.092s 00:09:49.587 09:58:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:49.587 09:58:34 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:49.587 ************************************ 00:09:49.587 END TEST nvmf_bdev_io_wait 00:09:49.587 ************************************ 00:09:49.587 09:58:34 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:09:49.587 09:58:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:49.587 09:58:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:49.587 09:58:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:49.587 ************************************ 00:09:49.587 START TEST nvmf_queue_depth 00:09:49.587 ************************************ 00:09:49.587 09:58:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:09:49.587 * Looking for test storage... 
00:09:49.587 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:49.587 09:58:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:49.587 09:58:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:09:49.587 09:58:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:49.587 09:58:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:49.587 09:58:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:49.587 09:58:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:49.587 09:58:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:49.587 09:58:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:49.587 09:58:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:49.587 09:58:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:49.587 09:58:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:49.587 09:58:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:49.587 09:58:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:09:49.587 09:58:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:09:49.587 09:58:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:49.587 09:58:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:49.587 09:58:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:49.587 09:58:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:49.587 09:58:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:49.587 09:58:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:49.587 09:58:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:49.587 09:58:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:49.587 09:58:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:49.587 09:58:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:49.587 09:58:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:49.587 09:58:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:09:49.587 09:58:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:49.587 09:58:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:09:49.587 09:58:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:49.587 09:58:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:49.587 09:58:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:49.587 09:58:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:49.587 09:58:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:49.587 09:58:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:49.587 09:58:34 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:49.587 09:58:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:49.587 09:58:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:09:49.587 09:58:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:09:49.588 09:58:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:09:49.588 09:58:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:09:49.588 09:58:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:49.588 09:58:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:49.588 09:58:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:49.588 09:58:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:49.588 09:58:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:49.588 09:58:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:49.588 09:58:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:49.588 09:58:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:49.588 09:58:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:49.588 09:58:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:49.588 09:58:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@285 -- # xtrace_disable 00:09:49.588 09:58:34 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:52.119 09:58:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:52.119 09:58:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # pci_devs=() 00:09:52.119 09:58:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:52.119 09:58:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:52.119 09:58:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:52.119 09:58:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:52.119 09:58:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:52.119 09:58:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@295 -- # net_devs=() 00:09:52.119 09:58:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:52.119 09:58:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@296 -- # e810=() 00:09:52.119 09:58:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@296 -- # local -ga e810 00:09:52.119 09:58:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # x722=() 00:09:52.119 09:58:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # local -ga x722 00:09:52.119 09:58:36 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # mlx=() 00:09:52.119 09:58:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # local -ga mlx 00:09:52.119 09:58:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:52.119 09:58:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:52.119 09:58:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:52.119 09:58:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:52.119 09:58:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:52.119 09:58:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:52.119 09:58:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:52.119 09:58:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:52.119 09:58:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:52.119 09:58:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:52.119 09:58:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:52.119 09:58:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:52.119 09:58:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:52.119 09:58:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:52.119 09:58:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:52.119 09:58:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:52.119 09:58:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:52.119 09:58:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:52.119 09:58:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:09:52.119 Found 0000:84:00.0 (0x8086 - 0x159b) 00:09:52.119 09:58:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:52.119 09:58:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:52.119 09:58:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:52.119 09:58:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:52.119 09:58:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:52.119 09:58:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:52.119 09:58:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:09:52.119 Found 0000:84:00.1 (0x8086 - 0x159b) 00:09:52.119 09:58:36 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:52.119 09:58:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:52.119 09:58:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:52.119 09:58:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:52.119 09:58:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:52.119 09:58:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:52.119 09:58:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:52.119 09:58:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:52.119 09:58:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:52.119 09:58:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:52.119 09:58:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:52.119 09:58:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:52.119 09:58:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:52.119 09:58:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:52.119 09:58:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:52.119 09:58:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:09:52.119 Found net devices under 0000:84:00.0: cvl_0_0 00:09:52.119 09:58:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:52.119 09:58:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:52.119 09:58:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:52.119 09:58:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:52.119 09:58:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:52.119 09:58:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:52.119 09:58:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:52.119 09:58:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:52.119 09:58:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:09:52.119 Found net devices under 0000:84:00.1: cvl_0_1 00:09:52.119 09:58:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:52.119 09:58:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:52.119 09:58:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # is_hw=yes 00:09:52.119 09:58:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:52.119 
09:58:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:52.119 09:58:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:52.119 09:58:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:52.119 09:58:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:52.119 09:58:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:52.119 09:58:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:52.119 09:58:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:52.119 09:58:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:52.119 09:58:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:52.119 09:58:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:52.119 09:58:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:52.119 09:58:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:52.119 09:58:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:52.119 09:58:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:52.119 09:58:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:52.120 09:58:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:52.120 09:58:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:52.120 09:58:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:52.120 09:58:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:52.120 09:58:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:52.120 09:58:36 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:52.120 09:58:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:52.120 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:52.120 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.119 ms 00:09:52.120 00:09:52.120 --- 10.0.0.2 ping statistics --- 00:09:52.120 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:52.120 rtt min/avg/max/mdev = 0.119/0.119/0.119/0.000 ms 00:09:52.120 09:58:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:52.120 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:52.120 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.113 ms 00:09:52.120 00:09:52.120 --- 10.0.0.1 ping statistics --- 00:09:52.120 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:52.120 rtt min/avg/max/mdev = 0.113/0.113/0.113/0.000 ms 00:09:52.120 09:58:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:52.120 09:58:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # return 0 00:09:52.120 09:58:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:52.120 09:58:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:52.120 09:58:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:52.120 09:58:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:52.120 09:58:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:52.120 09:58:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:52.120 09:58:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:52.120 09:58:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:09:52.120 09:58:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:52.120 09:58:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:52.120 09:58:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:52.120 09:58:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=355228 00:09:52.120 09:58:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:52.120 09:58:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 355228 00:09:52.120 09:58:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 355228 ']' 00:09:52.120 09:58:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:52.120 09:58:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:52.120 09:58:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:52.120 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:52.120 09:58:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:52.120 09:58:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:52.120 [2024-07-25 09:58:37.148096] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:09:52.120 [2024-07-25 09:58:37.148275] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:52.120 EAL: No free 2048 kB hugepages reported on node 1 00:09:52.120 [2024-07-25 09:58:37.257021] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:52.378 [2024-07-25 09:58:37.381484] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:52.378 [2024-07-25 09:58:37.381546] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:52.378 [2024-07-25 09:58:37.381563] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:52.378 [2024-07-25 09:58:37.381577] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:52.378 [2024-07-25 09:58:37.381589] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:52.378 [2024-07-25 09:58:37.381622] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:52.378 09:58:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:52.378 09:58:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:09:52.378 09:58:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:52.378 09:58:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:52.378 09:58:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:52.378 09:58:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:52.378 09:58:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:52.378 09:58:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.378 09:58:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:52.378 [2024-07-25 09:58:37.531343] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:52.378 09:58:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:52.378 09:58:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:52.378 09:58:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.378 09:58:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:52.636 Malloc0 00:09:52.636 09:58:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:52.636 09:58:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:52.636 09:58:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.636 09:58:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:52.636 09:58:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:52.636 09:58:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:52.636 09:58:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.636 09:58:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:52.636 09:58:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:52.636 09:58:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:52.636 09:58:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:52.636 09:58:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:52.636 [2024-07-25 09:58:37.601007] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:52.636 09:58:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:52.636 09:58:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=355259 00:09:52.636 09:58:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:09:52.636 09:58:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:52.636 09:58:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 355259 /var/tmp/bdevperf.sock 00:09:52.636 09:58:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 355259 ']' 00:09:52.636 09:58:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:52.636 09:58:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:52.636 09:58:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:52.636 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:52.636 09:58:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:52.636 09:58:37 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:52.636 [2024-07-25 09:58:37.650982] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:09:52.636 [2024-07-25 09:58:37.651060] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid355259 ] 00:09:52.636 EAL: No free 2048 kB hugepages reported on node 1 00:09:52.636 [2024-07-25 09:58:37.719523] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:52.895 [2024-07-25 09:58:37.844833] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:53.153 09:58:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:53.153 09:58:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:09:53.153 09:58:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:09:53.153 09:58:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:53.153 09:58:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:53.412 NVMe0n1 00:09:53.412 09:58:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:53.412 09:58:38 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:53.412 Running I/O for 10 seconds... 00:10:05.653 00:10:05.653 Latency(us) 00:10:05.653 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:05.653 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:10:05.653 Verification LBA range: start 0x0 length 0x4000 00:10:05.653 NVMe0n1 : 10.08 8226.71 32.14 0.00 0.00 123932.48 16117.00 76507.21 00:10:05.653 =================================================================================================================== 00:10:05.653 Total : 8226.71 32.14 0.00 0.00 123932.48 16117.00 76507.21 00:10:05.653 0 00:10:05.654 09:58:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 355259 00:10:05.654 09:58:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 355259 ']' 00:10:05.654 09:58:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 355259 00:10:05.654 09:58:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:10:05.654 09:58:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:05.654 09:58:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 355259 00:10:05.654 09:58:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:05.654 09:58:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:05.654 09:58:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 355259' 00:10:05.654 killing process with pid 355259 00:10:05.654 09:58:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 355259 00:10:05.654 Received shutdown signal, 
test time was about 10.000000 seconds 00:10:05.654 00:10:05.654 Latency(us) 00:10:05.654 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:05.654 =================================================================================================================== 00:10:05.654 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:10:05.654 09:58:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 355259 00:10:05.654 09:58:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:10:05.654 09:58:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:10:05.654 09:58:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:05.654 09:58:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:10:05.654 09:58:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:05.654 09:58:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:10:05.654 09:58:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:05.654 09:58:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:05.654 rmmod nvme_tcp 00:10:05.654 rmmod nvme_fabrics 00:10:05.654 rmmod nvme_keyring 00:10:05.654 09:58:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:05.654 09:58:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:10:05.654 09:58:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:10:05.654 09:58:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 355228 ']' 00:10:05.654 09:58:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 355228 00:10:05.654 09:58:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 355228 ']' 00:10:05.654 09:58:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 355228 00:10:05.654 09:58:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:10:05.654 09:58:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:05.654 09:58:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 355228 00:10:05.654 09:58:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:10:05.654 09:58:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:10:05.654 09:58:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 355228' 00:10:05.654 killing process with pid 355228 00:10:05.654 09:58:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 355228 00:10:05.654 09:58:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 355228 00:10:05.654 09:58:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:05.654 09:58:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:05.654 09:58:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 
-- # nvmf_tcp_fini 00:10:05.654 09:58:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:05.654 09:58:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:05.654 09:58:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:05.654 09:58:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:05.654 09:58:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:06.590 09:58:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:06.590 00:10:06.590 real 0m17.220s 00:10:06.590 user 0m23.638s 00:10:06.590 sys 0m3.978s 00:10:06.590 09:58:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:06.590 09:58:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:06.590 ************************************ 00:10:06.590 END TEST nvmf_queue_depth 00:10:06.590 ************************************ 00:10:06.590 09:58:51 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:10:06.590 09:58:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:06.590 09:58:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:06.590 09:58:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:06.590 ************************************ 00:10:06.590 START TEST nvmf_target_multipath 00:10:06.590 ************************************ 00:10:06.590 09:58:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:10:06.590 * Looking for test storage... 
00:10:06.590 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:06.590 09:58:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:06.590 09:58:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:10:06.590 09:58:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:06.590 09:58:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:06.590 09:58:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:06.590 09:58:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:06.590 09:58:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:06.590 09:58:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:06.590 09:58:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:06.590 09:58:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:06.590 09:58:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:06.590 09:58:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:06.590 09:58:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:10:06.590 09:58:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:10:06.590 09:58:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:06.590 09:58:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:06.590 09:58:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:06.591 09:58:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:06.591 09:58:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:06.591 09:58:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:06.591 09:58:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:06.591 09:58:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:06.591 09:58:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:06.591 09:58:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:06.591 09:58:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:06.591 09:58:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:10:06.591 09:58:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:06.591 09:58:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:10:06.591 09:58:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:06.591 09:58:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:06.591 09:58:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:06.591 09:58:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:06.591 09:58:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:06.591 09:58:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:06.591 09:58:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:06.591 09:58:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:06.591 09:58:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:06.591 09:58:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:06.591 09:58:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:10:06.591 09:58:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:06.591 09:58:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:10:06.591 09:58:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:06.591 09:58:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:06.591 09:58:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:06.591 09:58:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:06.591 09:58:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:06.591 09:58:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:06.591 09:58:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:06.591 09:58:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:06.591 09:58:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:06.591 09:58:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:06.591 09:58:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@285 -- # xtrace_disable 00:10:06.591 09:58:51 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:10:09.121 09:58:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:09.121 09:58:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # pci_devs=() 00:10:09.121 09:58:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:09.121 09:58:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:09.121 09:58:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:09.121 09:58:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:09.121 09:58:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:09.121 09:58:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@295 -- # net_devs=() 00:10:09.121 09:58:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:09.121 09:58:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@296 -- # e810=() 
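For reference, everything from here through the ping checks further down is test/nvmf/common.sh doing nvmftestinit for a phy run: it classifies the PCI NICs by device ID (0x8086:0x159b lands in the e810 array, hence the ice-driver hits on 0000:84:00.0 and 0000:84:00.1), then splits the two ports into an initiator side and a target side using a network namespace. A condensed sketch of that plumbing, with the interface and namespace names taken from this run (cvl_0_0, cvl_0_1 and cvl_0_0_ns_spdk are specific to this machine, not a fixed convention):

    ip -4 addr flush cvl_0_0; ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                # target port moves into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                      # initiator port stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT    # open the NVMe/TCP port
    ping -c 1 10.0.0.2                                       # initiator -> target sanity check
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1         # and back

After this, NVMF_APP is prefixed with NVMF_TARGET_NS_CMD (ip netns exec cvl_0_0_ns_spdk), so every target process in the rest of the log runs inside that namespace.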
00:10:09.121 09:58:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@296 -- # local -ga e810 00:10:09.121 09:58:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # x722=() 00:10:09.121 09:58:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # local -ga x722 00:10:09.121 09:58:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # mlx=() 00:10:09.121 09:58:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # local -ga mlx 00:10:09.121 09:58:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:09.121 09:58:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:09.121 09:58:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:09.121 09:58:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:09.121 09:58:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:09.121 09:58:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:09.121 09:58:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:09.121 09:58:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:09.121 09:58:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:09.121 09:58:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:09.121 09:58:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:09.121 09:58:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:09.121 09:58:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:09.121 09:58:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:09.121 09:58:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:09.121 09:58:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:09.121 09:58:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:09.121 09:58:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:09.121 09:58:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:10:09.121 Found 0000:84:00.0 (0x8086 - 0x159b) 00:10:09.121 09:58:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:09.121 09:58:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:09.121 09:58:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:09.121 09:58:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:09.121 09:58:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:09.121 09:58:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:09.121 09:58:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:10:09.121 Found 0000:84:00.1 (0x8086 - 0x159b) 00:10:09.121 09:58:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:09.122 09:58:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:09.122 09:58:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:09.122 09:58:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:09.122 09:58:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:09.122 09:58:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:09.122 09:58:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:09.122 09:58:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:09.122 09:58:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:09.122 09:58:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:09.122 09:58:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:09.122 09:58:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:09.122 09:58:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:09.122 09:58:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:09.122 09:58:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:09.122 09:58:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:10:09.122 Found net devices under 0000:84:00.0: cvl_0_0 00:10:09.122 09:58:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:09.122 09:58:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:09.122 09:58:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:09.122 09:58:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:09.122 09:58:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:09.122 09:58:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:09.122 09:58:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:09.122 09:58:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:09.122 09:58:54 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:10:09.122 Found net devices under 0000:84:00.1: cvl_0_1 00:10:09.122 09:58:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:09.122 09:58:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:09.122 09:58:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # is_hw=yes 00:10:09.122 09:58:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:09.122 09:58:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:09.122 09:58:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:09.122 09:58:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:09.122 09:58:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:09.122 09:58:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:09.122 09:58:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:09.122 09:58:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:09.122 09:58:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:09.122 09:58:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:09.122 09:58:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:09.122 09:58:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:09.122 09:58:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:09.122 09:58:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:09.122 09:58:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:09.122 09:58:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:09.122 09:58:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:09.122 09:58:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:09.122 09:58:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:09.122 09:58:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:09.122 09:58:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:09.122 09:58:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:09.122 09:58:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:09.381 PING 
10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:09.381 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.231 ms 00:10:09.381 00:10:09.381 --- 10.0.0.2 ping statistics --- 00:10:09.381 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:09.381 rtt min/avg/max/mdev = 0.231/0.231/0.231/0.000 ms 00:10:09.381 09:58:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:09.381 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:09.381 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.186 ms 00:10:09.381 00:10:09.381 --- 10.0.0.1 ping statistics --- 00:10:09.381 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:09.381 rtt min/avg/max/mdev = 0.186/0.186/0.186/0.000 ms 00:10:09.381 09:58:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:09.381 09:58:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # return 0 00:10:09.381 09:58:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:09.381 09:58:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:09.381 09:58:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:09.381 09:58:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:09.381 09:58:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:09.381 09:58:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:09.381 09:58:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:09.381 09:58:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:10:09.381 09:58:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:10:09.381 only one NIC for nvmf test 00:10:09.381 09:58:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:10:09.381 09:58:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:09.381 09:58:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:10:09.381 09:58:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:09.381 09:58:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:10:09.381 09:58:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:09.381 09:58:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:09.381 rmmod nvme_tcp 00:10:09.381 rmmod nvme_fabrics 00:10:09.381 rmmod nvme_keyring 00:10:09.381 09:58:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:09.381 09:58:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:10:09.381 09:58:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:10:09.381 09:58:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:10:09.381 09:58:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:09.381 09:58:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:09.381 09:58:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:09.381 09:58:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:09.381 09:58:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:09.381 09:58:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:09.381 09:58:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:09.381 09:58:54 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:11.285 09:58:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:11.285 09:58:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:10:11.285 09:58:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:10:11.285 09:58:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:11.285 09:58:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:10:11.285 09:58:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:11.285 09:58:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:10:11.285 09:58:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:11.285 09:58:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:11.285 09:58:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:11.285 09:58:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:10:11.285 09:58:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:10:11.285 09:58:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:10:11.285 09:58:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:11.285 09:58:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:11.285 09:58:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:11.285 09:58:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:11.285 09:58:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:11.285 09:58:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:11.285 09:58:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:11.285 09:58:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:11.285 09:58:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:11.285 00:10:11.285 real 0m4.919s 
00:10:11.285 user 0m0.892s 00:10:11.285 sys 0m2.019s 00:10:11.285 09:58:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:11.285 09:58:56 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:10:11.285 ************************************ 00:10:11.285 END TEST nvmf_target_multipath 00:10:11.285 ************************************ 00:10:11.544 09:58:56 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:10:11.544 09:58:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:11.544 09:58:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:11.544 09:58:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:11.544 ************************************ 00:10:11.544 START TEST nvmf_zcopy 00:10:11.544 ************************************ 00:10:11.544 09:58:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:10:11.544 * Looking for test storage... 00:10:11.544 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:11.544 09:58:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:11.544 09:58:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:10:11.544 09:58:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:11.544 09:58:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:11.544 09:58:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:11.544 09:58:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:11.544 09:58:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:11.544 09:58:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:11.544 09:58:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:11.544 09:58:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:11.544 09:58:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:11.544 09:58:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:11.544 09:58:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:10:11.544 09:58:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:10:11.544 09:58:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:11.544 09:58:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:11.544 09:58:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:11.544 09:58:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:11.544 09:58:56 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:11.544 09:58:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:11.544 09:58:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:11.544 09:58:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:11.545 09:58:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:11.545 09:58:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:11.545 09:58:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:11.545 09:58:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:10:11.545 09:58:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:11.545 09:58:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:10:11.545 09:58:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:11.545 09:58:56 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:11.545 09:58:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:11.545 09:58:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:11.545 09:58:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:11.545 09:58:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:11.545 09:58:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:11.545 09:58:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:11.545 09:58:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:10:11.545 09:58:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:11.545 09:58:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:11.545 09:58:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:11.545 09:58:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:11.545 09:58:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:11.545 09:58:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:11.545 09:58:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:11.545 09:58:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:11.545 09:58:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:11.545 09:58:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:11.545 09:58:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@285 -- # xtrace_disable 00:10:11.545 09:58:56 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:14.077 09:58:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:14.077 09:58:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # pci_devs=() 00:10:14.077 09:58:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:14.077 09:58:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:14.077 09:58:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:14.077 09:58:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:14.077 09:58:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:14.077 09:58:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@295 -- # net_devs=() 00:10:14.077 09:58:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:14.077 09:58:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@296 -- # e810=() 00:10:14.077 09:58:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@296 -- # local -ga e810 00:10:14.077 09:58:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # x722=() 00:10:14.077 09:58:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # local -ga x722 00:10:14.077 09:58:59 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # mlx=() 00:10:14.077 09:58:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # local -ga mlx 00:10:14.077 09:58:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:14.077 09:58:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:14.077 09:58:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:14.077 09:58:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:14.077 09:58:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:14.077 09:58:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:14.077 09:58:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:14.077 09:58:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:14.077 09:58:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:14.077 09:58:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:14.077 09:58:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:14.077 09:58:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:14.077 09:58:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:14.077 09:58:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:14.077 09:58:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:14.077 09:58:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:14.077 09:58:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:14.077 09:58:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:14.077 09:58:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:10:14.077 Found 0000:84:00.0 (0x8086 - 0x159b) 00:10:14.077 09:58:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:14.077 09:58:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:14.077 09:58:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:14.077 09:58:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:14.077 09:58:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:14.077 09:58:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:14.077 09:58:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:10:14.077 Found 0000:84:00.1 (0x8086 - 0x159b) 00:10:14.077 09:58:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:14.077 09:58:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 
-- # [[ ice == unbound ]] 00:10:14.077 09:58:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:14.077 09:58:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:14.077 09:58:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:14.077 09:58:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:14.077 09:58:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:14.077 09:58:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:14.077 09:58:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:14.077 09:58:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:14.077 09:58:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:14.077 09:58:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:14.077 09:58:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:14.077 09:58:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:14.077 09:58:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:14.077 09:58:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:10:14.077 Found net devices under 0000:84:00.0: cvl_0_0 00:10:14.077 09:58:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:14.077 09:58:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:14.077 09:58:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:14.077 09:58:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:14.077 09:58:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:14.077 09:58:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:14.077 09:58:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:14.077 09:58:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:14.077 09:58:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:10:14.077 Found net devices under 0000:84:00.1: cvl_0_1 00:10:14.077 09:58:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:14.077 09:58:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:14.077 09:58:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # is_hw=yes 00:10:14.078 09:58:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:14.078 09:58:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:14.078 09:58:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:14.078 09:58:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:14.078 09:58:59 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:14.078 09:58:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:14.078 09:58:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:14.078 09:58:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:14.078 09:58:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:14.078 09:58:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:14.078 09:58:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:14.078 09:58:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:14.078 09:58:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:14.078 09:58:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:14.078 09:58:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:14.078 09:58:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:14.078 09:58:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:14.078 09:58:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:14.078 09:58:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:14.078 09:58:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:14.078 09:58:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:14.078 09:58:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:14.078 09:58:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:14.078 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:14.078 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.123 ms 00:10:14.078 00:10:14.078 --- 10.0.0.2 ping statistics --- 00:10:14.078 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:14.078 rtt min/avg/max/mdev = 0.123/0.123/0.123/0.000 ms 00:10:14.078 09:58:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:14.078 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:14.078 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.152 ms 00:10:14.078 00:10:14.078 --- 10.0.0.1 ping statistics --- 00:10:14.078 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:14.078 rtt min/avg/max/mdev = 0.152/0.152/0.152/0.000 ms 00:10:14.078 09:58:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:14.078 09:58:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # return 0 00:10:14.078 09:58:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:14.078 09:58:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:14.078 09:58:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:14.078 09:58:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:14.078 09:58:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:14.078 09:58:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:14.078 09:58:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:14.336 09:58:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:10:14.336 09:58:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:14.336 09:58:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:14.336 09:58:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:14.336 09:58:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=360602 00:10:14.336 09:58:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:10:14.336 09:58:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 360602 00:10:14.336 09:58:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@831 -- # '[' -z 360602 ']' 00:10:14.336 09:58:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:14.336 09:58:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:14.336 09:58:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:14.336 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:14.336 09:58:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:14.336 09:58:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:14.336 [2024-07-25 09:58:59.330182] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
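A note on what follows: nvmfappstart has just launched build/bin/nvmf_tgt inside the target namespace (ip netns exec cvl_0_0_ns_spdk ... -m 0x2, a single reactor pinned by the core mask), and zcopy.sh now configures it purely over the RPC socket. Condensed from the rpc_cmd trace below (rpc_cmd is the harness wrapper around scripts/rpc.py; flag semantics are as documented there):

    rpc.py nvmf_create_transport -t tcp -o -c 0 --zcopy      # TCP transport with the zero-copy path under test enabled
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
                                                             # -a any host, -s serial number, -m max 10 namespaces
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    rpc.py bdev_malloc_create 32 4096 -b malloc0             # 32 MiB RAM-backed bdev, 4 KiB blocks
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1   # exposed as namespace 1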
00:10:14.336 [2024-07-25 09:58:59.330288] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:10:14.336 EAL: No free 2048 kB hugepages reported on node 1
00:10:14.336 [2024-07-25 09:58:59.413637] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:14.594 [2024-07-25 09:58:59.535599] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:10:14.594 [2024-07-25 09:58:59.535659] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:10:14.594 [2024-07-25 09:58:59.535676] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:10:14.594 [2024-07-25 09:58:59.535689] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:10:14.594 [2024-07-25 09:58:59.535702] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:10:14.594 [2024-07-25 09:58:59.535742] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:10:14.594 09:58:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:10:14.594 09:58:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # return 0
00:10:14.594 09:58:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:10:14.594 09:58:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@730 -- # xtrace_disable
00:10:14.594 09:58:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:10:14.594 09:58:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:10:14.594 09:58:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']'
00:10:14.594 09:58:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy
00:10:14.594 09:58:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:14.594 09:58:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:10:14.594 [2024-07-25 09:58:59.685669] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:10:14.594 09:58:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:14.594 09:58:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:10:14.594 09:58:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:14.594 09:58:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:10:14.594 09:58:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:14.594 09:58:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:10:14.594 09:58:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:14.594 09:58:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:10:14.594 [2024-07-25 09:58:59.701901] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:10:14.594 09:58:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:14.594 09:58:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:10:14.594 09:58:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:14.595 09:58:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:10:14.595 09:58:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:14.595 09:58:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0
00:10:14.595 09:58:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:14.595 09:58:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:10:14.595 malloc0
00:10:14.595 09:58:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:14.595 09:58:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
00:10:14.595 09:58:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:14.595 09:58:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:10:14.595 09:58:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:14.595 09:58:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192
00:10:14.595 09:58:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json
00:10:14.595 09:58:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # config=()
00:10:14.595 09:58:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config
00:10:14.853 09:58:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}"
00:10:14.853 09:58:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF
00:10:14.853 {
00:10:14.853 "params": {
00:10:14.853 "name": "Nvme$subsystem",
00:10:14.853 "trtype": "$TEST_TRANSPORT",
00:10:14.853 "traddr": "$NVMF_FIRST_TARGET_IP",
00:10:14.853 "adrfam": "ipv4",
00:10:14.853 "trsvcid": "$NVMF_PORT",
00:10:14.853 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:10:14.853 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:10:14.853 "hdgst": ${hdgst:-false},
00:10:14.853 "ddgst": ${ddgst:-false}
00:10:14.853 },
00:10:14.853 "method": "bdev_nvme_attach_controller"
00:10:14.853 }
00:10:14.853 EOF
00:10:14.853 )")
00:10:14.853 09:58:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # cat
00:10:14.853 09:58:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@556 -- # jq .
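
rpc_cmd in the trace above is a thin wrapper around scripts/rpc.py talking to that socket, so the whole target-side setup of this run can be replayed by hand as follows (a sketch; flags copied verbatim from the trace, default RPC socket assumed):

    # TCP transport: -o disables the C2H success optimization, -c 0 turns off
    # in-capsule data, --zcopy enables the zero-copy path this test exercises.
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -c 0 --zcopy
    # Subsystem: -a allows any host, -s sets the serial, -m caps namespaces at 10.
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    ./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    # Backing device: a 32 MiB malloc bdev with 4096-byte blocks, exported as NSID 1.
    ./scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
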
00:10:14.853 09:58:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=,
00:10:14.853 09:58:59 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{
00:10:14.853 "params": {
00:10:14.853 "name": "Nvme1",
00:10:14.853 "trtype": "tcp",
00:10:14.853 "traddr": "10.0.0.2",
00:10:14.853 "adrfam": "ipv4",
00:10:14.853 "trsvcid": "4420",
00:10:14.853 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:10:14.853 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:10:14.853 "hdgst": false,
00:10:14.853 "ddgst": false
00:10:14.853 },
00:10:14.853 "method": "bdev_nvme_attach_controller"
00:10:14.853 }'
00:10:14.853 [2024-07-25 09:58:59.804033] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization...
00:10:14.853 [2024-07-25 09:58:59.804111] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid360744 ]
00:10:14.853 EAL: No free 2048 kB hugepages reported on node 1
00:10:14.853 [2024-07-25 09:58:59.872159] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:10:14.853 [2024-07-25 09:58:59.995308] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:10:15.419 Running I/O for 10 seconds...
00:10:25.385 
00:10:25.385 Latency(us)
00:10:25.385 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:10:25.385 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192)
00:10:25.385 Verification LBA range: start 0x0 length 0x1000
00:10:25.385 Nvme1n1 : 10.02 5554.54 43.39 0.00 0.00 22966.17 3835.07 35535.08
00:10:25.385 ===================================================================================================================
00:10:25.385 Total : 5554.54 43.39 0.00 0.00 22966.17 3835.07 35535.08
00:10:25.644 09:59:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=361946
00:10:25.644 09:59:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable
00:10:25.644 09:59:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:10:25.644 09:59:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192
00:10:25.644 09:59:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json
00:10:25.644 09:59:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # config=()
00:10:25.644 09:59:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config
00:10:25.644 09:59:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}"
00:10:25.644 09:59:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF
00:10:25.644 {
00:10:25.644 "params": {
00:10:25.644 "name": "Nvme$subsystem",
00:10:25.644 "trtype": "$TEST_TRANSPORT",
00:10:25.644 "traddr": "$NVMF_FIRST_TARGET_IP",
00:10:25.644 "adrfam": "ipv4",
00:10:25.644 "trsvcid": "$NVMF_PORT",
00:10:25.644 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:10:25.644 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:10:25.644 "hdgst": ${hdgst:-false},
00:10:25.644 "ddgst": ${ddgst:-false}
00:10:25.644 },
00:10:25.644 "method": "bdev_nvme_attach_controller"
00:10:25.644 }
00:10:25.644 EOF
00:10:25.644 )")
00:10:25.644 09:59:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # cat
00:10:25.644 [2024-07-25 09:59:10.638629] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:25.644 [2024-07-25 09:59:10.638674] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:25.644 09:59:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@556 -- # jq .
00:10:25.644 09:59:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=,
00:10:25.644 09:59:10 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{
00:10:25.644 "params": {
00:10:25.644 "name": "Nvme1",
00:10:25.644 "trtype": "tcp",
00:10:25.644 "traddr": "10.0.0.2",
00:10:25.644 "adrfam": "ipv4",
00:10:25.644 "trsvcid": "4420",
00:10:25.644 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:10:25.644 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:10:25.644 "hdgst": false,
00:10:25.644 "ddgst": false
00:10:25.644 },
00:10:25.644 "method": "bdev_nvme_attach_controller"
00:10:25.644 }'
00:10:25.644 [2024-07-25 09:59:10.646598] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:25.644 [2024-07-25 09:59:10.646627] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:25.644 [2024-07-25 09:59:10.654619] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:25.644 [2024-07-25 09:59:10.654645] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:25.644 [2024-07-25 09:59:10.662644] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:25.644 [2024-07-25 09:59:10.662671] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:25.644 [2024-07-25 09:59:10.670663] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:25.644 [2024-07-25 09:59:10.670699] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:25.644 [2024-07-25 09:59:10.678685] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:25.644 [2024-07-25 09:59:10.678716] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:25.644 [2024-07-25 09:59:10.679928] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization...
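
bdevperf takes no connection flags; both runs read the JSON that gen_nvmf_target_json prints above, streamed in over /dev/fd/62 and /dev/fd/63 via process substitution. The expanded file wraps those params in a bdev-subsystem section, roughly as follows (a sketch of the generated config, not a verbatim copy):

    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme1",
                "trtype": "tcp",
                "traddr": "10.0.0.2",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode1",
                "hostnqn": "nqn.2016-06.io.spdk:host1",
                "hdgst": false,
                "ddgst": false
              }
            }
          ]
        }
      ]
    }

Saved to a file, the two jobs traced above are equivalent to:

    ./build/examples/bdevperf --json nvmf.json -t 10 -q 128 -w verify -o 8192       # first run: 10 s verify pass
    ./build/examples/bdevperf --json nvmf.json -t 5 -q 128 -w randrw -M 50 -o 8192  # second run: 5 s 50/50 random read/write

While the second job (perfpid=361946) runs, the script keeps re-issuing nvmf_subsystem_add_ns for NSID 1, which is still attached, so every attempt fails: subsystem.c:2058 rejects the duplicate NSID and nvmf_rpc.c:1553 reports the failed RPC after resuming the paused subsystem. That is the pair of errors repeated for the remainder of this log. In outline (shape assumed here, not the verbatim script):

    # Re-issue the duplicate add-namespace RPC for as long as bdevperf runs;
    # each call is expected to fail cleanly with "Requested NSID 1 already in use"
    # without disturbing the in-flight zero-copy I/O.
    while kill -0 "$perfpid" 2> /dev/null; do
        ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 || true
    done
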
00:10:25.644 [2024-07-25 09:59:10.680000] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid361946 ] 00:10:25.644 [2024-07-25 09:59:10.686708] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.644 [2024-07-25 09:59:10.686734] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.644 [2024-07-25 09:59:10.694732] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.644 [2024-07-25 09:59:10.694758] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.644 [2024-07-25 09:59:10.702769] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.644 [2024-07-25 09:59:10.702795] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.644 [2024-07-25 09:59:10.710775] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.644 [2024-07-25 09:59:10.710801] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.644 EAL: No free 2048 kB hugepages reported on node 1 00:10:25.644 [2024-07-25 09:59:10.718796] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.644 [2024-07-25 09:59:10.718821] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.644 [2024-07-25 09:59:10.726818] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.644 [2024-07-25 09:59:10.726843] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.644 [2024-07-25 09:59:10.734839] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.644 [2024-07-25 09:59:10.734864] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.644 [2024-07-25 09:59:10.742862] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.644 [2024-07-25 09:59:10.742887] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.644 [2024-07-25 09:59:10.747876] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:25.644 [2024-07-25 09:59:10.750886] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.644 [2024-07-25 09:59:10.750910] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.644 [2024-07-25 09:59:10.758935] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.644 [2024-07-25 09:59:10.758971] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.644 [2024-07-25 09:59:10.766939] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.644 [2024-07-25 09:59:10.766978] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.644 [2024-07-25 09:59:10.774950] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.644 [2024-07-25 09:59:10.774976] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.644 [2024-07-25 09:59:10.782972] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.644 [2024-07-25 09:59:10.782997] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.644 [2024-07-25 09:59:10.790993] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.644 [2024-07-25 09:59:10.791019] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.644 [2024-07-25 09:59:10.799017] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.644 [2024-07-25 09:59:10.799042] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.644 [2024-07-25 09:59:10.807041] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.644 [2024-07-25 09:59:10.807066] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.903 [2024-07-25 09:59:10.815085] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.903 [2024-07-25 09:59:10.815117] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.903 [2024-07-25 09:59:10.823114] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.903 [2024-07-25 09:59:10.823150] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.903 [2024-07-25 09:59:10.831109] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.903 [2024-07-25 09:59:10.831134] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.903 [2024-07-25 09:59:10.839130] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.903 [2024-07-25 09:59:10.839155] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.903 [2024-07-25 09:59:10.847152] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.903 [2024-07-25 09:59:10.847178] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.903 [2024-07-25 09:59:10.855175] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.903 [2024-07-25 09:59:10.855200] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.903 [2024-07-25 09:59:10.863205] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.903 [2024-07-25 09:59:10.863233] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.903 [2024-07-25 09:59:10.871220] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.903 [2024-07-25 09:59:10.871245] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.903 [2024-07-25 09:59:10.872277] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:25.903 [2024-07-25 09:59:10.879242] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.903 [2024-07-25 09:59:10.879267] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.903 [2024-07-25 09:59:10.887274] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.903 [2024-07-25 09:59:10.887302] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.903 [2024-07-25 09:59:10.895308] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.903 [2024-07-25 09:59:10.895341] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: 
Unable to add namespace 00:10:25.903 [2024-07-25 09:59:10.903337] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.903 [2024-07-25 09:59:10.903374] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.903 [2024-07-25 09:59:10.911353] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.903 [2024-07-25 09:59:10.911390] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.903 [2024-07-25 09:59:10.919375] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.903 [2024-07-25 09:59:10.919412] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.903 [2024-07-25 09:59:10.927400] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.903 [2024-07-25 09:59:10.927445] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.903 [2024-07-25 09:59:10.935421] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.903 [2024-07-25 09:59:10.935464] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.903 [2024-07-25 09:59:10.943423] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.903 [2024-07-25 09:59:10.943456] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.903 [2024-07-25 09:59:10.951468] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.903 [2024-07-25 09:59:10.951503] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.903 [2024-07-25 09:59:10.959492] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.903 [2024-07-25 09:59:10.959528] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.903 [2024-07-25 09:59:10.967516] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.903 [2024-07-25 09:59:10.967554] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.903 [2024-07-25 09:59:10.975513] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.903 [2024-07-25 09:59:10.975538] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.903 [2024-07-25 09:59:10.983531] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.903 [2024-07-25 09:59:10.983556] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.903 [2024-07-25 09:59:10.991552] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.903 [2024-07-25 09:59:10.991577] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.903 [2024-07-25 09:59:10.999587] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.903 [2024-07-25 09:59:10.999618] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.903 [2024-07-25 09:59:11.007650] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.903 [2024-07-25 09:59:11.007679] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.903 [2024-07-25 09:59:11.015656] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in 
use 00:10:25.903 [2024-07-25 09:59:11.015685] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.903 [2024-07-25 09:59:11.023681] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.903 [2024-07-25 09:59:11.023710] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.903 [2024-07-25 09:59:11.031774] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.903 [2024-07-25 09:59:11.031811] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.903 [2024-07-25 09:59:11.039795] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.903 [2024-07-25 09:59:11.039823] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.903 [2024-07-25 09:59:11.047814] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.903 [2024-07-25 09:59:11.047842] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.903 [2024-07-25 09:59:11.055836] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.903 [2024-07-25 09:59:11.055862] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.903 [2024-07-25 09:59:11.063864] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:25.903 [2024-07-25 09:59:11.063891] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.163 [2024-07-25 09:59:11.071885] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.163 [2024-07-25 09:59:11.071911] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.163 [2024-07-25 09:59:11.079915] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.163 [2024-07-25 09:59:11.079943] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.163 [2024-07-25 09:59:11.087931] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.163 [2024-07-25 09:59:11.087957] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.163 [2024-07-25 09:59:11.095956] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.163 [2024-07-25 09:59:11.095981] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.163 [2024-07-25 09:59:11.103976] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.163 [2024-07-25 09:59:11.104001] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.163 [2024-07-25 09:59:11.112001] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.163 [2024-07-25 09:59:11.112025] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.163 [2024-07-25 09:59:11.120027] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.163 [2024-07-25 09:59:11.120053] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.163 [2024-07-25 09:59:11.128049] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.163 [2024-07-25 09:59:11.128076] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.163 [2024-07-25 
09:59:11.136075] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.163 [2024-07-25 09:59:11.136102] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.163 [2024-07-25 09:59:11.144093] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.163 [2024-07-25 09:59:11.144118] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.163 [2024-07-25 09:59:11.152118] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.163 [2024-07-25 09:59:11.152144] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.163 [2024-07-25 09:59:11.160142] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.163 [2024-07-25 09:59:11.160167] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.163 [2024-07-25 09:59:11.168166] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.163 [2024-07-25 09:59:11.168193] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.163 [2024-07-25 09:59:11.176186] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.163 [2024-07-25 09:59:11.176211] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.163 [2024-07-25 09:59:11.184215] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.163 [2024-07-25 09:59:11.184245] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.163 Running I/O for 5 seconds... 00:10:26.163 [2024-07-25 09:59:11.192234] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.163 [2024-07-25 09:59:11.192259] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.163 [2024-07-25 09:59:11.207538] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.163 [2024-07-25 09:59:11.207570] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.163 [2024-07-25 09:59:11.218922] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.163 [2024-07-25 09:59:11.218954] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.163 [2024-07-25 09:59:11.232627] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.163 [2024-07-25 09:59:11.232665] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.163 [2024-07-25 09:59:11.243749] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.163 [2024-07-25 09:59:11.243780] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.163 [2024-07-25 09:59:11.255680] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.163 [2024-07-25 09:59:11.255711] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.163 [2024-07-25 09:59:11.268102] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.163 [2024-07-25 09:59:11.268133] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.163 [2024-07-25 09:59:11.280031] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:10:26.163 [2024-07-25 09:59:11.280062] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.163 [2024-07-25 09:59:11.292030] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.163 [2024-07-25 09:59:11.292060] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.163 [2024-07-25 09:59:11.305524] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.163 [2024-07-25 09:59:11.305555] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.163 [2024-07-25 09:59:11.316675] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.163 [2024-07-25 09:59:11.316707] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.163 [2024-07-25 09:59:11.327990] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.163 [2024-07-25 09:59:11.328021] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.422 [2024-07-25 09:59:11.341287] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.422 [2024-07-25 09:59:11.341318] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.422 [2024-07-25 09:59:11.352551] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.422 [2024-07-25 09:59:11.352581] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.422 [2024-07-25 09:59:11.364265] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.422 [2024-07-25 09:59:11.364296] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.422 [2024-07-25 09:59:11.375756] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.422 [2024-07-25 09:59:11.375787] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.422 [2024-07-25 09:59:11.387564] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.422 [2024-07-25 09:59:11.387595] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.422 [2024-07-25 09:59:11.398937] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.422 [2024-07-25 09:59:11.398968] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.422 [2024-07-25 09:59:11.412738] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.422 [2024-07-25 09:59:11.412769] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.422 [2024-07-25 09:59:11.423738] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.422 [2024-07-25 09:59:11.423770] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.422 [2024-07-25 09:59:11.435481] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.422 [2024-07-25 09:59:11.435511] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.422 [2024-07-25 09:59:11.446662] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.422 [2024-07-25 09:59:11.446692] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.422 [2024-07-25 09:59:11.457883] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.422 [2024-07-25 09:59:11.457922] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.422 [2024-07-25 09:59:11.469561] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.422 [2024-07-25 09:59:11.469592] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.422 [2024-07-25 09:59:11.480868] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.422 [2024-07-25 09:59:11.480908] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.422 [2024-07-25 09:59:11.492543] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.422 [2024-07-25 09:59:11.492573] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.422 [2024-07-25 09:59:11.503993] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.422 [2024-07-25 09:59:11.504024] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.422 [2024-07-25 09:59:11.515439] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.422 [2024-07-25 09:59:11.515469] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.422 [2024-07-25 09:59:11.526998] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.422 [2024-07-25 09:59:11.527029] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.422 [2024-07-25 09:59:11.538374] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.422 [2024-07-25 09:59:11.538405] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.422 [2024-07-25 09:59:11.549980] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.422 [2024-07-25 09:59:11.550011] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.422 [2024-07-25 09:59:11.561171] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.422 [2024-07-25 09:59:11.561201] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.422 [2024-07-25 09:59:11.572740] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.422 [2024-07-25 09:59:11.572771] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.422 [2024-07-25 09:59:11.584401] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.422 [2024-07-25 09:59:11.584439] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.680 [2024-07-25 09:59:11.597623] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.680 [2024-07-25 09:59:11.597654] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.680 [2024-07-25 09:59:11.608443] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.680 [2024-07-25 09:59:11.608474] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.680 [2024-07-25 09:59:11.620242] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.680 [2024-07-25 09:59:11.620273] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.680 [2024-07-25 09:59:11.631656] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.680 [2024-07-25 09:59:11.631697] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.680 [2024-07-25 09:59:11.645414] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.680 [2024-07-25 09:59:11.645464] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.680 [2024-07-25 09:59:11.656399] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.680 [2024-07-25 09:59:11.656449] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.680 [2024-07-25 09:59:11.667651] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.680 [2024-07-25 09:59:11.667690] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.680 [2024-07-25 09:59:11.680702] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.680 [2024-07-25 09:59:11.680741] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.680 [2024-07-25 09:59:11.691142] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.680 [2024-07-25 09:59:11.691173] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.680 [2024-07-25 09:59:11.702792] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.680 [2024-07-25 09:59:11.702822] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.680 [2024-07-25 09:59:11.714706] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.680 [2024-07-25 09:59:11.714737] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.680 [2024-07-25 09:59:11.726878] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.680 [2024-07-25 09:59:11.726910] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.680 [2024-07-25 09:59:11.738727] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.680 [2024-07-25 09:59:11.738758] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.680 [2024-07-25 09:59:11.750368] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.680 [2024-07-25 09:59:11.750399] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.680 [2024-07-25 09:59:11.761756] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.680 [2024-07-25 09:59:11.761786] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.680 [2024-07-25 09:59:11.773124] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.680 [2024-07-25 09:59:11.773155] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.680 [2024-07-25 09:59:11.784485] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.680 [2024-07-25 09:59:11.784517] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.680 [2024-07-25 09:59:11.795986] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.680 [2024-07-25 09:59:11.796017] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.680 [2024-07-25 09:59:11.807720] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.681 [2024-07-25 09:59:11.807751] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.681 [2024-07-25 09:59:11.819181] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.681 [2024-07-25 09:59:11.819211] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.681 [2024-07-25 09:59:11.830993] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.681 [2024-07-25 09:59:11.831023] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.681 [2024-07-25 09:59:11.842558] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.681 [2024-07-25 09:59:11.842589] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.938 [2024-07-25 09:59:11.854220] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.938 [2024-07-25 09:59:11.854251] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.938 [2024-07-25 09:59:11.867834] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.938 [2024-07-25 09:59:11.867865] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.938 [2024-07-25 09:59:11.878982] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.938 [2024-07-25 09:59:11.879013] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.938 [2024-07-25 09:59:11.890272] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.938 [2024-07-25 09:59:11.890303] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.938 [2024-07-25 09:59:11.901967] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.938 [2024-07-25 09:59:11.902012] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.938 [2024-07-25 09:59:11.913399] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.938 [2024-07-25 09:59:11.913439] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.938 [2024-07-25 09:59:11.924905] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.939 [2024-07-25 09:59:11.924936] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.939 [2024-07-25 09:59:11.936471] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.939 [2024-07-25 09:59:11.936501] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.939 [2024-07-25 09:59:11.948598] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.939 [2024-07-25 09:59:11.948629] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.939 [2024-07-25 09:59:11.960236] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.939 [2024-07-25 09:59:11.960267] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.939 [2024-07-25 09:59:11.971737] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.939 [2024-07-25 09:59:11.971768] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.939 [2024-07-25 09:59:11.984938] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.939 [2024-07-25 09:59:11.984969] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.939 [2024-07-25 09:59:11.995738] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.939 [2024-07-25 09:59:11.995770] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.939 [2024-07-25 09:59:12.007237] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.939 [2024-07-25 09:59:12.007268] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.939 [2024-07-25 09:59:12.018705] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.939 [2024-07-25 09:59:12.018736] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.939 [2024-07-25 09:59:12.030157] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.939 [2024-07-25 09:59:12.030187] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.939 [2024-07-25 09:59:12.042057] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.939 [2024-07-25 09:59:12.042087] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.939 [2024-07-25 09:59:12.053712] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.939 [2024-07-25 09:59:12.053742] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.939 [2024-07-25 09:59:12.065290] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.939 [2024-07-25 09:59:12.065320] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.939 [2024-07-25 09:59:12.076852] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.939 [2024-07-25 09:59:12.076882] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.939 [2024-07-25 09:59:12.090440] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.939 [2024-07-25 09:59:12.090470] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:26.939 [2024-07-25 09:59:12.101843] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:26.939 [2024-07-25 09:59:12.101873] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.200 [2024-07-25 09:59:12.113223] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.200 [2024-07-25 09:59:12.113254] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.200 [2024-07-25 09:59:12.124921] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.200 [2024-07-25 09:59:12.124951] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.200 [2024-07-25 09:59:12.136738] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.200 [2024-07-25 09:59:12.136768] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.200 [2024-07-25 09:59:12.149067] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.200 [2024-07-25 09:59:12.149098] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.200 [2024-07-25 09:59:12.161343] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.200 [2024-07-25 09:59:12.161373] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.200 [2024-07-25 09:59:12.173184] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.200 [2024-07-25 09:59:12.173215] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.200 [2024-07-25 09:59:12.184837] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.200 [2024-07-25 09:59:12.184869] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.200 [2024-07-25 09:59:12.196826] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.200 [2024-07-25 09:59:12.196859] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.200 [2024-07-25 09:59:12.208610] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.200 [2024-07-25 09:59:12.208642] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.200 [2024-07-25 09:59:12.220108] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.200 [2024-07-25 09:59:12.220139] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.200 [2024-07-25 09:59:12.231638] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.200 [2024-07-25 09:59:12.231668] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.200 [2024-07-25 09:59:12.243076] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.200 [2024-07-25 09:59:12.243107] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.200 [2024-07-25 09:59:12.254556] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.200 [2024-07-25 09:59:12.254587] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.200 [2024-07-25 09:59:12.266131] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.200 [2024-07-25 09:59:12.266162] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.200 [2024-07-25 09:59:12.277344] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.200 [2024-07-25 09:59:12.277375] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.200 [2024-07-25 09:59:12.289047] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.200 [2024-07-25 09:59:12.289078] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.200 [2024-07-25 09:59:12.302601] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.200 [2024-07-25 09:59:12.302634] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.200 [2024-07-25 09:59:12.314021] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.200 [2024-07-25 09:59:12.314052] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.200 [2024-07-25 09:59:12.325301] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.201 [2024-07-25 09:59:12.325331] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.201 [2024-07-25 09:59:12.337205] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.201 [2024-07-25 09:59:12.337236] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.201 [2024-07-25 09:59:12.348760] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.201 [2024-07-25 09:59:12.348791] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.201 [2024-07-25 09:59:12.360426] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.201 [2024-07-25 09:59:12.360465] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.474 [2024-07-25 09:59:12.372381] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.474 [2024-07-25 09:59:12.372411] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.474 [2024-07-25 09:59:12.384422] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.474 [2024-07-25 09:59:12.384461] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.474 [2024-07-25 09:59:12.398368] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.474 [2024-07-25 09:59:12.398399] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.474 [2024-07-25 09:59:12.409982] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.474 [2024-07-25 09:59:12.410012] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.474 [2024-07-25 09:59:12.423509] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.474 [2024-07-25 09:59:12.423539] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.474 [2024-07-25 09:59:12.433464] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.474 [2024-07-25 09:59:12.433495] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.474 [2024-07-25 09:59:12.445402] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.474 [2024-07-25 09:59:12.445443] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.474 [2024-07-25 09:59:12.456780] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.474 [2024-07-25 09:59:12.456811] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.474 [2024-07-25 09:59:12.468191] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.474 [2024-07-25 09:59:12.468221] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:27.474 [2024-07-25 09:59:12.479960] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:27.474 [2024-07-25 09:59:12.479990] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same ERROR pair from subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext ("Requested NSID 1 already in use") followed by nvmf_rpc.c:1553:nvmf_rpc_ns_paused ("Unable to add namespace") repeats verbatim at roughly 11-14 ms intervals from 09:59:12.479 through 09:59:16.050 (elapsed markers 00:10:27.474 through 00:10:31.089); several hundred identical repetitions elided ...]
00:10:31.089 [2024-07-25 09:59:16.050267] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.089 [2024-07-25 09:59:16.050298]
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.089 [2024-07-25 09:59:16.061874] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.089 [2024-07-25 09:59:16.061904] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.089 [2024-07-25 09:59:16.073836] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.089 [2024-07-25 09:59:16.073867] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.089 [2024-07-25 09:59:16.087243] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.089 [2024-07-25 09:59:16.087283] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.089 [2024-07-25 09:59:16.097914] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.089 [2024-07-25 09:59:16.097945] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.089 [2024-07-25 09:59:16.109805] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.089 [2024-07-25 09:59:16.109835] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.089 [2024-07-25 09:59:16.121796] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.089 [2024-07-25 09:59:16.121826] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.089 [2024-07-25 09:59:16.133527] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.089 [2024-07-25 09:59:16.133558] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.089 [2024-07-25 09:59:16.145405] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.089 [2024-07-25 09:59:16.145448] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.089 [2024-07-25 09:59:16.157118] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.089 [2024-07-25 09:59:16.157148] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.089 [2024-07-25 09:59:16.169033] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.089 [2024-07-25 09:59:16.169064] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.089 [2024-07-25 09:59:16.180221] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.089 [2024-07-25 09:59:16.180252] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.089 [2024-07-25 09:59:16.191880] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.089 [2024-07-25 09:59:16.191911] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.089 [2024-07-25 09:59:16.203628] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.089 [2024-07-25 09:59:16.203660] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.089 [2024-07-25 09:59:16.213842] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:31.089 [2024-07-25 09:59:16.213872] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:31.089 00:10:31.089 Latency(us) 00:10:31.089 Device Information : runtime(s) 
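The flood of rejections above is the fault-injection half of the zcopy test: nvmf_subsystem_add_ns is called in a tight loop with an explicit NSID that a live namespace already owns, so every attempt is refused. A rough sketch of provoking the same pair of errors by hand against a running target with SPDK's rpc.py (the bdev names mal0/mal1 are placeholders, not the ones this run used):

    # claim NSID 1 once, then try to claim it again with a different bdev
    scripts/rpc.py bdev_malloc_create 64 512 -b mal0
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 mal0 -n 1
    scripts/rpc.py bdev_malloc_create 64 512 -b mal1
    # expected failure: "Requested NSID 1 already in use" / "Unable to add namespace"
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 mal1 -n 1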
00:10:31.089
00:10:31.089 Latency(us)
00:10:31.089 Device Information                                                   : runtime(s)       IOPS      MiB/s    Fail/s     TO/s    Average        min        max
00:10:31.089 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:10:31.089 Nvme1n1                                                              :       5.01   10934.39      85.42       0.00      0.00   11690.41    5291.43   23010.42
00:10:31.089 ===================================================================================================================
00:10:31.089 Total                                                                :              10934.39      85.42       0.00      0.00   11690.41    5291.43   23010.42
00:10:31.089 [2024-07-25 09:59:16.219081] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:31.089 [2024-07-25 09:59:16.219109] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the pair keeps repeating at a slower ~8 ms cadence, from 09:59:16.227103 through 09:59:16.483846, as the retry loop winds down ...]
00:10:31.348 [2024-07-25 09:59:16.491822] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:31.348 [2024-07-25 09:59:16.491847] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:31.348 [2024-07-25 09:59:16.499844] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:31.348 [2024-07-25 09:59:16.499869] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:31.348 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (361946) - No such process
00:10:31.348 09:59:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 361946
00:10:31.348 09:59:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:10:31.348 09:59:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:31.348 09:59:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:10:31.348 09:59:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:31.348 09:59:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:10:31.348 09:59:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:31.348 09:59:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:10:31.607 delay0
00:10:31.607 09:59:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:31.607 09:59:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
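The test now fronts malloc0 with a delay bdev, so the abort run that follows has slow I/O in flight to cancel. A sketch of the same two RPCs issued by hand, mirroring the values in the trace (assumes the same running target and the default RPC socket):

    # wrap malloc0 with ~1s average and p99 latency on reads and writes (values in microseconds)
    scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    # re-publish NSID 1, now backed by the delayed bdev
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1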
00:10:31.607 09:59:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:31.607 09:59:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:10:31.607 09:59:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:31.607 09:59:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'
00:10:31.607 EAL: No free 2048 kB hugepages reported on node 1
00:10:31.607 [2024-07-25 09:59:16.662613] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral
00:10:39.715 Initializing NVMe Controllers
00:10:39.715 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:10:39.715 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:10:39.715 Initialization complete. Launching workers.
00:10:39.715 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 271, failed: 15043
00:10:39.715 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 15228, failed to submit 86
00:10:39.715          success 15096, unsuccess 132, failed 0
00:10:39.716 09:59:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT
00:10:39.716 09:59:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini
00:10:39.716 09:59:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup
00:10:39.716 09:59:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # sync
00:10:39.716 09:59:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:10:39.716 09:59:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e
00:10:39.716 09:59:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20}
00:10:39.716 09:59:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:10:39.716 rmmod nvme_tcp
00:10:39.716 rmmod nvme_fabrics
00:10:39.716 rmmod nvme_keyring
00:10:39.716 09:59:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:10:39.716 09:59:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e
00:10:39.716 09:59:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0
00:10:39.716 09:59:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 360602 ']'
00:10:39.716 09:59:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 360602
00:10:39.716 09:59:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@950 -- # '[' -z 360602 ']'
00:10:39.716 09:59:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # kill -0 360602
00:10:39.716 09:59:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@955 -- # uname
00:10:39.716 09:59:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:10:39.716 09:59:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 360602
00:10:39.716 09:59:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:10:39.716 09:59:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:10:39.716 09:59:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@968 -- # echo 'killing process with pid 360602'
00:10:39.716 killing process with pid 360602
00:10:39.716 09:59:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@969 -- # kill 360602
00:10:39.716 09:59:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@974 -- # wait 360602
00:10:39.716 09:59:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:10:39.716 09:59:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:10:39.716 09:59:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:10:39.716 09:59:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:10:39.716 09:59:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns
00:10:39.716 09:59:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:10:39.716 09:59:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:10:39.716 09:59:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:10:41.093 09:59:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:10:41.093
00:10:41.093 real    0m29.660s
00:10:41.093 user    0m41.895s
00:10:41.093 sys     0m10.553s
00:10:41.093 09:59:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1126 -- # xtrace_disable
00:10:41.093 09:59:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:10:41.093 ************************************
00:10:41.093 END TEST nvmf_zcopy
00:10:41.093 ************************************
00:10:41.093 09:59:26 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp
00:10:41.093 09:59:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:10:41.093 09:59:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable
00:10:41.093 09:59:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:10:41.093 ************************************
00:10:41.093 START TEST nvmf_nmic
00:10:41.093 ************************************
00:10:41.093 09:59:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp
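run_test only wraps each sub-test in timing and xtrace bookkeeping; the underlying invocation is the script path and arguments shown in the trace above. A sketch of running this one sub-test directly from a local SPDK checkout (the checkout path is a placeholder, and root privileges plus a box prepared for the nvmf tests are assumed):

    cd /path/to/spdk    # hypothetical checkout location
    sudo test/nvmf/target/nmic.sh --transport=tcp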
00:10:41.352 * Looking for test storage...
* Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
09:59:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
[... nvmf/common.sh sets the usual test defaults: NVMF_PORT=4420, NVMF_SECOND_PORT=4421, NVMF_THIRD_PORT=4422, NVMF_IP_PREFIX=192.168.100, NVMF_IP_LEAST_ADDR=8, NVMF_TCP_IP_ADDRESS=127.0.0.1, NVMF_SERIAL=SPDKISFASTANDAWESOME, NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 (from nvme gen-hostnqn), NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02, NET_TYPE=phy, NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn; scripts/common.sh then sources /etc/opt/spdk-pkgdep/paths/export.sh, which rebuilds and re-exports PATH four times, each pass prepending the same /opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin triplet onto a value that already repeats it seven times, before nvmf/common.sh assembles the NVMF_APP argument array (-i "$NVMF_APP_SHM_ID" -e 0xFFFF) and sets have_pci_nics=0 ...]
09:59:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64
09:59:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512
09:59:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit
09:59:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']'
09:59:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT
09:59:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs
09:59:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no
09:59:26 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns
[... gather_supported_nvmf_pci_devs walks the Intel e810/x722 and Mellanox PCI ID tables (bash trace elided) and matches both ports of an E810 NIC ...]
09:59:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)'
Found 0000:84:00.0 (0x8086 - 0x159b)
09:59:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)'
Found 0000:84:00.1 (0x8086 - 0x159b)
09:59:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0'
Found net devices under 0000:84:00.0: cvl_0_0
09:59:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1'
Found net devices under 0000:84:00.1: cvl_0_1
09:59:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # is_hw=yes
[... nvmf_tcp_init assigns NVMF_INITIATOR_IP=10.0.0.1 and NVMF_FIRST_TARGET_IP=10.0.0.2, picks cvl_0_0 as the target interface and cvl_0_1 as the initiator interface, flushes both addresses, and isolates the target port in a fresh network namespace ...]
09:59:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
09:59:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
09:59:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
09:59:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
09:59:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
09:59:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
09:59:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
09:59:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
09:59:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:10:43.889 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:10:43.889 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.126 ms
00:10:43.889
00:10:43.889 --- 10.0.0.2 ping statistics ---
00:10:43.889 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:10:43.889 rtt min/avg/max/mdev = 0.126/0.126/0.126/0.000 ms
09:59:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:10:43.889 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:10:43.889 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.094 ms
00:10:43.889
00:10:43.889 --- 10.0.0.1 ping statistics ---
00:10:43.889 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:10:43.889 rtt min/avg/max/mdev = 0.094/0.094/0.094/0.000 ms
09:59:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
09:59:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # return 0
[... transport option checks elided ...]
09:59:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
09:59:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp
09:59:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF
09:59:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
09:59:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=365482
09:59:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:43.889 09:59:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 365482 00:10:43.889 09:59:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@831 -- # '[' -z 365482 ']' 00:10:43.889 09:59:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:43.889 09:59:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:43.889 09:59:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:43.889 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:43.889 09:59:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:43.889 09:59:28 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:43.889 [2024-07-25 09:59:28.920358] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:10:43.890 [2024-07-25 09:59:28.920461] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:43.890 EAL: No free 2048 kB hugepages reported on node 1 00:10:43.890 [2024-07-25 09:59:28.999533] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:44.147 [2024-07-25 09:59:29.127322] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:44.147 [2024-07-25 09:59:29.127388] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:44.147 [2024-07-25 09:59:29.127404] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:44.147 [2024-07-25 09:59:29.127418] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:44.147 [2024-07-25 09:59:29.127438] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
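nvmfappstart amounts to launching the target inside the namespace set up above and then polling its RPC socket until it answers. A hand-run sketch under the same layout (paths assume a built SPDK tree; rpc_get_methods is used only as a cheap liveness probe):

    sudo ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    # block until the RPC socket at /var/tmp/spdk.sock answers, up to 30s
    sudo ./scripts/rpc.py -t 30 rpc_get_methods > /dev/null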
00:10:44.147 [2024-07-25 09:59:29.127504] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:44.147 [2024-07-25 09:59:29.127536] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:44.147 [2024-07-25 09:59:29.127590] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:44.147 [2024-07-25 09:59:29.127593] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:44.147 09:59:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:44.147 09:59:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # return 0 00:10:44.147 09:59:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:44.147 09:59:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:44.147 09:59:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:44.147 09:59:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:44.147 09:59:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:44.147 09:59:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.147 09:59:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:44.147 [2024-07-25 09:59:29.307256] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:44.405 09:59:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.405 09:59:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:44.405 09:59:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.405 09:59:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:44.405 Malloc0 00:10:44.405 09:59:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.405 09:59:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:44.405 09:59:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.405 09:59:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:44.405 09:59:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.405 09:59:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:44.405 09:59:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.405 09:59:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:44.405 09:59:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:44.406 09:59:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:44.406 09:59:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:44.406 09:59:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:44.406 [2024-07-25 09:59:29.361077] 
tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:10:44.406 09:59:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:44.406 09:59:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems'
test case1: single bdev can't be used in multiple subsystems
00:10:44.406 09:59:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
00:10:44.406 09:59:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:44.406 09:59:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:10:44.406 09:59:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:44.406 09:59:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
00:10:44.406 09:59:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:44.406 09:59:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:10:44.406 09:59:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:44.406 09:59:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0
00:10:44.406 09:59:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0
00:10:44.406 09:59:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:44.406 09:59:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:10:44.406 [2024-07-25 09:59:29.384914] bdev.c:8111:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target
00:10:44.406 [2024-07-25 09:59:29.384943] subsystem.c:2087:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1
00:10:44.406 [2024-07-25 09:59:29.384958] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:44.406 request:
00:10:44.406 {
00:10:44.406   "nqn": "nqn.2016-06.io.spdk:cnode2",
00:10:44.406   "namespace": {
00:10:44.406     "bdev_name": "Malloc0",
00:10:44.406     "no_auto_visible": false
00:10:44.406   },
00:10:44.406   "method": "nvmf_subsystem_add_ns",
00:10:44.406   "req_id": 1
00:10:44.406 }
00:10:44.406 Got JSON-RPC error response
00:10:44.406 response:
00:10:44.406 {
00:10:44.406   "code": -32602,
00:10:44.406   "message": "Invalid parameters"
00:10:44.406 }
00:10:44.406 09:59:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]]
00:10:44.406 09:59:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1
00:10:44.406 09:59:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']'
00:10:44.406 09:59:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.'
 Adding namespace failed - expected result.
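The failure is the bdev claim check doing its job: cnode1 already opened Malloc0 with an exclusive_write claim, so the second subsystem cannot open it and the RPC returns -32602. A sketch of reproducing just this case by hand against the running target, using the same commands the trace shows:

    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
    # Malloc0 is already claimed by cnode1, so this add must fail
    if ! scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0; then
        echo ' Adding namespace failed - expected result.'
    fi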
00:10:44.406 09:59:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths'
00:10:44.406 test case2: host connect to nvmf target in multiple paths
00:10:44.406 09:59:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:10:44.406 09:59:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable
00:10:44.406 09:59:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:10:44.406 [2024-07-25 09:59:29.393020] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
00:10:44.406 09:59:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:10:44.406 09:59:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:10:44.971 09:59:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421
00:10:45.536 09:59:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME
00:10:45.536 09:59:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0
00:10:45.536 09:59:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0
00:10:45.536 09:59:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]]
00:10:45.536 09:59:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2
00:10:48.099 09:59:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 ))
00:10:48.099 09:59:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL
00:10:48.099 09:59:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME
00:10:48.099 09:59:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1
00:10:48.099 09:59:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter ))
00:10:48.099 09:59:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0
00:10:48.099 09:59:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v
00:10:48.099 [global]
00:10:48.099 thread=1
00:10:48.099 invalidate=1
00:10:48.099 rw=write
00:10:48.099 time_based=1
00:10:48.099 runtime=1
00:10:48.099 ioengine=libaio
00:10:48.099 direct=1
00:10:48.099 bs=4096
00:10:48.099 iodepth=1
00:10:48.099 norandommap=0
00:10:48.099 numjobs=1
00:10:48.099
00:10:48.099 verify_dump=1
00:10:48.099 verify_backlog=512
00:10:48.099 verify_state_save=0
00:10:48.099 do_verify=1
00:10:48.099 verify=crc32c-intel
00:10:48.099 [job0]
00:10:48.099 filename=/dev/nvme0n1
00:10:48.099 Could not set queue depth (nvme0n1)
00:10:48.099 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:10:48.099 fio-3.35
00:10:48.099 Starting 1 thread
00:10:49.029
00:10:49.029 job0: (groupid=0, jobs=1): err= 0: pid=366065: Thu Jul 25 09:59:34 2024
00:10:49.029 read: IOPS=21, BW=86.6KiB/s (88.7kB/s)(88.0KiB/1016msec)
00:10:49.029 slat (nsec): min=9134, max=43871, avg=28065.27, stdev=8701.29
00:10:49.029 clat (usec): min=40769, max=41046, avg=40944.70, stdev=57.56
00:10:49.029 lat (usec): min=40778, max=41063, avg=40972.76, stdev=59.72
00:10:49.029 clat percentiles (usec):
00:10:49.029 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157],
00:10:49.029 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157],
00:10:49.029 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157],
00:10:49.029 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157],
00:10:49.029 | 99.99th=[41157]
00:10:49.029 write: IOPS=503, BW=2016KiB/s (2064kB/s)(2048KiB/1016msec); 0 zone resets
00:10:49.029 slat (nsec): min=8736, max=56642, avg=11886.49, stdev=5080.44
00:10:49.029 clat (usec): min=165, max=425, avg=208.32, stdev=40.99
00:10:49.029 lat (usec): min=175, max=456, avg=220.21, stdev=43.38
00:10:49.029 clat percentiles (usec):
00:10:49.029 | 1.00th=[ 169], 5.00th=[ 174], 10.00th=[ 176], 20.00th=[ 180],
00:10:49.029 | 30.00th=[ 186], 40.00th=[ 190], 50.00th=[ 196], 60.00th=[ 202],
00:10:49.029 | 70.00th=[ 212], 80.00th=[ 227], 90.00th=[ 251], 95.00th=[ 285],
00:10:49.029 | 99.00th=[ 392], 99.50th=[ 404], 99.90th=[ 424], 99.95th=[ 424],
00:10:49.029 | 99.99th=[ 424]
00:10:49.029 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1
00:10:49.029 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1
00:10:49.029 lat (usec) : 250=85.21%, 500=10.67%
00:10:49.029 lat (msec) : 50=4.12%
00:10:49.029 cpu : usr=0.30%, sys=0.69%, ctx=535, majf=0, minf=2
00:10:49.029 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:10:49.029 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:10:49.029 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:10:49.030 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0
00:10:49.030 latency : target=0, window=0, percentile=100.00%, depth=1
00:10:49.030
00:10:49.030 Run status group 0 (all jobs):
00:10:49.030 READ: bw=86.6KiB/s (88.7kB/s), 86.6KiB/s-86.6KiB/s (88.7kB/s-88.7kB/s), io=88.0KiB (90.1kB), run=1016-1016msec
00:10:49.030 WRITE: bw=2016KiB/s (2064kB/s), 2016KiB/s-2016KiB/s (2064kB/s-2064kB/s), io=2048KiB (2097kB), run=1016-1016msec
00:10:49.030
00:10:49.030 Disk stats (read/write):
00:10:49.030 nvme0n1: ios=69/512, merge=0/0, ticks=904/107, in_queue=1011, util=96.49%
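The summary numbers above are internally consistent and worth a quick sanity check: with bs=4096 and iodepth=1, bandwidth is simply IOPS times block size over the run.

    # write side: 512 IOs x 4 KiB = 2048 KiB over the 1016 ms run
    #             2048 KiB / 1.016 s ~= 2016 KiB/s, matching the WRITE line
    # read side:  22 IOs in 1016 ms ~= 21 IOPS x 4 KiB ~= 86.6 KiB/s, matching READ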
00:10:49.030 09:59:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:10:49.287 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s)
00:10:49.287 09:59:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:10:49.287 09:59:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0
00:10:49.287 09:59:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL
00:10:49.287 09:59:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME
00:10:49.287 09:59:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL
00:10:49.287 09:59:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME
00:10:49.287 09:59:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0
00:10:49.287 09:59:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT
00:10:49.287 09:59:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini
00:10:49.287 09:59:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@488 -- # nvmfcleanup
00:10:49.287 09:59:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # sync
00:10:49.287 09:59:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:10:49.287 09:59:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@120 -- # set +e
00:10:49.287 09:59:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20}
00:10:49.287 09:59:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:10:49.287 rmmod nvme_tcp
00:10:49.287 rmmod nvme_fabrics
00:10:49.287 rmmod nvme_keyring
00:10:49.287 09:59:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:10:49.287 09:59:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set -e
00:10:49.287 09:59:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # return 0
00:10:49.287 09:59:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 365482 ']'
00:10:49.287 09:59:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 365482
00:10:49.287 09:59:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@950 -- # '[' -z 365482 ']'
00:10:49.287 09:59:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # kill -0 365482
00:10:49.287 09:59:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@955 -- # uname
00:10:49.287 09:59:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:10:49.287 09:59:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 365482
00:10:49.287 09:59:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:10:49.287 09:59:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:10:49.287 09:59:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@968 -- # echo 'killing process with pid 365482'
00:10:49.287 killing process with pid 365482
00:10:49.287 09:59:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@969 -- # kill 365482
00:10:49.287 09:59:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@974 -- # wait 365482
00:10:49.852 09:59:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:10:49.852 09:59:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:10:49.852 09:59:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:10:49.852 09:59:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:10:49.852 09:59:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns
00:10:49.852 09:59:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:10:49.852 09:59:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:10:49.852 09:59:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:10:51.754 09:59:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:10:51.754
00:10:51.754 real 0m10.544s
00:10:51.754 user 0m22.928s
00:10:51.754 sys 0m2.755s
00:10:51.754 09:59:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1126 -- # xtrace_disable
00:10:51.754 09:59:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x
00:10:51.754 ************************************
00:10:51.754 END TEST nvmf_nmic
00:10:51.754 ************************************
00:10:51.754 09:59:36 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp
00:10:51.755 09:59:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:10:51.755 09:59:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable
00:10:51.755 09:59:36 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:10:51.755 ************************************
00:10:51.755 START TEST nvmf_fio_target
00:10:51.755 ************************************
00:10:51.755 09:59:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp
"--hostid=$NVME_HOSTID") 00:10:52.013 09:59:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:52.013 09:59:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:52.013 09:59:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:52.013 09:59:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:52.013 09:59:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:52.013 09:59:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:52.013 09:59:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:52.013 09:59:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:52.013 09:59:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:52.014 09:59:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:52.014 09:59:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:10:52.014 09:59:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:52.014 09:59:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:10:52.014 09:59:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:52.014 09:59:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:52.014 09:59:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:52.014 09:59:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:52.014 09:59:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:52.014 09:59:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:52.014 09:59:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:52.014 09:59:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:52.014 09:59:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:52.014 09:59:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:52.014 09:59:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:52.014 09:59:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:10:52.014 09:59:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:52.014 09:59:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:52.014 09:59:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:52.014 09:59:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:52.014 09:59:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:52.014 09:59:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:52.014 09:59:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:52.014 09:59:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:52.014 09:59:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:52.014 09:59:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:52.014 09:59:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@285 -- # xtrace_disable 00:10:52.014 09:59:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:54.549 09:59:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@289 -- # 
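Because NET_TYPE=phy, nvmftestinit does not build virtual interfaces here; gather_supported_nvmf_pci_devs (traced next) instead walks the PCI bus for NIC models on the harness's allow-list, matching the two Intel E810 functions (device ID 0x159b) that become cvl_0_0/cvl_0_1. A rough manual equivalent of that scan, assuming lspci is available on the host:

    lspci -d 8086:159b   # Intel E810-family functions (ice driver), as matched below
    lspci -d 15b3:       # any Mellanox function; the mlx device-ID list below serves the same purpose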
00:10:54.549 09:59:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:10:54.549 09:59:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # pci_devs=()
00:10:54.549 09:59:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # local -a pci_devs
00:10:54.549 09:59:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@292 -- # pci_net_devs=()
00:10:54.549 09:59:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@292 -- # local -a pci_net_devs
00:10:54.549 09:59:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # pci_drivers=()
00:10:54.549 09:59:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # local -A pci_drivers
00:10:54.549 09:59:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@295 -- # net_devs=()
00:10:54.549 09:59:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@295 -- # local -ga net_devs
00:10:54.549 09:59:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@296 -- # e810=()
00:10:54.549 09:59:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@296 -- # local -ga e810
00:10:54.549 09:59:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # x722=()
00:10:54.549 09:59:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # local -ga x722
00:10:54.549 09:59:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # mlx=()
00:10:54.549 09:59:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # local -ga mlx
00:10:54.549 09:59:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:10:54.549 09:59:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:10:54.549 09:59:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:10:54.549 09:59:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:10:54.549 09:59:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:10:54.549 09:59:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:10:54.549 09:59:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:10:54.549 09:59:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:10:54.549 09:59:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:10:54.549 09:59:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:10:54.549 09:59:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:10:54.549 09:59:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}")
00:10:54.549 09:59:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]]
00:10:54.549 09:59:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]]
00:10:54.549 09:59:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]]
00:10:54.549 09:59:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}")
00:10:54.549 09:59:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@335 -- # (( 2 == 0 ))
00:10:54.549 09:59:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:10:54.549 09:59:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)'
00:10:54.549 Found 0000:84:00.0 (0x8086 - 0x159b)
00:10:54.549 09:59:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:10:54.549 09:59:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:10:54.549 09:59:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:10:54.549 09:59:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:10:54.549 09:59:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:10:54.549 09:59:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:10:54.549 09:59:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)'
00:10:54.550 Found 0000:84:00.1 (0x8086 - 0x159b)
00:10:54.550 09:59:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:10:54.550 09:59:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:10:54.550 09:59:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:10:54.550 09:59:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:10:54.550 09:59:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:10:54.550 09:59:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # (( 0 > 0 ))
00:10:54.550 09:59:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]]
00:10:54.550 09:59:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]]
00:10:54.550 09:59:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:10:54.550 09:59:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:10:54.550 09:59:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:10:54.550 09:59:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:10:54.550 09:59:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]]
00:10:54.550 09:59:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:10:54.550 09:59:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:10:54.550 09:59:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0'
00:10:54.550 Found net devices under 0000:84:00.0: cvl_0_0
00:10:54.550 09:59:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:10:54.550 09:59:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:10:54.550 09:59:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:10:54.550 09:59:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:10:54.550 09:59:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:10:54.550 09:59:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]]
00:10:54.550 09:59:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:10:54.550 09:59:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:10:54.550 09:59:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1'
00:10:54.550 Found net devices under 0000:84:00.1: cvl_0_1
00:10:54.550 09:59:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:10:54.550 09:59:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@404 -- # (( 2 == 0 ))
00:10:54.550 09:59:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # is_hw=yes
00:10:54.550 09:59:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ yes == yes ]]
00:10:54.550 09:59:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]]
00:10:54.550 09:59:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # nvmf_tcp_init
00:10:54.550 09:59:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1
00:10:54.550 09:59:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:10:54.550 09:59:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:10:54.550 09:59:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@234 -- # (( 2 > 1 ))
00:10:54.550 09:59:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:10:54.550 09:59:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:10:54.550 09:59:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:10:54.550 09:59:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:10:54.550 09:59:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:10:54.550 09:59:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:10:54.550 09:59:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:10:54.550 09:59:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:10:54.550 09:59:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:10:54.550 09:59:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:10:54.550 09:59:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:10:54.550 09:59:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:10:54.550 09:59:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:10:54.550 09:59:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
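The net effect of the ip commands above is a two-namespace topology on one physical adapter: port cvl_0_0 is moved into namespace cvl_0_0_ns_spdk and addressed as 10.0.0.2 (the target side), while cvl_0_1 stays in the root namespace as 10.0.0.1 (the initiator side). Reconstructed as a standalone sketch (same commands, condensed ordering):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator address, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    ip link set cvl_0_1 up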
00:10:54.550 09:59:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:10:54.550 09:59:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:10:54.550 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:10:54.550 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.259 ms
00:10:54.550
00:10:54.550 --- 10.0.0.2 ping statistics ---
00:10:54.550 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:10:54.550 rtt min/avg/max/mdev = 0.259/0.259/0.259/0.000 ms
00:10:54.550 09:59:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:10:54.550 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:10:54.550 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.161 ms
00:10:54.550
00:10:54.550 --- 10.0.0.1 ping statistics ---
00:10:54.550 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:10:54.550 rtt min/avg/max/mdev = 0.161/0.161/0.161/0.000 ms
00:10:54.550 09:59:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:10:54.550 09:59:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # return 0
00:10:54.550 09:59:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:10:54.550 09:59:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:10:54.550 09:59:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:10:54.550 09:59:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:10:54.550 09:59:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:10:54.550 09:59:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:10:54.550 09:59:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:10:54.550 09:59:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF
00:10:54.550 09:59:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:10:54.550 09:59:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@724 -- # xtrace_disable
00:10:54.550 09:59:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x
00:10:54.550 09:59:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=368222
00:10:54.550 09:59:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:10:54.550 09:59:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 368222
00:10:54.550 09:59:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@831 -- # '[' -z 368222 ']'
00:10:54.550 09:59:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:10:54.550 09:59:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@836 -- # local max_retries=100
00:10:54.550 09:59:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:10:54.550 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:10:54.550 09:59:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # xtrace_disable
00:10:54.550 09:59:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x
00:10:54.809 [2024-07-25 09:59:39.749366] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... [2024-07-25 09:59:39.749473] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:10:54.809 EAL: No free 2048 kB hugepages reported on node 1
00:10:54.809 [2024-07-25 09:59:39.825499] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4
00:10:54.809 [2024-07-25 09:59:39.948039] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:10:54.809 [2024-07-25 09:59:39.948104] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:10:54.809 [2024-07-25 09:59:39.948121] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:10:54.809 [2024-07-25 09:59:39.948135] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:10:54.809 [2024-07-25 09:59:39.948148] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
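Note how the target is launched for the phy case: NVMF_APP was prefixed with the namespace command at nvmf/common.sh@270 above, so nvmf_tgt itself runs inside cvl_0_0_ns_spdk. Assembled from those lines:

    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
    # -m 0xF pins four reactors to cores 0-3 (hence the four "Reactor started"
    # notices that follow); -e 0xFFFF enables the tracepoint groups that
    # app_setup_trace reports above.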
00:10:54.809 [2024-07-25 09:59:39.948234] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:10:54.809 [2024-07-25 09:59:39.948271] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:10:54.809 [2024-07-25 09:59:39.948323] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3
00:10:54.809 [2024-07-25 09:59:39.948326] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:10:55.067 09:59:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:10:55.067 09:59:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # return 0
00:10:55.067 09:59:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:10:55.067 09:59:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@730 -- # xtrace_disable
00:10:55.067 09:59:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x
00:10:55.067 09:59:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:10:55.067 09:59:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
00:10:55.325 [2024-07-25 09:59:40.379919] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:10:55.325 09:59:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:10:55.890 09:59:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 '
00:10:55.890 09:59:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:10:56.455 09:59:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1
00:10:56.455 09:59:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:10:57.020 09:59:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 '
00:10:57.020 09:59:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:10:57.278 09:59:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3
00:10:57.278 09:59:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
00:10:57.536 09:59:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:10:58.101 09:59:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 '
00:10:58.101 09:59:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:10:58.359 09:59:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 '
00:10:58.359 09:59:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:10:58.617 09:59:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6
00:10:58.617 09:59:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'
00:10:59.182 09:59:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
00:10:59.439 09:59:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs
00:10:59.439 09:59:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:10:59.439 09:59:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs
00:10:59.439 09:59:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
00:11:00.004 09:59:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:11:00.261 [2024-07-25 09:59:45.316143] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:11:00.261 09:59:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0
00:11:00.519 09:59:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0
00:11:01.083 09:59:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:11:01.649 09:59:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4
00:11:01.649 09:59:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0
00:11:01.649 09:59:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0
00:11:01.649 09:59:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]]
00:11:01.649 09:59:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4
00:11:01.649 09:59:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2
00:11:03.573 09:59:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 ))
00:11:03.573 09:59:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL
00:11:03.573 09:59:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME
00:11:03.573 09:59:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4
00:11:03.573 09:59:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter ))
00:11:03.573 09:59:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0
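At this point nqn.2016-06.io.spdk:cnode1 exports four namespaces, which is why waitforserial is told to expect 4 devices from a single connect: Malloc0 and Malloc1 directly, plus the two composed bdevs built a few lines earlier:

    # striped (RAID0) over two malloc bdevs, 64 KiB strip size
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
    # concatenation of three malloc bdevs
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'

On the initiator side they surface as /dev/nvme0n1 through /dev/nvme0n4, the filenames the fio job file below targets.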
00:11:03.573 09:59:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v
00:11:03.573 [global]
00:11:03.573 thread=1
00:11:03.573 invalidate=1
00:11:03.573 rw=write
00:11:03.573 time_based=1
00:11:03.573 runtime=1
00:11:03.573 ioengine=libaio
00:11:03.573 direct=1
00:11:03.573 bs=4096
00:11:03.573 iodepth=1
00:11:03.573 norandommap=0
00:11:03.573 numjobs=1
00:11:03.573
00:11:03.573 verify_dump=1
00:11:03.573 verify_backlog=512
00:11:03.573 verify_state_save=0
00:11:03.573 do_verify=1
00:11:03.573 verify=crc32c-intel
00:11:03.573 [job0]
00:11:03.573 filename=/dev/nvme0n1
00:11:03.573 [job1]
00:11:03.573 filename=/dev/nvme0n2
00:11:03.573 [job2]
00:11:03.573 filename=/dev/nvme0n3
00:11:03.573 [job3]
00:11:03.573 filename=/dev/nvme0n4
00:11:03.573 Could not set queue depth (nvme0n1)
00:11:03.573 Could not set queue depth (nvme0n2)
00:11:03.573 Could not set queue depth (nvme0n3)
00:11:03.573 Could not set queue depth (nvme0n4)
00:11:03.831 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:11:03.831 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:11:03.831 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:11:03.831 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:11:03.831 fio-3.35
00:11:03.831 Starting 4 threads
00:11:05.202
00:11:05.202 job0: (groupid=0, jobs=1): err= 0: pid=369435: Thu Jul 25 09:59:50 2024
00:11:05.202 read: IOPS=584, BW=2337KiB/s (2393kB/s)(2384KiB/1020msec)
00:11:05.202 slat (nsec): min=6013, max=22694, avg=9517.98, stdev=2279.34
00:11:05.202 clat (usec): min=260, max=41007, avg=1261.42, stdev=6162.51
00:11:05.202 lat (usec): min=268, max=41020, avg=1270.94, stdev=6163.17
00:11:05.202 clat percentiles (usec):
00:11:05.202 | 1.00th=[ 265], 5.00th=[ 273], 10.00th=[ 277], 20.00th=[ 285],
00:11:05.202 | 30.00th=[ 289], 40.00th=[ 293], 50.00th=[ 302], 60.00th=[ 306],
00:11:05.202 | 70.00th=[ 314], 80.00th=[ 322], 90.00th=[ 343], 95.00th=[ 371],
00:11:05.202 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157],
00:11:05.202 | 99.99th=[41157]
00:11:05.202 write: IOPS=1003, BW=4016KiB/s (4112kB/s)(4096KiB/1020msec); 0 zone resets
00:11:05.202 slat (usec): min=7, max=107, avg=13.62, stdev=10.97
00:11:05.202 clat (usec): min=140, max=1095, avg=237.29, stdev=71.38
00:11:05.202 lat (usec): min=183, max=1108, avg=250.91, stdev=74.14
00:11:05.202 clat percentiles (usec):
00:11:05.202 | 1.00th=[ 180], 5.00th=[ 184], 10.00th=[ 188], 20.00th=[ 194],
00:11:05.202 | 30.00th=[ 200], 40.00th=[ 210], 50.00th=[ 221], 60.00th=[ 233],
00:11:05.202 | 70.00th=[ 245], 80.00th=[ 262], 90.00th=[ 310], 95.00th=[ 343],
00:11:05.202 | 99.00th=[ 465], 99.50th=[ 676], 99.90th=[ 1057], 99.95th=[ 1090],
00:11:05.202 | 99.99th=[ 1090]
00:11:05.202 bw ( KiB/s): min= 4096, max= 4096, per=21.88%, avg=4096.00, stdev= 0.00, samples=2
00:11:05.202 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=2
00:11:05.202 lat (usec) : 250=47.16%, 500=51.17%, 750=0.49%, 1000=0.19%
00:11:05.202 lat (msec) : 2=0.12%, 50=0.86%
00:11:05.202 cpu : usr=0.79%, sys=2.55%, ctx=1624, majf=0, minf=2
00:11:05.202 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:11:05.202 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:11:05.202 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:11:05.202 issued rwts: total=596,1024,0,0 short=0,0,0,0 dropped=0,0,0,0
00:11:05.202 latency : target=0, window=0, percentile=100.00%, depth=1
00:11:05.202 job1: (groupid=0, jobs=1): err= 0: pid=369436: Thu Jul 25 09:59:50 2024
00:11:05.202 read: IOPS=996, BW=3985KiB/s (4080kB/s)(4128KiB/1036msec)
00:11:05.202 slat (nsec): min=5877, max=38245, avg=12523.21, stdev=5058.02
00:11:05.202 clat (usec): min=276, max=41087, avg=648.08, stdev=3561.38
00:11:05.202 lat (usec): min=283, max=41105, avg=660.60, stdev=3561.86
00:11:05.202 clat percentiles (usec):
00:11:05.202 | 1.00th=[ 281], 5.00th=[ 289], 10.00th=[ 297], 20.00th=[ 310],
00:11:05.202 | 30.00th=[ 314], 40.00th=[ 322], 50.00th=[ 330], 60.00th=[ 338],
00:11:05.202 | 70.00th=[ 347], 80.00th=[ 355], 90.00th=[ 371], 95.00th=[ 392],
00:11:05.202 | 99.00th=[ 478], 99.50th=[40633], 99.90th=[41157], 99.95th=[41157],
00:11:05.202 | 99.99th=[41157]
00:11:05.202 write: IOPS=1482, BW=5931KiB/s (6073kB/s)(6144KiB/1036msec); 0 zone resets
00:11:05.202 slat (usec): min=7, max=1408, avg=12.25, stdev=36.05
00:11:05.202 clat (usec): min=159, max=1901, avg=212.52, stdev=61.62
00:11:05.202 lat (usec): min=167, max=1929, avg=224.78, stdev=72.21
00:11:05.202 clat percentiles (usec):
00:11:05.202 | 1.00th=[ 163], 5.00th=[ 169], 10.00th=[ 174], 20.00th=[ 180],
00:11:05.202 | 30.00th=[ 190], 40.00th=[ 198], 50.00th=[ 206], 60.00th=[ 212],
00:11:05.202 | 70.00th=[ 221], 80.00th=[ 231], 90.00th=[ 251], 95.00th=[ 281],
00:11:05.202 | 99.00th=[ 347], 99.50th=[ 412], 99.90th=[ 865], 99.95th=[ 1909],
00:11:05.202 | 99.99th=[ 1909]
00:11:05.202 bw ( KiB/s): min= 4096, max= 8192, per=32.82%, avg=6144.00, stdev=2896.31, samples=2
00:11:05.202 iops : min= 1024, max= 2048, avg=1536.00, stdev=724.08, samples=2
00:11:05.202 lat (usec) : 250=53.58%, 500=45.87%, 750=0.12%, 1000=0.08%
00:11:05.203 lat (msec) : 2=0.04%, 50=0.31%
00:11:05.203 cpu : usr=1.35%, sys=3.38%, ctx=2571, majf=0, minf=2
00:11:05.203 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:11:05.203 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:11:05.203 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:11:05.203 issued rwts: total=1032,1536,0,0 short=0,0,0,0 dropped=0,0,0,0
00:11:05.203 latency : target=0, window=0, percentile=100.00%, depth=1
00:11:05.203 job2: (groupid=0, jobs=1): err= 0: pid=369437: Thu Jul 25 09:59:50 2024
00:11:05.203 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec)
00:11:05.203 slat (nsec): min=7935, max=26402, avg=9526.18, stdev=1444.00
00:11:05.203 clat (usec): min=261, max=40754, avg=367.79, stdev=1031.40
00:11:05.203 lat (usec): min=270, max=40763, avg=377.32, stdev=1031.40
00:11:05.203 clat percentiles (usec):
00:11:05.203 | 1.00th=[ 297], 5.00th=[ 310], 10.00th=[ 314], 20.00th=[ 322],
00:11:05.203 | 30.00th=[ 326], 40.00th=[ 334], 50.00th=[ 343], 60.00th=[ 347],
00:11:05.203 | 70.00th=[ 355], 80.00th=[ 363], 90.00th=[ 371], 95.00th=[ 379],
00:11:05.203 | 99.00th=[ 392], 99.50th=[ 404], 99.90th=[ 469], 99.95th=[40633],
00:11:05.203 | 99.99th=[40633]
00:11:05.203 write: IOPS=1774, BW=7097KiB/s (7267kB/s)(7104KiB/1001msec); 0 zone resets
00:11:05.203 slat (nsec): min=8642, max=57406, avg=12060.60, stdev=2896.69
00:11:05.203 clat (usec): min=177, max=989, avg=218.96, stdev=45.48
00:11:05.203 lat (usec): min=187, max=1005, avg=231.02, stdev=45.81
00:11:05.203 clat percentiles (usec):
00:11:05.203 | 1.00th=[ 184], 5.00th=[ 188], 10.00th=[ 192], 20.00th=[ 198],
00:11:05.203 | 30.00th=[ 202], 40.00th=[ 206], 50.00th=[ 210], 60.00th=[ 217],
00:11:05.203 | 70.00th=[ 223], 80.00th=[ 233], 90.00th=[ 249], 95.00th=[ 269],
00:11:05.203 | 99.00th=[ 326], 99.50th=[ 408], 99.90th=[ 816], 99.95th=[ 988],
00:11:05.203 | 99.99th=[ 988]
00:11:05.203 bw ( KiB/s): min= 8192, max= 8192, per=43.77%, avg=8192.00, stdev= 0.00, samples=1
00:11:05.203 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1
00:11:05.203 lat (usec) : 250=48.49%, 500=51.24%, 750=0.09%, 1000=0.15%
00:11:05.203 lat (msec) : 50=0.03%
00:11:05.203 cpu : usr=2.90%, sys=4.70%, ctx=3313, majf=0, minf=1
00:11:05.203 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:11:05.203 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:11:05.203 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:11:05.203 issued rwts: total=1536,1776,0,0 short=0,0,0,0 dropped=0,0,0,0
00:11:05.203 latency : target=0, window=0, percentile=100.00%, depth=1
00:11:05.203 job3: (groupid=0, jobs=1): err= 0: pid=369438: Thu Jul 25 09:59:50 2024
00:11:05.203 read: IOPS=22, BW=91.7KiB/s (93.9kB/s)(92.0KiB/1003msec)
00:11:05.203 slat (nsec): min=9269, max=21965, avg=16034.26, stdev=2361.39
00:11:05.203 clat (usec): min=436, max=41015, avg=37392.46, stdev=11660.20
00:11:05.203 lat (usec): min=452, max=41030, avg=37408.50, stdev=11659.96
00:11:05.203 clat percentiles (usec):
00:11:05.203 | 1.00th=[ 437], 5.00th=[ 449], 10.00th=[40633], 20.00th=[40633],
00:11:05.203 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157],
00:11:05.203 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157],
00:11:05.203 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157],
00:11:05.203 | 99.99th=[41157]
00:11:05.203 write: IOPS=510, BW=2042KiB/s (2091kB/s)(2048KiB/1003msec); 0 zone resets
00:11:05.203 slat (nsec): min=8246, max=52006, avg=13508.63, stdev=5110.57
00:11:05.203 clat (usec): min=183, max=1081, avg=258.92, stdev=70.66
00:11:05.203 lat (usec): min=192, max=1092, avg=272.43, stdev=70.90
00:11:05.203 clat percentiles (usec):
00:11:05.203 | 1.00th=[ 194], 5.00th=[ 204], 10.00th=[ 212], 20.00th=[ 221],
00:11:05.203 | 30.00th=[ 231], 40.00th=[ 239], 50.00th=[ 247], 60.00th=[ 258],
00:11:05.203 | 70.00th=[ 269], 80.00th=[ 285], 90.00th=[ 302], 95.00th=[ 322],
00:11:05.203 | 99.00th=[ 570], 99.50th=[ 758], 99.90th=[ 1074], 99.95th=[ 1074],
00:11:05.203 | 99.99th=[ 1074]
00:11:05.203 bw ( KiB/s): min= 4096, max= 4096, per=21.88%, avg=4096.00, stdev= 0.00, samples=1
00:11:05.203 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1
00:11:05.203 lat (usec) : 250=50.84%, 500=44.11%, 750=0.56%, 1000=0.37%
00:11:05.203 lat (msec) : 2=0.19%, 50=3.93%
00:11:05.203 cpu : usr=0.20%, sys=1.00%, ctx=543, majf=0, minf=1
00:11:05.203 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:11:05.203 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:11:05.203 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:11:05.203 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0
00:11:05.203 latency : target=0, window=0, percentile=100.00%, depth=1
00:11:05.203
00:11:05.203
00:11:05.203 Run status group 0 (all jobs):
00:11:05.203 READ: bw=12.0MiB/s (12.6MB/s), 91.7KiB/s-6138KiB/s (93.9kB/s-6285kB/s), io=12.4MiB (13.1MB), run=1001-1036msec
00:11:05.203 WRITE: bw=18.3MiB/s (19.2MB/s), 2042KiB/s-7097KiB/s (2091kB/s-7267kB/s), io=18.9MiB (19.9MB), run=1001-1036msec
00:11:05.203
00:11:05.203 Disk stats (read/write):
00:11:05.203 nvme0n1: ios=562/708, merge=0/0, ticks=664/163, in_queue=827, util=85.67%
00:11:05.203 nvme0n2: ios=1083/1536, merge=0/0, ticks=822/317, in_queue=1139, util=97.04%
00:11:05.203 nvme0n3: ios=1250/1536, merge=0/0, ticks=453/324, in_queue=777, util=88.61%
00:11:05.203 nvme0n4: ios=76/512, merge=0/0, ticks=914/126, in_queue=1040, util=96.91%
00:11:05.203 09:59:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v
00:11:05.203 [global]
00:11:05.203 thread=1
00:11:05.203 invalidate=1
00:11:05.203 rw=randwrite
00:11:05.203 time_based=1
00:11:05.203 runtime=1
00:11:05.203 ioengine=libaio
00:11:05.203 direct=1
00:11:05.203 bs=4096
00:11:05.203 iodepth=1
00:11:05.203 norandommap=0
00:11:05.203 numjobs=1
00:11:05.203
00:11:05.203 verify_dump=1
00:11:05.203 verify_backlog=512
00:11:05.203 verify_state_save=0
00:11:05.203 do_verify=1
00:11:05.203 verify=crc32c-intel
00:11:05.203 [job0]
00:11:05.203 filename=/dev/nvme0n1
00:11:05.203 [job1]
00:11:05.203 filename=/dev/nvme0n2
00:11:05.203 [job2]
00:11:05.203 filename=/dev/nvme0n3
00:11:05.203 [job3]
00:11:05.203 filename=/dev/nvme0n4
00:11:05.203 Could not set queue depth (nvme0n1)
00:11:05.203 Could not set queue depth (nvme0n2)
00:11:05.203 Could not set queue depth (nvme0n3)
00:11:05.203 Could not set queue depth (nvme0n4)
00:11:05.462 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:11:05.462 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:11:05.462 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:11:05.462 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:11:05.462 fio-3.35
00:11:05.462 Starting 4 threads
00:11:06.573
00:11:06.573 job0: (groupid=0, jobs=1): err= 0: pid=369789: Thu Jul 25 09:59:51 2024
00:11:06.573 read: IOPS=53, BW=213KiB/s (218kB/s)(216KiB/1013msec)
00:11:06.573 slat (nsec): min=8740, max=36736, avg=14131.24, stdev=5501.00
00:11:06.573 clat (usec): min=290, max=41203, avg=16150.48, stdev=20002.77
00:11:06.573 lat (usec): min=300, max=41212, avg=16164.61, stdev=20004.65
00:11:06.573 clat percentiles (usec):
00:11:06.573 | 1.00th=[ 289], 5.00th=[ 306], 10.00th=[ 314], 20.00th=[ 326],
00:11:06.573 | 30.00th=[ 334], 40.00th=[ 351], 50.00th=[ 367], 60.00th=[ 404],
00:11:06.573 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157],
00:11:06.573 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157],
00:11:06.573 | 99.99th=[41157]
00:11:06.573 write: IOPS=505, BW=2022KiB/s (2070kB/s)(2048KiB/1013msec); 0 zone resets
00:11:06.573 slat (nsec): min=10773, max=54939, avg=13503.59, stdev=4975.98
00:11:06.573 clat (usec): min=183, max=391, avg=254.42, stdev=29.57
00:11:06.573 lat (usec): min=194, max=403, avg=267.92, stdev=30.09
00:11:06.573 clat percentiles (usec):
00:11:06.573 | 1.00th=[ 196], 5.00th=[ 208], 10.00th=[ 221], 20.00th=[ 233],
00:11:06.573 | 30.00th=[ 239], 40.00th=[ 243], 50.00th=[ 249], 60.00th=[ 258],
00:11:06.573 | 70.00th=[ 269], 80.00th=[ 281], 90.00th=[ 293], 95.00th=[ 306],
00:11:06.573 | 99.00th=[ 330], 99.50th=[ 343], 99.90th=[ 392], 99.95th=[ 392],
00:11:06.573 | 99.99th=[ 392]
00:11:06.573 bw ( KiB/s): min= 4096, max= 4096, per=34.20%, avg=4096.00, stdev= 0.00, samples=1
00:11:06.573 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1
00:11:06.573 lat (usec) : 250=45.23%, 500=51.06%
00:11:06.573 lat (msec) : 50=3.71%
00:11:06.573 cpu : usr=0.49%, sys=0.49%, ctx=567, majf=0, minf=2
00:11:06.573 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:11:06.573 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:11:06.573 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:11:06.573 issued rwts: total=54,512,0,0 short=0,0,0,0 dropped=0,0,0,0
00:11:06.573 latency : target=0, window=0, percentile=100.00%, depth=1
00:11:06.573 job1: (groupid=0, jobs=1): err= 0: pid=369790: Thu Jul 25 09:59:51 2024
00:11:06.573 read: IOPS=71, BW=286KiB/s (293kB/s)(288KiB/1008msec)
00:11:06.573 slat (usec): min=7, max=370, avg=16.66, stdev=42.80
00:11:06.573 clat (usec): min=302, max=41354, avg=11480.86, stdev=18087.54
00:11:06.573 lat (usec): min=313, max=41724, avg=11497.52, stdev=18099.70
00:11:06.573 clat percentiles (usec):
00:11:06.573 | 1.00th=[ 302], 5.00th=[ 326], 10.00th=[ 347], 20.00th=[ 351],
00:11:06.573 | 30.00th=[ 371], 40.00th=[ 375], 50.00th=[ 383], 60.00th=[ 408],
00:11:06.573 | 70.00th=[ 502], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157],
00:11:06.573 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157],
00:11:06.573 | 99.99th=[41157]
00:11:06.573 write: IOPS=507, BW=2032KiB/s (2081kB/s)(2048KiB/1008msec); 0 zone resets
00:11:06.573 slat (nsec): min=9311, max=58096, avg=15497.33, stdev=5602.98
00:11:06.573 clat (usec): min=169, max=4177, avg=331.68, stdev=252.28
00:11:06.573 lat (usec): min=183, max=4194, avg=347.17, stdev=252.74
00:11:06.573 clat percentiles (usec):
00:11:06.573 | 1.00th=[ 178], 5.00th=[ 194], 10.00th=[ 206], 20.00th=[ 225],
00:11:06.573 | 30.00th=[ 249], 40.00th=[ 289], 50.00th=[ 318], 60.00th=[ 334],
00:11:06.573 | 70.00th=[ 359], 80.00th=[ 379], 90.00th=[ 437], 95.00th=[ 486],
00:11:06.573 | 99.00th=[ 553], 99.50th=[ 2573], 99.90th=[ 4178], 99.95th=[ 4178],
00:11:06.573 | 99.99th=[ 4178]
00:11:06.573 bw ( KiB/s): min= 4096, max= 4096, per=34.20%, avg=4096.00, stdev= 0.00, samples=1
00:11:06.573 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1
00:11:06.573 lat (usec) : 250=26.71%, 500=66.95%, 750=2.23%
00:11:06.573 lat (msec) : 2=0.17%, 4=0.34%, 10=0.17%, 50=3.42%
00:11:06.573 cpu : usr=0.50%, sys=1.09%, ctx=585, majf=0, minf=1
00:11:06.573 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:11:06.573 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:11:06.573 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:11:06.573 issued rwts: total=72,512,0,0 short=0,0,0,0 dropped=0,0,0,0
00:11:06.573 latency : target=0, window=0, percentile=100.00%, depth=1
00:11:06.573 job2: (groupid=0, jobs=1): err= 0: pid=369791: Thu Jul 25 09:59:51 2024
00:11:06.573 read: IOPS=20, BW=81.9KiB/s (83.8kB/s)(84.0KiB/1026msec)
00:11:06.573 slat (nsec): min=7547, max=30911, avg=17543.24, stdev=4356.96
00:11:06.573 clat (usec): min=40785, max=41031, avg=40967.72,
stdev=52.60 00:11:06.573 lat (usec): min=40793, max=41046, avg=40985.26, stdev=53.89 00:11:06.573 clat percentiles (usec): 00:11:06.573 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:11:06.573 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:11:06.573 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:11:06.573 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:11:06.573 | 99.99th=[41157] 00:11:06.573 write: IOPS=499, BW=1996KiB/s (2044kB/s)(2048KiB/1026msec); 0 zone resets 00:11:06.573 slat (nsec): min=6939, max=48955, avg=15148.78, stdev=7613.99 00:11:06.573 clat (usec): min=178, max=550, avg=303.93, stdev=84.86 00:11:06.573 lat (usec): min=186, max=567, avg=319.08, stdev=87.78 00:11:06.573 clat percentiles (usec): 00:11:06.573 | 1.00th=[ 186], 5.00th=[ 202], 10.00th=[ 210], 20.00th=[ 223], 00:11:06.573 | 30.00th=[ 235], 40.00th=[ 253], 50.00th=[ 297], 60.00th=[ 322], 00:11:06.573 | 70.00th=[ 355], 80.00th=[ 383], 90.00th=[ 424], 95.00th=[ 461], 00:11:06.573 | 99.00th=[ 502], 99.50th=[ 519], 99.90th=[ 553], 99.95th=[ 553], 00:11:06.573 | 99.99th=[ 553] 00:11:06.573 bw ( KiB/s): min= 4096, max= 4096, per=34.20%, avg=4096.00, stdev= 0.00, samples=1 00:11:06.573 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:11:06.573 lat (usec) : 250=37.90%, 500=56.66%, 750=1.50% 00:11:06.573 lat (msec) : 50=3.94% 00:11:06.573 cpu : usr=0.39%, sys=0.78%, ctx=534, majf=0, minf=1 00:11:06.573 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:06.573 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:06.573 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:06.573 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:06.573 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:06.573 job3: (groupid=0, jobs=1): err= 0: pid=369792: Thu Jul 25 09:59:51 2024 00:11:06.573 read: IOPS=1031, BW=4127KiB/s (4226kB/s)(4160KiB/1008msec) 00:11:06.573 slat (nsec): min=7855, max=28304, avg=9884.28, stdev=2203.30 00:11:06.573 clat (usec): min=255, max=41314, avg=580.08, stdev=3332.32 00:11:06.573 lat (usec): min=263, max=41323, avg=589.96, stdev=3332.89 00:11:06.573 clat percentiles (usec): 00:11:06.573 | 1.00th=[ 265], 5.00th=[ 277], 10.00th=[ 277], 20.00th=[ 285], 00:11:06.573 | 30.00th=[ 285], 40.00th=[ 289], 50.00th=[ 293], 60.00th=[ 302], 00:11:06.573 | 70.00th=[ 306], 80.00th=[ 314], 90.00th=[ 338], 95.00th=[ 396], 00:11:06.573 | 99.00th=[ 553], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:11:06.573 | 99.99th=[41157] 00:11:06.573 write: IOPS=1523, BW=6095KiB/s (6242kB/s)(6144KiB/1008msec); 0 zone resets 00:11:06.573 slat (nsec): min=7023, max=45012, avg=10954.99, stdev=2845.57 00:11:06.573 clat (usec): min=171, max=567, avg=240.23, stdev=75.88 00:11:06.573 lat (usec): min=181, max=595, avg=251.18, stdev=76.90 00:11:06.573 clat percentiles (usec): 00:11:06.573 | 1.00th=[ 178], 5.00th=[ 184], 10.00th=[ 186], 20.00th=[ 190], 00:11:06.573 | 30.00th=[ 196], 40.00th=[ 200], 50.00th=[ 206], 60.00th=[ 215], 00:11:06.573 | 70.00th=[ 231], 80.00th=[ 289], 90.00th=[ 363], 95.00th=[ 404], 00:11:06.573 | 99.00th=[ 498], 99.50th=[ 519], 99.90th=[ 545], 99.95th=[ 570], 00:11:06.573 | 99.99th=[ 570] 00:11:06.573 bw ( KiB/s): min= 4096, max= 8192, per=51.30%, avg=6144.00, stdev=2896.31, samples=2 00:11:06.573 iops : min= 1024, max= 2048, avg=1536.00, stdev=724.08, samples=2 00:11:06.573 lat (usec) : 250=44.60%, 
500=53.92%, 750=1.20% 00:11:06.573 lat (msec) : 50=0.27% 00:11:06.573 cpu : usr=2.68%, sys=2.68%, ctx=2576, majf=0, minf=1 00:11:06.573 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:06.573 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:06.573 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:06.573 issued rwts: total=1040,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:06.573 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:06.573 00:11:06.573 Run status group 0 (all jobs): 00:11:06.573 READ: bw=4628KiB/s (4739kB/s), 81.9KiB/s-4127KiB/s (83.8kB/s-4226kB/s), io=4748KiB (4862kB), run=1008-1026msec 00:11:06.573 WRITE: bw=11.7MiB/s (12.3MB/s), 1996KiB/s-6095KiB/s (2044kB/s-6242kB/s), io=12.0MiB (12.6MB), run=1008-1026msec 00:11:06.573 00:11:06.573 Disk stats (read/write): 00:11:06.573 nvme0n1: ios=74/512, merge=0/0, ticks=1691/133, in_queue=1824, util=99.00% 00:11:06.573 nvme0n2: ios=109/512, merge=0/0, ticks=977/154, in_queue=1131, util=98.98% 00:11:06.573 nvme0n3: ios=15/512, merge=0/0, ticks=615/153, in_queue=768, util=87.10% 00:11:06.573 nvme0n4: ios=1035/1536, merge=0/0, ticks=380/365, in_queue=745, util=89.11% 00:11:06.573 09:59:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:11:06.573 [global] 00:11:06.573 thread=1 00:11:06.573 invalidate=1 00:11:06.573 rw=write 00:11:06.573 time_based=1 00:11:06.573 runtime=1 00:11:06.573 ioengine=libaio 00:11:06.573 direct=1 00:11:06.573 bs=4096 00:11:06.573 iodepth=128 00:11:06.573 norandommap=0 00:11:06.573 numjobs=1 00:11:06.573 00:11:06.573 verify_dump=1 00:11:06.573 verify_backlog=512 00:11:06.573 verify_state_save=0 00:11:06.573 do_verify=1 00:11:06.573 verify=crc32c-intel 00:11:06.573 [job0] 00:11:06.573 filename=/dev/nvme0n1 00:11:06.573 [job1] 00:11:06.573 filename=/dev/nvme0n2 00:11:06.573 [job2] 00:11:06.573 filename=/dev/nvme0n3 00:11:06.573 [job3] 00:11:06.573 filename=/dev/nvme0n4 00:11:06.573 Could not set queue depth (nvme0n1) 00:11:06.573 Could not set queue depth (nvme0n2) 00:11:06.573 Could not set queue depth (nvme0n3) 00:11:06.573 Could not set queue depth (nvme0n4) 00:11:06.830 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:06.830 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:06.830 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:06.830 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:06.830 fio-3.35 00:11:06.830 Starting 4 threads 00:11:08.207 00:11:08.207 job0: (groupid=0, jobs=1): err= 0: pid=370016: Thu Jul 25 09:59:53 2024 00:11:08.207 read: IOPS=3582, BW=14.0MiB/s (14.7MB/s)(14.1MiB/1009msec) 00:11:08.207 slat (usec): min=2, max=43800, avg=133.53, stdev=1122.38 00:11:08.207 clat (usec): min=5372, max=57825, avg=18080.41, stdev=9226.65 00:11:08.207 lat (usec): min=5380, max=57833, avg=18213.94, stdev=9276.79 00:11:08.207 clat percentiles (usec): 00:11:08.207 | 1.00th=[ 6259], 5.00th=[ 8848], 10.00th=[ 9896], 20.00th=[11469], 00:11:08.207 | 30.00th=[12518], 40.00th=[13173], 50.00th=[15533], 60.00th=[18220], 00:11:08.207 | 70.00th=[21627], 80.00th=[22152], 90.00th=[26346], 95.00th=[32113], 00:11:08.207 | 99.00th=[55313], 
99.50th=[57934], 99.90th=[57934], 99.95th=[57934], 00:11:08.207 | 99.99th=[57934] 00:11:08.207 write: IOPS=4059, BW=15.9MiB/s (16.6MB/s)(16.0MiB/1009msec); 0 zone resets 00:11:08.207 slat (usec): min=4, max=9807, avg=104.25, stdev=590.23 00:11:08.207 clat (usec): min=983, max=44942, avg=15308.92, stdev=6531.66 00:11:08.207 lat (usec): min=993, max=44951, avg=15413.16, stdev=6571.03 00:11:08.207 clat percentiles (usec): 00:11:08.207 | 1.00th=[ 5473], 5.00th=[ 7046], 10.00th=[ 7832], 20.00th=[ 9110], 00:11:08.207 | 30.00th=[10683], 40.00th=[11600], 50.00th=[13829], 60.00th=[17433], 00:11:08.207 | 70.00th=[18744], 80.00th=[21627], 90.00th=[24773], 95.00th=[25822], 00:11:08.207 | 99.00th=[29754], 99.50th=[33162], 99.90th=[44827], 99.95th=[44827], 00:11:08.207 | 99.99th=[44827] 00:11:08.207 bw ( KiB/s): min=14792, max=17165, per=23.15%, avg=15978.50, stdev=1677.96, samples=2 00:11:08.207 iops : min= 3698, max= 4291, avg=3994.50, stdev=419.31, samples=2 00:11:08.207 lat (usec) : 1000=0.03% 00:11:08.207 lat (msec) : 4=0.16%, 10=17.78%, 20=51.38%, 50=29.01%, 100=1.65% 00:11:08.207 cpu : usr=3.77%, sys=5.06%, ctx=370, majf=0, minf=1 00:11:08.207 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:11:08.207 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:08.207 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:08.207 issued rwts: total=3615,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:08.207 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:08.207 job1: (groupid=0, jobs=1): err= 0: pid=370017: Thu Jul 25 09:59:53 2024 00:11:08.207 read: IOPS=4862, BW=19.0MiB/s (19.9MB/s)(19.0MiB/1002msec) 00:11:08.207 slat (usec): min=2, max=11414, avg=102.82, stdev=618.76 00:11:08.207 clat (usec): min=620, max=34919, avg=13013.18, stdev=4204.51 00:11:08.207 lat (usec): min=1808, max=34949, avg=13116.00, stdev=4239.03 00:11:08.207 clat percentiles (usec): 00:11:08.207 | 1.00th=[ 5604], 5.00th=[ 9241], 10.00th=[10159], 20.00th=[10814], 00:11:08.207 | 30.00th=[11076], 40.00th=[11338], 50.00th=[11469], 60.00th=[11863], 00:11:08.207 | 70.00th=[12518], 80.00th=[13960], 90.00th=[18482], 95.00th=[23987], 00:11:08.207 | 99.00th=[27132], 99.50th=[28181], 99.90th=[28967], 99.95th=[31851], 00:11:08.207 | 99.99th=[34866] 00:11:08.207 write: IOPS=5109, BW=20.0MiB/s (20.9MB/s)(20.0MiB/1002msec); 0 zone resets 00:11:08.207 slat (usec): min=4, max=14015, avg=89.48, stdev=539.79 00:11:08.207 clat (usec): min=6630, max=41320, avg=12339.80, stdev=4120.76 00:11:08.207 lat (usec): min=6637, max=41332, avg=12429.28, stdev=4150.49 00:11:08.207 clat percentiles (usec): 00:11:08.207 | 1.00th=[ 6915], 5.00th=[ 8717], 10.00th=[ 9503], 20.00th=[10552], 00:11:08.207 | 30.00th=[10683], 40.00th=[10945], 50.00th=[11076], 60.00th=[11338], 00:11:08.207 | 70.00th=[11994], 80.00th=[13304], 90.00th=[16188], 95.00th=[19530], 00:11:08.207 | 99.00th=[27132], 99.50th=[40109], 99.90th=[41157], 99.95th=[41157], 00:11:08.207 | 99.99th=[41157] 00:11:08.207 bw ( KiB/s): min=16704, max=24256, per=29.68%, avg=20480.00, stdev=5340.07, samples=2 00:11:08.207 iops : min= 4176, max= 6064, avg=5120.00, stdev=1335.02, samples=2 00:11:08.207 lat (usec) : 750=0.01% 00:11:08.207 lat (msec) : 2=0.11%, 4=0.22%, 10=9.85%, 20=83.04%, 50=6.78% 00:11:08.207 cpu : usr=4.90%, sys=7.79%, ctx=499, majf=0, minf=1 00:11:08.207 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:11:08.207 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:11:08.207 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:08.207 issued rwts: total=4872,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:08.207 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:08.208 job2: (groupid=0, jobs=1): err= 0: pid=370018: Thu Jul 25 09:59:53 2024 00:11:08.208 read: IOPS=4071, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1006msec) 00:11:08.208 slat (usec): min=3, max=16421, avg=122.91, stdev=763.08 00:11:08.208 clat (usec): min=7671, max=47933, avg=15498.82, stdev=7305.95 00:11:08.208 lat (usec): min=7684, max=47943, avg=15621.72, stdev=7364.01 00:11:08.208 clat percentiles (usec): 00:11:08.208 | 1.00th=[ 8455], 5.00th=[ 9503], 10.00th=[10421], 20.00th=[11338], 00:11:08.208 | 30.00th=[11731], 40.00th=[11994], 50.00th=[12256], 60.00th=[13304], 00:11:08.208 | 70.00th=[15139], 80.00th=[18482], 90.00th=[26346], 95.00th=[33424], 00:11:08.208 | 99.00th=[41681], 99.50th=[42730], 99.90th=[47973], 99.95th=[47973], 00:11:08.208 | 99.99th=[47973] 00:11:08.208 write: IOPS=4469, BW=17.5MiB/s (18.3MB/s)(17.6MiB/1006msec); 0 zone resets 00:11:08.208 slat (usec): min=5, max=11794, avg=102.15, stdev=601.84 00:11:08.208 clat (usec): min=5002, max=57523, avg=14041.15, stdev=7046.77 00:11:08.208 lat (usec): min=5016, max=57545, avg=14143.29, stdev=7071.62 00:11:08.208 clat percentiles (usec): 00:11:08.208 | 1.00th=[ 6783], 5.00th=[ 9110], 10.00th=[10421], 20.00th=[11338], 00:11:08.208 | 30.00th=[11600], 40.00th=[11994], 50.00th=[12125], 60.00th=[12256], 00:11:08.208 | 70.00th=[12649], 80.00th=[13698], 90.00th=[19006], 95.00th=[29230], 00:11:08.208 | 99.00th=[48497], 99.50th=[55837], 99.90th=[57410], 99.95th=[57410], 00:11:08.208 | 99.99th=[57410] 00:11:08.208 bw ( KiB/s): min=12311, max=22616, per=25.30%, avg=17463.50, stdev=7286.74, samples=2 00:11:08.208 iops : min= 3077, max= 5654, avg=4365.50, stdev=1822.21, samples=2 00:11:08.208 lat (msec) : 10=8.19%, 20=78.82%, 50=12.64%, 100=0.35% 00:11:08.208 cpu : usr=4.38%, sys=7.66%, ctx=429, majf=0, minf=1 00:11:08.208 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:11:08.208 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:08.208 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:08.208 issued rwts: total=4096,4496,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:08.208 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:08.208 job3: (groupid=0, jobs=1): err= 0: pid=370019: Thu Jul 25 09:59:53 2024 00:11:08.208 read: IOPS=3548, BW=13.9MiB/s (14.5MB/s)(14.0MiB/1010msec) 00:11:08.208 slat (usec): min=3, max=13371, avg=135.71, stdev=764.26 00:11:08.208 clat (usec): min=8653, max=76981, avg=17969.12, stdev=8379.52 00:11:08.208 lat (usec): min=8919, max=89009, avg=18104.82, stdev=8454.40 00:11:08.208 clat percentiles (usec): 00:11:08.208 | 1.00th=[ 9765], 5.00th=[11076], 10.00th=[11338], 20.00th=[12125], 00:11:08.208 | 30.00th=[12649], 40.00th=[13304], 50.00th=[14877], 60.00th=[18482], 00:11:08.208 | 70.00th=[19530], 80.00th=[22414], 90.00th=[27395], 95.00th=[30278], 00:11:08.208 | 99.00th=[52691], 99.50th=[61080], 99.90th=[77071], 99.95th=[77071], 00:11:08.208 | 99.99th=[77071] 00:11:08.208 write: IOPS=3677, BW=14.4MiB/s (15.1MB/s)(14.5MiB/1010msec); 0 zone resets 00:11:08.208 slat (usec): min=4, max=11309, avg=129.65, stdev=635.15 00:11:08.208 clat (usec): min=6723, max=76621, avg=16980.66, stdev=8784.84 00:11:08.208 lat (usec): min=7789, max=76638, avg=17110.31, stdev=8832.64 00:11:08.208 clat percentiles (usec): 
00:11:08.208 | 1.00th=[ 8455], 5.00th=[10421], 10.00th=[10945], 20.00th=[11731], 00:11:08.208 | 30.00th=[12649], 40.00th=[13042], 50.00th=[13435], 60.00th=[14353], 00:11:08.208 | 70.00th=[20317], 80.00th=[21627], 90.00th=[24773], 95.00th=[28705], 00:11:08.208 | 99.00th=[62653], 99.50th=[65274], 99.90th=[77071], 99.95th=[77071], 00:11:08.208 | 99.99th=[77071] 00:11:08.208 bw ( KiB/s): min= 8296, max=20480, per=20.85%, avg=14388.00, stdev=8615.39, samples=2 00:11:08.208 iops : min= 2074, max= 5120, avg=3597.00, stdev=2153.85, samples=2 00:11:08.208 lat (msec) : 10=2.99%, 20=67.55%, 50=27.61%, 100=1.85% 00:11:08.208 cpu : usr=4.46%, sys=6.05%, ctx=417, majf=0, minf=1 00:11:08.208 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:11:08.208 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:08.208 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:08.208 issued rwts: total=3584,3714,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:08.208 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:08.208 00:11:08.208 Run status group 0 (all jobs): 00:11:08.208 READ: bw=62.5MiB/s (65.6MB/s), 13.9MiB/s-19.0MiB/s (14.5MB/s-19.9MB/s), io=63.2MiB (66.2MB), run=1002-1010msec 00:11:08.208 WRITE: bw=67.4MiB/s (70.7MB/s), 14.4MiB/s-20.0MiB/s (15.1MB/s-20.9MB/s), io=68.1MiB (71.4MB), run=1002-1010msec 00:11:08.208 00:11:08.208 Disk stats (read/write): 00:11:08.208 nvme0n1: ios=2985/3072, merge=0/0, ticks=26513/19475, in_queue=45988, util=86.26% 00:11:08.208 nvme0n2: ios=3666/4096, merge=0/0, ticks=20855/18303, in_queue=39158, util=97.20% 00:11:08.208 nvme0n3: ios=3072/3367, merge=0/0, ticks=22702/19218, in_queue=41920, util=87.25% 00:11:08.208 nvme0n4: ios=3129/3407, merge=0/0, ticks=16416/16261, in_queue=32677, util=96.66% 00:11:08.208 09:59:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:11:08.208 [global] 00:11:08.208 thread=1 00:11:08.208 invalidate=1 00:11:08.208 rw=randwrite 00:11:08.208 time_based=1 00:11:08.208 runtime=1 00:11:08.208 ioengine=libaio 00:11:08.208 direct=1 00:11:08.208 bs=4096 00:11:08.208 iodepth=128 00:11:08.208 norandommap=0 00:11:08.208 numjobs=1 00:11:08.208 00:11:08.208 verify_dump=1 00:11:08.208 verify_backlog=512 00:11:08.208 verify_state_save=0 00:11:08.208 do_verify=1 00:11:08.208 verify=crc32c-intel 00:11:08.208 [job0] 00:11:08.208 filename=/dev/nvme0n1 00:11:08.208 [job1] 00:11:08.208 filename=/dev/nvme0n2 00:11:08.208 [job2] 00:11:08.208 filename=/dev/nvme0n3 00:11:08.208 [job3] 00:11:08.208 filename=/dev/nvme0n4 00:11:08.208 Could not set queue depth (nvme0n1) 00:11:08.208 Could not set queue depth (nvme0n2) 00:11:08.208 Could not set queue depth (nvme0n3) 00:11:08.208 Could not set queue depth (nvme0n4) 00:11:08.464 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:08.464 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:08.464 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:08.464 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:11:08.464 fio-3.35 00:11:08.464 Starting 4 threads 00:11:09.835 00:11:09.835 job0: (groupid=0, jobs=1): err= 0: pid=370256: Thu Jul 25 09:59:54 2024 00:11:09.835 read: 
IOPS=3991, BW=15.6MiB/s (16.3MB/s)(16.2MiB/1042msec) 00:11:09.835 slat (usec): min=3, max=14459, avg=106.94, stdev=665.12 00:11:09.835 clat (usec): min=6881, max=52049, avg=13774.79, stdev=7375.79 00:11:09.835 lat (usec): min=7041, max=52062, avg=13881.73, stdev=7424.45 00:11:09.835 clat percentiles (usec): 00:11:09.835 | 1.00th=[ 8160], 5.00th=[ 8717], 10.00th=[ 9241], 20.00th=[ 9765], 00:11:09.835 | 30.00th=[10290], 40.00th=[10552], 50.00th=[10814], 60.00th=[11207], 00:11:09.835 | 70.00th=[11863], 80.00th=[16057], 90.00th=[22676], 95.00th=[31327], 00:11:09.835 | 99.00th=[48497], 99.50th=[50594], 99.90th=[52167], 99.95th=[52167], 00:11:09.835 | 99.99th=[52167] 00:11:09.835 write: IOPS=4422, BW=17.3MiB/s (18.1MB/s)(18.0MiB/1042msec); 0 zone resets 00:11:09.835 slat (usec): min=5, max=9684, avg=112.31, stdev=665.84 00:11:09.835 clat (usec): min=6207, max=64470, avg=16146.33, stdev=10502.23 00:11:09.835 lat (usec): min=6235, max=64479, avg=16258.65, stdev=10557.94 00:11:09.835 clat percentiles (usec): 00:11:09.835 | 1.00th=[ 7308], 5.00th=[ 8848], 10.00th=[ 9110], 20.00th=[ 9503], 00:11:09.835 | 30.00th=[10421], 40.00th=[11338], 50.00th=[11863], 60.00th=[13173], 00:11:09.835 | 70.00th=[16581], 80.00th=[21103], 90.00th=[27132], 95.00th=[36439], 00:11:09.835 | 99.00th=[58983], 99.50th=[62653], 99.90th=[64226], 99.95th=[64226], 00:11:09.835 | 99.99th=[64226] 00:11:09.835 bw ( KiB/s): min=16384, max=19968, per=28.60%, avg=18176.00, stdev=2534.27, samples=2 00:11:09.835 iops : min= 4096, max= 4992, avg=4544.00, stdev=633.57, samples=2 00:11:09.835 lat (msec) : 10=23.13%, 20=57.83%, 50=16.80%, 100=2.24% 00:11:09.835 cpu : usr=5.00%, sys=6.92%, ctx=423, majf=0, minf=15 00:11:09.835 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:11:09.835 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:09.835 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:09.835 issued rwts: total=4159,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:09.835 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:09.835 job1: (groupid=0, jobs=1): err= 0: pid=370257: Thu Jul 25 09:59:54 2024 00:11:09.835 read: IOPS=4589, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1004msec) 00:11:09.835 slat (usec): min=3, max=11925, avg=112.46, stdev=755.30 00:11:09.835 clat (usec): min=3852, max=77979, avg=14194.47, stdev=8848.72 00:11:09.835 lat (usec): min=3861, max=77986, avg=14306.93, stdev=8912.80 00:11:09.835 clat percentiles (usec): 00:11:09.835 | 1.00th=[ 6521], 5.00th=[ 9110], 10.00th=[ 9634], 20.00th=[10552], 00:11:09.835 | 30.00th=[11338], 40.00th=[11731], 50.00th=[12125], 60.00th=[12649], 00:11:09.835 | 70.00th=[14222], 80.00th=[15795], 90.00th=[17957], 95.00th=[21103], 00:11:09.835 | 99.00th=[74974], 99.50th=[76022], 99.90th=[78119], 99.95th=[78119], 00:11:09.835 | 99.99th=[78119] 00:11:09.835 write: IOPS=4743, BW=18.5MiB/s (19.4MB/s)(18.6MiB/1004msec); 0 zone resets 00:11:09.835 slat (usec): min=4, max=11352, avg=92.66, stdev=610.20 00:11:09.835 clat (usec): min=2442, max=77989, avg=12877.76, stdev=7363.54 00:11:09.835 lat (usec): min=2453, max=78009, avg=12970.42, stdev=7390.60 00:11:09.835 clat percentiles (usec): 00:11:09.835 | 1.00th=[ 4293], 5.00th=[ 6980], 10.00th=[ 7570], 20.00th=[ 8979], 00:11:09.835 | 30.00th=[10290], 40.00th=[10683], 50.00th=[11076], 60.00th=[11863], 00:11:09.835 | 70.00th=[12780], 80.00th=[15401], 90.00th=[17695], 95.00th=[21890], 00:11:09.836 | 99.00th=[47973], 99.50th=[51643], 99.90th=[66847], 99.95th=[66847], 
00:11:09.836 | 99.99th=[78119] 00:11:09.836 bw ( KiB/s): min=18016, max=19064, per=29.18%, avg=18540.00, stdev=741.05, samples=2 00:11:09.836 iops : min= 4504, max= 4766, avg=4635.00, stdev=185.26, samples=2 00:11:09.836 lat (msec) : 4=0.46%, 10=19.24%, 20=74.25%, 50=4.87%, 100=1.18% 00:11:09.836 cpu : usr=4.49%, sys=7.78%, ctx=371, majf=0, minf=11 00:11:09.836 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:11:09.836 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:09.836 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:09.836 issued rwts: total=4608,4762,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:09.836 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:09.836 job2: (groupid=0, jobs=1): err= 0: pid=370258: Thu Jul 25 09:59:54 2024 00:11:09.836 read: IOPS=3573, BW=14.0MiB/s (14.6MB/s)(14.0MiB/1003msec) 00:11:09.836 slat (usec): min=3, max=16878, avg=102.30, stdev=717.70 00:11:09.836 clat (usec): min=1177, max=32152, avg=13511.99, stdev=4035.46 00:11:09.836 lat (usec): min=1181, max=32161, avg=13614.29, stdev=4070.20 00:11:09.836 clat percentiles (usec): 00:11:09.836 | 1.00th=[ 3916], 5.00th=[ 8586], 10.00th=[ 9896], 20.00th=[10945], 00:11:09.836 | 30.00th=[11994], 40.00th=[12387], 50.00th=[12780], 60.00th=[13435], 00:11:09.836 | 70.00th=[14222], 80.00th=[16057], 90.00th=[17957], 95.00th=[19530], 00:11:09.836 | 99.00th=[27395], 99.50th=[32113], 99.90th=[32113], 99.95th=[32113], 00:11:09.836 | 99.99th=[32113] 00:11:09.836 write: IOPS=3589, BW=14.0MiB/s (14.7MB/s)(14.1MiB/1003msec); 0 zone resets 00:11:09.836 slat (usec): min=4, max=12938, avg=150.07, stdev=936.64 00:11:09.836 clat (usec): min=637, max=118922, avg=21919.83, stdev=28444.16 00:11:09.836 lat (usec): min=656, max=118929, avg=22069.90, stdev=28630.99 00:11:09.836 clat percentiles (usec): 00:11:09.836 | 1.00th=[ 922], 5.00th=[ 5211], 10.00th=[ 7898], 20.00th=[ 9372], 00:11:09.836 | 30.00th=[ 11076], 40.00th=[ 12387], 50.00th=[ 12780], 60.00th=[ 13042], 00:11:09.836 | 70.00th=[ 14353], 80.00th=[ 17171], 90.00th=[ 69731], 95.00th=[104334], 00:11:09.836 | 99.00th=[113771], 99.50th=[115868], 99.90th=[119014], 99.95th=[119014], 00:11:09.836 | 99.99th=[119014] 00:11:09.836 bw ( KiB/s): min=11536, max=17136, per=22.56%, avg=14336.00, stdev=3959.80, samples=2 00:11:09.836 iops : min= 2884, max= 4284, avg=3584.00, stdev=989.95, samples=2 00:11:09.836 lat (usec) : 750=0.11%, 1000=0.46% 00:11:09.836 lat (msec) : 2=0.63%, 4=1.22%, 10=14.60%, 20=73.64%, 50=4.01% 00:11:09.836 lat (msec) : 100=1.46%, 250=3.87% 00:11:09.836 cpu : usr=3.29%, sys=4.99%, ctx=292, majf=0, minf=9 00:11:09.836 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:11:09.836 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:09.836 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:09.836 issued rwts: total=3584,3600,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:09.836 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:09.836 job3: (groupid=0, jobs=1): err= 0: pid=370259: Thu Jul 25 09:59:54 2024 00:11:09.836 read: IOPS=3358, BW=13.1MiB/s (13.8MB/s)(13.2MiB/1005msec) 00:11:09.836 slat (usec): min=2, max=27245, avg=166.26, stdev=1358.87 00:11:09.836 clat (usec): min=751, max=76523, avg=22472.40, stdev=13783.72 00:11:09.836 lat (usec): min=4460, max=76545, avg=22638.66, stdev=13860.93 00:11:09.836 clat percentiles (usec): 00:11:09.836 | 1.00th=[ 4686], 5.00th=[ 9110], 10.00th=[ 9765], 
20.00th=[12911], 00:11:09.836 | 30.00th=[14746], 40.00th=[15401], 50.00th=[17433], 60.00th=[20055], 00:11:09.836 | 70.00th=[24249], 80.00th=[31589], 90.00th=[40633], 95.00th=[50070], 00:11:09.836 | 99.00th=[72877], 99.50th=[72877], 99.90th=[74974], 99.95th=[74974], 00:11:09.836 | 99.99th=[76022] 00:11:09.836 write: IOPS=3566, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1005msec); 0 zone resets 00:11:09.836 slat (usec): min=4, max=12660, avg=99.51, stdev=728.72 00:11:09.836 clat (usec): min=1056, max=41010, avg=14230.16, stdev=5230.17 00:11:09.836 lat (usec): min=1101, max=41017, avg=14329.67, stdev=5280.93 00:11:09.836 clat percentiles (usec): 00:11:09.836 | 1.00th=[ 4424], 5.00th=[ 6194], 10.00th=[ 8455], 20.00th=[10290], 00:11:09.836 | 30.00th=[12387], 40.00th=[13304], 50.00th=[13960], 60.00th=[14615], 00:11:09.836 | 70.00th=[16188], 80.00th=[17695], 90.00th=[18482], 95.00th=[23462], 00:11:09.836 | 99.00th=[33162], 99.50th=[33162], 99.90th=[41157], 99.95th=[41157], 00:11:09.836 | 99.99th=[41157] 00:11:09.836 bw ( KiB/s): min=13328, max=15344, per=22.56%, avg=14336.00, stdev=1425.53, samples=2 00:11:09.836 iops : min= 3332, max= 3836, avg=3584.00, stdev=356.38, samples=2 00:11:09.836 lat (usec) : 1000=0.01% 00:11:09.836 lat (msec) : 2=0.01%, 10=14.89%, 20=61.13%, 50=21.63%, 100=2.33% 00:11:09.836 cpu : usr=1.79%, sys=4.48%, ctx=237, majf=0, minf=15 00:11:09.836 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:11:09.836 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:09.836 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:11:09.836 issued rwts: total=3375,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:09.836 latency : target=0, window=0, percentile=100.00%, depth=128 00:11:09.836 00:11:09.836 Run status group 0 (all jobs): 00:11:09.836 READ: bw=59.0MiB/s (61.8MB/s), 13.1MiB/s-17.9MiB/s (13.8MB/s-18.8MB/s), io=61.4MiB (64.4MB), run=1003-1042msec 00:11:09.836 WRITE: bw=62.1MiB/s (65.1MB/s), 13.9MiB/s-18.5MiB/s (14.6MB/s-19.4MB/s), io=64.7MiB (67.8MB), run=1003-1042msec 00:11:09.836 00:11:09.836 Disk stats (read/write): 00:11:09.836 nvme0n1: ios=3416/3584, merge=0/0, ticks=14604/17983, in_queue=32587, util=90.78% 00:11:09.836 nvme0n2: ios=3622/3988, merge=0/0, ticks=45395/35890, in_queue=81285, util=100.00% 00:11:09.836 nvme0n3: ios=2487/2711, merge=0/0, ticks=29688/62686, in_queue=92374, util=87.79% 00:11:09.836 nvme0n4: ios=2605/2600, merge=0/0, ticks=33324/24206, in_queue=57530, util=95.81% 00:11:09.836 09:59:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:11:09.836 09:59:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=370395 00:11:09.836 09:59:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:11:09.836 09:59:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:11:09.836 [global] 00:11:09.836 thread=1 00:11:09.836 invalidate=1 00:11:09.836 rw=read 00:11:09.836 time_based=1 00:11:09.836 runtime=10 00:11:09.836 ioengine=libaio 00:11:09.836 direct=1 00:11:09.836 bs=4096 00:11:09.836 iodepth=1 00:11:09.836 norandommap=1 00:11:09.836 numjobs=1 00:11:09.836 00:11:09.836 [job0] 00:11:09.836 filename=/dev/nvme0n1 00:11:09.836 [job1] 00:11:09.836 filename=/dev/nvme0n2 00:11:09.836 [job2] 00:11:09.836 filename=/dev/nvme0n3 00:11:09.836 [job3] 00:11:09.836 filename=/dev/nvme0n4 00:11:09.836 Could not set queue depth 
(nvme0n1) 00:11:09.836 Could not set queue depth (nvme0n2) 00:11:09.836 Could not set queue depth (nvme0n3) 00:11:09.836 Could not set queue depth (nvme0n4) 00:11:09.836 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:09.836 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:09.836 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:09.836 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:09.836 fio-3.35 00:11:09.836 Starting 4 threads 00:11:13.112 09:59:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:11:13.112 09:59:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:11:13.112 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=21815296, buflen=4096 00:11:13.112 fio: pid=370608, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:11:13.370 09:59:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:13.370 09:59:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:11:13.370 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=40325120, buflen=4096 00:11:13.370 fio: pid=370607, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:11:13.628 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=44421120, buflen=4096 00:11:13.628 fio: pid=370601, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:11:13.628 09:59:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:13.628 09:59:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:11:14.192 09:59:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:14.192 09:59:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:11:14.192 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=27590656, buflen=4096 00:11:14.192 fio: pid=370603, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:11:14.192 00:11:14.193 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=370601: Thu Jul 25 09:59:59 2024 00:11:14.193 read: IOPS=3012, BW=11.8MiB/s (12.3MB/s)(42.4MiB/3600msec) 00:11:14.193 slat (usec): min=5, max=22521, avg=12.89, stdev=250.39 00:11:14.193 clat (usec): min=238, max=1489, avg=315.75, stdev=58.18 00:11:14.193 lat (usec): min=244, max=22887, avg=328.64, stdev=258.41 00:11:14.193 clat percentiles (usec): 00:11:14.193 | 1.00th=[ 249], 5.00th=[ 255], 10.00th=[ 262], 20.00th=[ 269], 00:11:14.193 | 30.00th=[ 277], 40.00th=[ 293], 50.00th=[ 310], 60.00th=[ 318], 00:11:14.193 | 70.00th=[ 330], 80.00th=[ 351], 90.00th=[ 375], 
95.00th=[ 420], 00:11:14.193 | 99.00th=[ 523], 99.50th=[ 562], 99.90th=[ 668], 99.95th=[ 742], 00:11:14.193 | 99.99th=[ 1369] 00:11:14.193 bw ( KiB/s): min=10752, max=13624, per=37.09%, avg=12098.86, stdev=1194.16, samples=7 00:11:14.193 iops : min= 2688, max= 3406, avg=3024.86, stdev=298.52, samples=7 00:11:14.193 lat (usec) : 250=1.82%, 500=96.47%, 750=1.67%, 1000=0.02% 00:11:14.193 lat (msec) : 2=0.02% 00:11:14.193 cpu : usr=1.33%, sys=3.95%, ctx=10854, majf=0, minf=1 00:11:14.193 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:14.193 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:14.193 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:14.193 issued rwts: total=10846,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:14.193 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:14.193 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=370603: Thu Jul 25 09:59:59 2024 00:11:14.193 read: IOPS=1677, BW=6709KiB/s (6870kB/s)(26.3MiB/4016msec) 00:11:14.193 slat (usec): min=5, max=17444, avg=15.92, stdev=286.79 00:11:14.193 clat (usec): min=257, max=43119, avg=573.30, stdev=3011.88 00:11:14.193 lat (usec): min=266, max=57006, avg=589.22, stdev=3069.61 00:11:14.193 clat percentiles (usec): 00:11:14.193 | 1.00th=[ 285], 5.00th=[ 297], 10.00th=[ 306], 20.00th=[ 310], 00:11:14.193 | 30.00th=[ 318], 40.00th=[ 326], 50.00th=[ 330], 60.00th=[ 338], 00:11:14.193 | 70.00th=[ 355], 80.00th=[ 371], 90.00th=[ 412], 95.00th=[ 474], 00:11:14.193 | 99.00th=[ 594], 99.50th=[40633], 99.90th=[41157], 99.95th=[41157], 00:11:14.193 | 99.99th=[43254] 00:11:14.193 bw ( KiB/s): min= 2082, max=11632, per=23.41%, avg=7636.86, stdev=4051.23, samples=7 00:11:14.193 iops : min= 520, max= 2908, avg=1909.14, stdev=1012.92, samples=7 00:11:14.193 lat (usec) : 500=97.51%, 750=1.78%, 1000=0.10% 00:11:14.193 lat (msec) : 2=0.03%, 50=0.56% 00:11:14.193 cpu : usr=0.87%, sys=1.92%, ctx=6742, majf=0, minf=1 00:11:14.193 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:14.193 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:14.193 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:14.193 issued rwts: total=6737,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:14.193 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:14.193 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=370607: Thu Jul 25 09:59:59 2024 00:11:14.193 read: IOPS=3001, BW=11.7MiB/s (12.3MB/s)(38.5MiB/3280msec) 00:11:14.193 slat (usec): min=6, max=15666, avg=10.90, stdev=193.40 00:11:14.193 clat (usec): min=251, max=3715, avg=317.37, stdev=54.83 00:11:14.193 lat (usec): min=258, max=16021, avg=328.27, stdev=201.51 00:11:14.193 clat percentiles (usec): 00:11:14.193 | 1.00th=[ 269], 5.00th=[ 277], 10.00th=[ 285], 20.00th=[ 289], 00:11:14.193 | 30.00th=[ 297], 40.00th=[ 302], 50.00th=[ 310], 60.00th=[ 314], 00:11:14.193 | 70.00th=[ 326], 80.00th=[ 334], 90.00th=[ 355], 95.00th=[ 383], 00:11:14.193 | 99.00th=[ 498], 99.50th=[ 529], 99.90th=[ 594], 99.95th=[ 758], 00:11:14.193 | 99.99th=[ 3720] 00:11:14.193 bw ( KiB/s): min=11624, max=12904, per=37.31%, avg=12170.67, stdev=519.49, samples=6 00:11:14.193 iops : min= 2906, max= 3226, avg=3042.67, stdev=129.87, samples=6 00:11:14.193 lat (usec) : 500=98.99%, 750=0.94%, 1000=0.03% 00:11:14.193 lat (msec) : 2=0.01%, 4=0.01% 00:11:14.193 cpu : 
usr=1.28%, sys=4.15%, ctx=9848, majf=0, minf=1 00:11:14.193 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:14.193 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:14.193 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:14.193 issued rwts: total=9846,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:14.193 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:14.193 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=370608: Thu Jul 25 09:59:59 2024 00:11:14.193 read: IOPS=1788, BW=7151KiB/s (7323kB/s)(20.8MiB/2979msec) 00:11:14.193 slat (nsec): min=5657, max=42154, avg=9726.81, stdev=3559.55 00:11:14.193 clat (usec): min=260, max=42042, avg=543.09, stdev=2615.62 00:11:14.193 lat (usec): min=269, max=42054, avg=552.82, stdev=2615.81 00:11:14.193 clat percentiles (usec): 00:11:14.193 | 1.00th=[ 302], 5.00th=[ 314], 10.00th=[ 322], 20.00th=[ 330], 00:11:14.193 | 30.00th=[ 338], 40.00th=[ 347], 50.00th=[ 355], 60.00th=[ 363], 00:11:14.193 | 70.00th=[ 375], 80.00th=[ 408], 90.00th=[ 474], 95.00th=[ 519], 00:11:14.193 | 99.00th=[ 644], 99.50th=[ 783], 99.90th=[41157], 99.95th=[42206], 00:11:14.193 | 99.99th=[42206] 00:11:14.193 bw ( KiB/s): min= 4736, max=10784, per=26.06%, avg=8502.40, stdev=2662.41, samples=5 00:11:14.193 iops : min= 1184, max= 2696, avg=2125.60, stdev=665.60, samples=5 00:11:14.193 lat (usec) : 500=93.86%, 750=5.58%, 1000=0.13% 00:11:14.193 lat (msec) : 50=0.41% 00:11:14.193 cpu : usr=0.81%, sys=2.32%, ctx=5329, majf=0, minf=1 00:11:14.193 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:14.193 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:14.193 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:14.193 issued rwts: total=5327,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:14.193 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:14.193 00:11:14.193 Run status group 0 (all jobs): 00:11:14.193 READ: bw=31.9MiB/s (33.4MB/s), 6709KiB/s-11.8MiB/s (6870kB/s-12.3MB/s), io=128MiB (134MB), run=2979-4016msec 00:11:14.193 00:11:14.193 Disk stats (read/write): 00:11:14.193 nvme0n1: ios=10879/0, merge=0/0, ticks=4434/0, in_queue=4434, util=98.64% 00:11:14.193 nvme0n2: ios=6732/0, merge=0/0, ticks=3619/0, in_queue=3619, util=95.53% 00:11:14.193 nvme0n3: ios=9319/0, merge=0/0, ticks=2870/0, in_queue=2870, util=95.84% 00:11:14.193 nvme0n4: ios=5323/0, merge=0/0, ticks=2722/0, in_queue=2722, util=96.70% 00:11:14.450 09:59:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:14.450 09:59:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:11:14.708 09:59:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:14.708 09:59:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:11:14.965 10:00:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:14.965 10:00:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:11:15.222 10:00:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:11:15.223 10:00:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:11:15.788 10:00:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:11:15.788 10:00:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 370395 00:11:15.788 10:00:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:11:15.788 10:00:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:15.788 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:15.788 10:00:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:15.788 10:00:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:11:15.788 10:00:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:11:15.788 10:00:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:15.788 10:00:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:11:15.788 10:00:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:15.788 10:00:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:11:15.788 10:00:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:11:15.788 10:00:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:11:15.788 nvmf hotplug test: fio failed as expected 00:11:15.788 10:00:00 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:16.046 10:00:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:11:16.046 10:00:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:11:16.046 10:00:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:11:16.046 10:00:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:11:16.046 10:00:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:11:16.046 10:00:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:16.046 10:00:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:11:16.046 10:00:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:16.046 10:00:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:11:16.046 10:00:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:16.046 10:00:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 
00:11:16.046 rmmod nvme_tcp 00:11:16.046 rmmod nvme_fabrics 00:11:16.046 rmmod nvme_keyring 00:11:16.046 10:00:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:16.046 10:00:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:11:16.046 10:00:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:11:16.046 10:00:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 368222 ']' 00:11:16.046 10:00:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 368222 00:11:16.046 10:00:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@950 -- # '[' -z 368222 ']' 00:11:16.046 10:00:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # kill -0 368222 00:11:16.046 10:00:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@955 -- # uname 00:11:16.046 10:00:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:16.046 10:00:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 368222 00:11:16.046 10:00:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:16.046 10:00:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:16.046 10:00:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 368222' 00:11:16.046 killing process with pid 368222 00:11:16.046 10:00:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@969 -- # kill 368222 00:11:16.046 10:00:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@974 -- # wait 368222 00:11:16.611 10:00:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:16.611 10:00:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:16.611 10:00:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:16.611 10:00:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:16.611 10:00:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:16.611 10:00:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:16.611 10:00:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:16.611 10:00:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:18.507 10:00:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:18.507 00:11:18.507 real 0m26.698s 00:11:18.507 user 1m34.531s 00:11:18.507 sys 0m8.205s 00:11:18.507 10:00:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:18.507 10:00:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:18.507 ************************************ 00:11:18.507 END TEST nvmf_fio_target 00:11:18.507 ************************************ 00:11:18.507 10:00:03 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:11:18.507 10:00:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:18.507 10:00:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:18.507 10:00:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:18.507 ************************************ 00:11:18.507 START TEST nvmf_bdevio 00:11:18.507 ************************************ 00:11:18.507 10:00:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:11:18.507 * Looking for test storage... 00:11:18.507 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:18.507 10:00:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:18.507 10:00:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:11:18.507 10:00:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:18.507 10:00:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:18.507 10:00:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:18.507 10:00:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:18.507 10:00:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:18.507 10:00:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:18.507 10:00:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:18.507 10:00:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:18.507 10:00:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:18.766 10:00:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:18.766 10:00:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:11:18.766 10:00:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:11:18.766 10:00:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:18.766 10:00:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:18.766 10:00:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:18.766 10:00:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:18.766 10:00:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:18.766 10:00:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:18.766 10:00:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:18.766 10:00:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
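The variables captured in the trace above (NVME_CONNECT, NVME_HOST, NVMF_PORT, NVME_SUBNQN) are the pieces the harness later combines when it attaches an initiator to the target. A minimal sketch of that combination, assuming the target address and port come from the NVMF_FIRST_TARGET_IP and NVMF_PORT values this common.sh sets up, and using standard nvme-cli flags (this is illustrative, not the harness's verbatim helper):

    # NVME_HOST expands to --hostnqn=.../--hostid=... as captured above;
    # -t/-a/-s/-n are the stock nvme-cli transport, address, service-id
    # and subsystem-NQN flags.
    $NVME_CONNECT "${NVME_HOST[@]}" -t tcp \
        -a "$NVMF_FIRST_TARGET_IP" -s "$NVMF_PORT" -n "$NVME_SUBNQN"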
00:11:18.766 10:00:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:18.766 10:00:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:18.766 10:00:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:18.766 10:00:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:11:18.766 10:00:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:18.766 10:00:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:11:18.766 10:00:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:18.766 10:00:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:18.766 10:00:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:18.766 10:00:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:18.766 10:00:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:18.766 10:00:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n 
'' ']' 00:11:18.766 10:00:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:18.766 10:00:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:18.766 10:00:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:18.766 10:00:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:18.766 10:00:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:11:18.766 10:00:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:18.766 10:00:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:18.766 10:00:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:18.766 10:00:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:18.766 10:00:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:18.766 10:00:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:18.766 10:00:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:18.766 10:00:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:18.766 10:00:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:18.766 10:00:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:18.766 10:00:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@285 -- # xtrace_disable 00:11:18.766 10:00:03 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:21.338 10:00:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:21.338 10:00:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # pci_devs=() 00:11:21.338 10:00:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:21.338 10:00:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:21.338 10:00:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:21.338 10:00:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:21.338 10:00:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:21.338 10:00:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@295 -- # net_devs=() 00:11:21.338 10:00:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:21.338 10:00:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@296 -- # e810=() 00:11:21.338 10:00:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@296 -- # local -ga e810 00:11:21.338 10:00:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # x722=() 00:11:21.338 10:00:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # local -ga x722 00:11:21.338 10:00:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # mlx=() 00:11:21.338 10:00:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # local -ga mlx 00:11:21.338 10:00:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 
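The vendor:device pairs being appended to the e810/x722/mlx arrays here drive the PCI scan that follows. An equivalent standalone query with stock pciutils, shown only to illustrate the ID-matching idea (not the harness's own code), for the E810 device ID seen on this host:

    # List PCI functions matching vendor 0x8086, device 0x159b (Intel E810);
    # -D prints the full domain:bus:dev.func address, -nn prints numeric IDs.
    lspci -Dnn -d 8086:159b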
00:11:21.338 10:00:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:21.338 10:00:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:21.338 10:00:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:21.338 10:00:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:21.338 10:00:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:21.338 10:00:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:21.338 10:00:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:21.338 10:00:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:21.338 10:00:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:21.338 10:00:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:21.338 10:00:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:21.338 10:00:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:21.338 10:00:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:21.338 10:00:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:21.338 10:00:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:21.338 10:00:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:21.338 10:00:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:21.338 10:00:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:11:21.338 Found 0000:84:00.0 (0x8086 - 0x159b) 00:11:21.338 10:00:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:21.339 10:00:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:21.339 10:00:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:21.339 10:00:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:21.339 10:00:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:21.339 10:00:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:21.339 10:00:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:11:21.339 Found 0000:84:00.1 (0x8086 - 0x159b) 00:11:21.339 10:00:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:21.339 10:00:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:21.339 10:00:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:21.339 10:00:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:21.339 
10:00:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:21.339 10:00:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:21.339 10:00:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:21.339 10:00:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:21.339 10:00:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:21.339 10:00:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:21.339 10:00:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:21.339 10:00:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:21.339 10:00:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:21.339 10:00:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:21.339 10:00:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:21.339 10:00:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:11:21.339 Found net devices under 0000:84:00.0: cvl_0_0 00:11:21.339 10:00:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:21.339 10:00:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:21.339 10:00:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:21.339 10:00:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:21.339 10:00:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:21.339 10:00:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:21.339 10:00:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:21.339 10:00:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:21.339 10:00:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:11:21.339 Found net devices under 0000:84:00.1: cvl_0_1 00:11:21.339 10:00:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:21.339 10:00:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:21.339 10:00:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # is_hw=yes 00:11:21.339 10:00:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:21.339 10:00:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:21.339 10:00:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:21.339 10:00:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:21.339 10:00:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:21.339 10:00:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:21.339 10:00:05 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:21.339 10:00:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:21.339 10:00:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:21.339 10:00:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:21.339 10:00:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:21.339 10:00:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:21.339 10:00:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:21.339 10:00:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:21.339 10:00:05 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:21.339 10:00:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:21.339 10:00:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:21.339 10:00:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:21.339 10:00:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:21.339 10:00:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:21.339 10:00:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:21.339 10:00:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:21.339 10:00:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:21.339 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:21.339 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.121 ms 00:11:21.339 00:11:21.339 --- 10.0.0.2 ping statistics --- 00:11:21.339 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:21.339 rtt min/avg/max/mdev = 0.121/0.121/0.121/0.000 ms 00:11:21.339 10:00:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:21.339 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:21.339 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.098 ms 00:11:21.339 00:11:21.339 --- 10.0.0.1 ping statistics --- 00:11:21.339 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:21.339 rtt min/avg/max/mdev = 0.098/0.098/0.098/0.000 ms 00:11:21.339 10:00:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:21.339 10:00:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # return 0 00:11:21.339 10:00:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:21.339 10:00:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:21.339 10:00:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:21.339 10:00:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:21.339 10:00:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:21.339 10:00:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:21.339 10:00:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:21.339 10:00:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:11:21.339 10:00:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:21.339 10:00:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:21.339 10:00:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:21.339 10:00:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=373370 00:11:21.339 10:00:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:11:21.339 10:00:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 373370 00:11:21.339 10:00:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@831 -- # '[' -z 373370 ']' 00:11:21.339 10:00:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:21.339 10:00:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:21.339 10:00:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:21.339 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:21.339 10:00:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:21.339 10:00:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:21.339 [2024-07-25 10:00:06.218647] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
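The nvmf_tcp_init block above splits the two E810 ports across a network namespace so a single host can play both roles: cvl_0_0 becomes the target at 10.0.0.2 inside cvl_0_0_ns_spdk, while cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1. Replayed from the traced commands (the interface names are whatever the ice driver exposed on this node and will differ elsewhere):

# Condensed from the nvmf/common.sh@244-268 trace.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # target port into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator side, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP on the initiator side
ping -c 1 10.0.0.2                                             # root ns -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # target ns -> initiator

The two pings are the gate: only after both directions answer does common.sh prepend the netns-exec prefix to NVMF_APP (the nvmf/common.sh@270 line above) and return 0, which is why nvmf_tgt starts inside the namespace.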
00:11:21.339 [2024-07-25 10:00:06.218758] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:21.339 EAL: No free 2048 kB hugepages reported on node 1 00:11:21.339 [2024-07-25 10:00:06.298352] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:21.339 [2024-07-25 10:00:06.425613] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:21.339 [2024-07-25 10:00:06.425674] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:21.339 [2024-07-25 10:00:06.425696] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:21.339 [2024-07-25 10:00:06.425709] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:21.339 [2024-07-25 10:00:06.425720] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:21.339 [2024-07-25 10:00:06.425819] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:11:21.339 [2024-07-25 10:00:06.425875] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:11:21.339 [2024-07-25 10:00:06.425928] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:11:21.339 [2024-07-25 10:00:06.425932] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:21.597 10:00:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:21.597 10:00:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # return 0 00:11:21.597 10:00:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:21.597 10:00:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:21.597 10:00:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:21.597 10:00:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:21.597 10:00:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:21.597 10:00:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.597 10:00:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:21.597 [2024-07-25 10:00:06.597293] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:21.597 10:00:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.597 10:00:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:21.597 10:00:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.597 10:00:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:21.597 Malloc0 00:11:21.597 10:00:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.597 10:00:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:21.597 10:00:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.597 10:00:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:21.597 10:00:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.597 10:00:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:21.597 10:00:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.597 10:00:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:21.597 10:00:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.597 10:00:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:21.597 10:00:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:21.597 10:00:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:21.597 [2024-07-25 10:00:06.652690] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:21.597 10:00:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:21.597 10:00:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:11:21.597 10:00:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:11:21.597 10:00:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:11:21.598 10:00:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:11:21.598 10:00:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:11:21.598 10:00:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:11:21.598 { 00:11:21.598 "params": { 00:11:21.598 "name": "Nvme$subsystem", 00:11:21.598 "trtype": "$TEST_TRANSPORT", 00:11:21.598 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:21.598 "adrfam": "ipv4", 00:11:21.598 "trsvcid": "$NVMF_PORT", 00:11:21.598 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:21.598 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:21.598 "hdgst": ${hdgst:-false}, 00:11:21.598 "ddgst": ${ddgst:-false} 00:11:21.598 }, 00:11:21.598 "method": "bdev_nvme_attach_controller" 00:11:21.598 } 00:11:21.598 EOF 00:11:21.598 )") 00:11:21.598 10:00:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:11:21.598 10:00:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 
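bdevio.sh@18-22 above assemble the whole target: a TCP transport, a 64 MiB malloc bdev with 512-byte blocks, subsystem nqn.2016-06.io.spdk:cnode1 carrying that namespace, and a listener on 10.0.0.2:4420. rpc_cmd forwards to scripts/rpc.py, so outside the test framework the equivalent sequence is:

scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192          # transport flags verbatim from the trace (-u 8192 = io_unit_size)
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0             # MALLOC_BDEV_SIZE x MALLOC_BLOCK_SIZE
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The initiator side needs no RPC socket at all: gen_nvmf_target_json renders the heredoc above into the bdev_nvme_attach_controller config printed just below, which bdevio consumes via --json /dev/fd/62.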
00:11:21.598 10:00:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:11:21.598 10:00:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:11:21.598 "params": { 00:11:21.598 "name": "Nvme1", 00:11:21.598 "trtype": "tcp", 00:11:21.598 "traddr": "10.0.0.2", 00:11:21.598 "adrfam": "ipv4", 00:11:21.598 "trsvcid": "4420", 00:11:21.598 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:21.598 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:21.598 "hdgst": false, 00:11:21.598 "ddgst": false 00:11:21.598 }, 00:11:21.598 "method": "bdev_nvme_attach_controller" 00:11:21.598 }' 00:11:21.598 [2024-07-25 10:00:06.706202] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:11:21.598 [2024-07-25 10:00:06.706294] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid373521 ] 00:11:21.855 EAL: No free 2048 kB hugepages reported on node 1 00:11:21.855 [2024-07-25 10:00:06.807639] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:21.855 [2024-07-25 10:00:06.933578] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:21.855 [2024-07-25 10:00:06.933635] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:21.855 [2024-07-25 10:00:06.933639] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:22.113 I/O targets: 00:11:22.113 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:11:22.113 00:11:22.113 00:11:22.113 CUnit - A unit testing framework for C - Version 2.1-3 00:11:22.113 http://cunit.sourceforge.net/ 00:11:22.113 00:11:22.113 00:11:22.113 Suite: bdevio tests on: Nvme1n1 00:11:22.113 Test: blockdev write read block ...passed 00:11:22.113 Test: blockdev write zeroes read block ...passed 00:11:22.113 Test: blockdev write zeroes read no split ...passed 00:11:22.369 Test: blockdev write zeroes read split ...passed 00:11:22.369 Test: blockdev write zeroes read split partial ...passed 00:11:22.369 Test: blockdev reset ...[2024-07-25 10:00:07.344675] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:11:22.369 [2024-07-25 10:00:07.344797] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe79bd0 (9): Bad file descriptor 00:11:22.369 [2024-07-25 10:00:07.405128] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:11:22.369 passed 00:11:22.369 Test: blockdev write read 8 blocks ...passed 00:11:22.369 Test: blockdev write read size > 128k ...passed 00:11:22.369 Test: blockdev write read invalid size ...passed 00:11:22.369 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:22.369 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:22.369 Test: blockdev write read max offset ...passed 00:11:22.369 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:22.625 Test: blockdev writev readv 8 blocks ...passed 00:11:22.625 Test: blockdev writev readv 30 x 1block ...passed 00:11:22.625 Test: blockdev writev readv block ...passed 00:11:22.625 Test: blockdev writev readv size > 128k ...passed 00:11:22.625 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:22.625 Test: blockdev comparev and writev ...[2024-07-25 10:00:07.618809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:22.625 [2024-07-25 10:00:07.618850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:11:22.625 [2024-07-25 10:00:07.618878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:22.625 [2024-07-25 10:00:07.618898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:11:22.626 [2024-07-25 10:00:07.619292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:22.626 [2024-07-25 10:00:07.619318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:11:22.626 [2024-07-25 10:00:07.619342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:22.626 [2024-07-25 10:00:07.619361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:11:22.626 [2024-07-25 10:00:07.619813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:22.626 [2024-07-25 10:00:07.619844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:11:22.626 [2024-07-25 10:00:07.619868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:22.626 [2024-07-25 10:00:07.619888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:11:22.626 [2024-07-25 10:00:07.620284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:22.626 [2024-07-25 10:00:07.620311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:11:22.626 [2024-07-25 10:00:07.620336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:22.626 [2024-07-25 10:00:07.620355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:11:22.626 passed 00:11:22.626 Test: blockdev nvme passthru rw ...passed 00:11:22.626 Test: blockdev nvme passthru vendor specific ...[2024-07-25 10:00:07.702821] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:22.626 [2024-07-25 10:00:07.702852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:11:22.626 [2024-07-25 10:00:07.703058] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:22.626 [2024-07-25 10:00:07.703101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:11:22.626 [2024-07-25 10:00:07.703338] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:22.626 [2024-07-25 10:00:07.703364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:11:22.626 [2024-07-25 10:00:07.703566] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:22.626 [2024-07-25 10:00:07.703593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:11:22.626 passed 00:11:22.626 Test: blockdev nvme admin passthru ...passed 00:11:22.626 Test: blockdev copy ...passed 00:11:22.626 00:11:22.626 Run Summary: Type Total Ran Passed Failed Inactive 00:11:22.626 suites 1 1 n/a 0 0 00:11:22.626 tests 23 23 23 0 0 00:11:22.626 asserts 152 152 152 0 n/a 00:11:22.626 00:11:22.626 Elapsed time = 1.264 seconds 00:11:22.883 10:00:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:22.883 10:00:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:22.883 10:00:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:22.883 10:00:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:22.883 10:00:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:11:22.883 10:00:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:11:22.883 10:00:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:22.883 10:00:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:11:22.883 10:00:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:22.883 10:00:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:11:22.883 10:00:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:22.883 10:00:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:22.883 rmmod nvme_tcp 00:11:22.883 rmmod nvme_fabrics 00:11:23.140 rmmod nvme_keyring 00:11:23.140 10:00:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:23.140 10:00:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:11:23.140 10:00:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 
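All 23 bdevio tests pass (152/152 asserts); the COMPARE FAILURE (02/85), ABORTED - FAILED FUSED (00/09), and INVALID OPCODE (00/01) notices above are the completions the fused compare-and-write and passthru cases deliberately provoke, not faults. Cleanup then unwinds the target; a sketch mirroring the nvmftestfini trace:

scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
sync
modprobe -v -r nvme-tcp        # emits the rmmod nvme_tcp / nvme_fabrics / nvme_keyring lines above
modprobe -v -r nvme-fabrics    # retried for i in {1..20} under set +e until the stack unloads

killprocess follows below: kill -0 confirms pid 373370 is still alive, ps --no-headers -o comm= checks it is an SPDK reactor (reactor_3 here) rather than sudo, and only then does it kill and wait on the target.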
00:11:23.140 10:00:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 373370 ']' 00:11:23.140 10:00:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 373370 00:11:23.140 10:00:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@950 -- # '[' -z 373370 ']' 00:11:23.140 10:00:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # kill -0 373370 00:11:23.140 10:00:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@955 -- # uname 00:11:23.140 10:00:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:23.140 10:00:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 373370 00:11:23.140 10:00:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:11:23.140 10:00:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:11:23.140 10:00:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@968 -- # echo 'killing process with pid 373370' 00:11:23.140 killing process with pid 373370 00:11:23.140 10:00:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@969 -- # kill 373370 00:11:23.140 10:00:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@974 -- # wait 373370 00:11:23.396 10:00:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:23.396 10:00:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:23.396 10:00:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:23.396 10:00:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:23.396 10:00:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:23.396 10:00:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:23.396 10:00:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:23.396 10:00:08 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:25.925 10:00:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:25.925 00:11:25.925 real 0m6.889s 00:11:25.925 user 0m11.008s 00:11:25.925 sys 0m2.407s 00:11:25.925 10:00:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:25.925 10:00:10 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:25.925 ************************************ 00:11:25.925 END TEST nvmf_bdevio 00:11:25.925 ************************************ 00:11:25.925 10:00:10 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:11:25.925 00:11:25.925 real 4m13.067s 00:11:25.925 user 10m52.848s 00:11:25.925 sys 1m19.144s 00:11:25.925 10:00:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:25.925 10:00:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:25.925 ************************************ 00:11:25.925 END TEST nvmf_target_core 00:11:25.925 ************************************ 00:11:25.925 10:00:10 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:11:25.926 10:00:10 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:25.926 10:00:10 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:25.926 10:00:10 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:25.926 ************************************ 00:11:25.926 START TEST nvmf_target_extra 00:11:25.926 ************************************ 00:11:25.926 10:00:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:11:25.926 * Looking for test storage... 00:11:25.926 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:11:25.926 10:00:10 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:25.926 10:00:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:11:25.926 10:00:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:25.926 10:00:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:25.926 10:00:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:25.926 10:00:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:25.926 10:00:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:25.926 10:00:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:25.926 10:00:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:25.926 10:00:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:25.926 10:00:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:25.926 10:00:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:25.926 10:00:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:11:25.926 10:00:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:11:25.926 10:00:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:25.926 10:00:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:25.926 10:00:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:25.926 10:00:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:25.926 10:00:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:25.926 10:00:10 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:25.926 10:00:10 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:25.926 10:00:10 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:25.926 10:00:10 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:25.926 10:00:10 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:25.926 10:00:10 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:25.926 10:00:10 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:11:25.926 10:00:10 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:25.926 10:00:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@47 -- # : 0 00:11:25.926 10:00:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:25.926 10:00:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:25.926 10:00:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:25.926 10:00:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:25.926 10:00:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:25.926 10:00:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:25.926 10:00:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:25.926 10:00:10 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:25.926 10:00:10 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:11:25.926 10:00:10 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:11:25.926 10:00:10 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:11:25.926 10:00:10 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 
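The run_test call just traced is the harness that brackets every suite in this log with asterisk banners and a real/user/sys timing triplet (compare the END TEST nvmf_bdevio and END TEST nvmf_target_core blocks above). A simplified sketch of the wrapper from autotest_common.sh, omitting its xtrace handling and timing-record bookkeeping, which the real function also does:

# Rough shape only, reconstructed from the banners and time output in this log.
run_test() {
    local test_name=$1
    shift
    echo "************************************"
    echo "START TEST $test_name"
    echo "************************************"
    time "$@"                    # source of the real/user/sys lines
    echo "************************************"
    echo "END TEST $test_name"
    echo "************************************"
}

So the START TEST nvmf_example banner below and its eventual END TEST counterpart wrap a single timed invocation of nvmf_example.sh --transport=tcp.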
00:11:25.926 10:00:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:25.926 10:00:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:25.926 10:00:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:25.926 ************************************ 00:11:25.926 START TEST nvmf_example 00:11:25.926 ************************************ 00:11:25.926 10:00:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:11:25.926 * Looking for test storage... 00:11:25.926 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:25.926 10:00:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:25.926 10:00:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:11:25.926 10:00:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:25.926 10:00:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:25.926 10:00:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:25.926 10:00:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:25.926 10:00:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:25.926 10:00:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:25.926 10:00:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:25.926 10:00:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:25.926 10:00:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:25.926 10:00:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:25.926 10:00:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:11:25.926 10:00:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:11:25.926 10:00:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:25.926 10:00:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:25.926 10:00:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:25.926 10:00:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:25.926 10:00:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:25.926 10:00:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:25.926 10:00:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:25.926 10:00:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:25.926 10:00:10 
nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:25.926 10:00:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:25.927 10:00:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:25.927 10:00:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:11:25.927 10:00:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:25.927 10:00:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@47 -- # : 0 00:11:25.927 10:00:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:25.927 10:00:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:25.927 10:00:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:25.927 10:00:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:25.927 10:00:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:25.927 10:00:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' -n 
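Each suite re-sources test/nvmf/common.sh, which is why the PATH trace keeps repeating (every sourcing of export.sh prepends the go/protoc/golangci trio once more, so the string grows each time). The same pass derives the initiator identity from nvme-cli, as traced at nvmf/common.sh@17-19 above; a sketch, with the host-ID extraction hedged as an approximation of what the trace shows:

NVME_HOSTNQN=$(nvme gen-hostnqn)    # nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-... in this run
NVME_HOSTID=${NVME_HOSTNQN##*:}     # assumption: the log reuses the NQN's UUID tail as the host ID
NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")

These flags are what later nvme connect calls use so the target can recognize the host.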
'' ']' 00:11:25.927 10:00:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:25.927 10:00:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:25.927 10:00:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:11:25.927 10:00:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:11:25.927 10:00:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:11:25.927 10:00:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:11:25.927 10:00:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:11:25.927 10:00:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:11:25.927 10:00:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:11:25.927 10:00:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:11:25.927 10:00:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:25.927 10:00:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:25.927 10:00:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:11:25.927 10:00:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:25.927 10:00:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:25.927 10:00:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:25.927 10:00:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:25.927 10:00:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:25.927 10:00:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:25.927 10:00:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:25.927 10:00:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:25.927 10:00:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:25.927 10:00:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:25.927 10:00:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@285 -- # xtrace_disable 00:11:25.927 10:00:10 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:28.456 10:00:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:28.456 10:00:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # pci_devs=() 00:11:28.456 10:00:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:28.456 10:00:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:28.456 10:00:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:28.456 10:00:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 
-- # pci_drivers=() 00:11:28.456 10:00:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:28.456 10:00:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@295 -- # net_devs=() 00:11:28.456 10:00:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:28.456 10:00:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@296 -- # e810=() 00:11:28.456 10:00:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@296 -- # local -ga e810 00:11:28.456 10:00:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # x722=() 00:11:28.456 10:00:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # local -ga x722 00:11:28.456 10:00:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # mlx=() 00:11:28.456 10:00:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # local -ga mlx 00:11:28.456 10:00:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:28.456 10:00:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:28.456 10:00:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:28.456 10:00:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:28.456 10:00:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:28.456 10:00:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:28.456 10:00:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:28.456 10:00:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:28.456 10:00:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:28.456 10:00:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:28.456 10:00:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:28.456 10:00:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:28.456 10:00:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:28.456 10:00:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:28.456 10:00:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:28.456 10:00:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:28.456 10:00:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:28.456 10:00:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:28.456 10:00:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:11:28.456 Found 0000:84:00.0 (0x8086 - 0x159b) 00:11:28.456 10:00:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:28.456 10:00:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # [[ ice 
== unbound ]] 00:11:28.456 10:00:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:28.456 10:00:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:28.456 10:00:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:28.456 10:00:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:28.456 10:00:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:11:28.456 Found 0000:84:00.1 (0x8086 - 0x159b) 00:11:28.456 10:00:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:28.456 10:00:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:28.456 10:00:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:28.456 10:00:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:28.456 10:00:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:28.456 10:00:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:28.456 10:00:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:28.456 10:00:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:28.456 10:00:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:28.456 10:00:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:28.457 10:00:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:28.457 10:00:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:28.457 10:00:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:28.457 10:00:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:28.457 10:00:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:28.457 10:00:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:11:28.457 Found net devices under 0000:84:00.0: cvl_0_0 00:11:28.457 10:00:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:28.457 10:00:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:28.457 10:00:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:28.457 10:00:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:28.457 10:00:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:28.457 10:00:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:28.457 10:00:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:28.457 10:00:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:28.457 10:00:13 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:11:28.457 Found net devices under 0000:84:00.1: cvl_0_1 00:11:28.457 10:00:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:28.457 10:00:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:28.457 10:00:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # is_hw=yes 00:11:28.457 10:00:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:28.457 10:00:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:28.457 10:00:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:28.457 10:00:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:28.457 10:00:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:28.457 10:00:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:28.457 10:00:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:28.457 10:00:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:28.457 10:00:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:28.457 10:00:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:28.457 10:00:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:28.457 10:00:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:28.457 10:00:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:28.457 10:00:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:28.457 10:00:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:28.457 10:00:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:28.457 10:00:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:28.457 10:00:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:28.457 10:00:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:28.457 10:00:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:28.457 10:00:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:28.457 10:00:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:28.457 10:00:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:28.457 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
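(For reference: the nvmf_tcp_init bring-up traced above, completed by the ping exchange below, reduces to the following standalone sketch. The cvl_0_0/cvl_0_1 interface names and the 10.0.0.0/24 addresses are the values from this run; treat them as placeholders for your own two connected ports, and run as root.)

# Hand-run equivalent of the traced nvmf_tcp_init steps
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk                                        # namespace for the target side
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # move the target port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address (host side)
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address (namespace side)
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # admit NVMe/TCP on port 4420
ping -c 1 10.0.0.2                                                  # host -> target check
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target -> host check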
00:11:28.457 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.128 ms
00:11:28.457
00:11:28.457 --- 10.0.0.2 ping statistics ---
00:11:28.457 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:11:28.457 rtt min/avg/max/mdev = 0.128/0.128/0.128/0.000 ms
00:11:28.457 10:00:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:11:28.457 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:11:28.457 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.100 ms
00:11:28.457
00:11:28.457 --- 10.0.0.1 ping statistics ---
00:11:28.457 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:11:28.457 rtt min/avg/max/mdev = 0.100/0.100/0.100/0.000 ms
00:11:28.457 10:00:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:11:28.457 10:00:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # return 0
00:11:28.457 10:00:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:11:28.457 10:00:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:11:28.457 10:00:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:11:28.457 10:00:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:11:28.457 10:00:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:11:28.457 10:00:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:11:28.457 10:00:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:11:28.457 10:00:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF'
00:11:28.457 10:00:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example
00:11:28.457 10:00:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable
00:11:28.457 10:00:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:11:28.457 10:00:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']'
00:11:28.457 10:00:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}")
00:11:28.457 10:00:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=376283
00:11:28.457 10:00:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:11:28.457 10:00:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF
00:11:28.457 10:00:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 376283
00:11:28.457 10:00:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@831 -- # '[' -z 376283 ']'
00:11:28.457 10:00:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:11:28.457 10:00:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@836 -- # local max_retries=100
00:11:28.457 10:00:13 nvmf_tcp.nvmf_target_extra.nvmf_example
-- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:28.457 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:28.457 10:00:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:28.457 10:00:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:28.457 EAL: No free 2048 kB hugepages reported on node 1 00:11:28.715 10:00:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:28.715 10:00:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # return 0 00:11:28.715 10:00:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:11:28.715 10:00:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:28.715 10:00:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:28.715 10:00:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:28.715 10:00:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.715 10:00:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:28.715 10:00:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.715 10:00:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:11:28.715 10:00:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.715 10:00:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:28.715 10:00:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.715 10:00:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:11:28.715 10:00:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:28.715 10:00:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.715 10:00:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:28.715 10:00:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.715 10:00:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:11:28.715 10:00:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:28.715 10:00:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.715 10:00:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:28.715 10:00:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.715 10:00:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:28.715 10:00:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
common/autotest_common.sh@561 -- # xtrace_disable
00:11:28.715 10:00:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x
00:11:28.715 10:00:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:11:28.715 10:00:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf
00:11:28.972 10:00:13 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
00:11:28.972 EAL: No free 2048 kB hugepages reported on node 1
00:11:38.935 Initializing NVMe Controllers
00:11:38.935 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:11:38.935 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:11:38.935 Initialization complete. Launching workers.
00:11:38.935 ========================================================
00:11:38.935                                                                              Latency(us)
00:11:38.935 Device Information                                                     :       IOPS      MiB/s    Average        min        max
00:11:38.935 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:   14566.70      56.90    4395.22     726.93   18944.13
00:11:38.935 ========================================================
00:11:38.935 Total                                                                  :   14566.70      56.90    4395.22     726.93   18944.13
00:11:38.935
00:11:38.935 10:00:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT
00:11:38.935 10:00:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini
00:11:38.935 10:00:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@488 -- # nvmfcleanup
00:11:38.935 10:00:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # sync
00:11:38.935 10:00:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:11:38.935 10:00:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@120 -- # set +e
00:11:38.935 10:00:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # for i in {1..20}
00:11:39.192 10:00:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:11:39.192 rmmod nvme_tcp
00:11:39.192 rmmod nvme_fabrics
00:11:39.193 rmmod nvme_keyring
00:11:39.193 10:00:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:11:39.193 10:00:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set -e
00:11:39.193 10:00:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # return 0
00:11:39.193 10:00:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@489 -- # '[' -n 376283 ']'
00:11:39.193 10:00:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@490 -- # killprocess 376283
00:11:39.193 10:00:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@950 -- # '[' -z 376283 ']'
00:11:39.193 10:00:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # kill -0 376283
00:11:39.193 10:00:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@955 -- # uname
00:11:39.193 10:00:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:11:39.193 10:00:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@956
-- # ps --no-headers -o comm= 376283 00:11:39.193 10:00:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@956 -- # process_name=nvmf 00:11:39.193 10:00:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # '[' nvmf = sudo ']' 00:11:39.193 10:00:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@968 -- # echo 'killing process with pid 376283' 00:11:39.193 killing process with pid 376283 00:11:39.193 10:00:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@969 -- # kill 376283 00:11:39.193 10:00:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@974 -- # wait 376283 00:11:39.451 nvmf threads initialize successfully 00:11:39.451 bdev subsystem init successfully 00:11:39.451 created a nvmf target service 00:11:39.451 create targets's poll groups done 00:11:39.451 all subsystems of target started 00:11:39.451 nvmf target is running 00:11:39.451 all subsystems of target stopped 00:11:39.451 destroy targets's poll groups done 00:11:39.451 destroyed the nvmf target service 00:11:39.451 bdev subsystem finish successfully 00:11:39.451 nvmf threads destroy successfully 00:11:39.451 10:00:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:39.451 10:00:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:39.451 10:00:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:39.451 10:00:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:39.451 10:00:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:39.451 10:00:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:39.451 10:00:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:39.451 10:00:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:41.345 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:41.345 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:11:41.345 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:41.345 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:41.606 00:11:41.606 real 0m15.826s 00:11:41.606 user 0m42.334s 00:11:41.606 sys 0m4.044s 00:11:41.606 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:41.606 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:41.606 ************************************ 00:11:41.606 END TEST nvmf_example 00:11:41.606 ************************************ 00:11:41.606 10:00:26 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:11:41.606 10:00:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:41.606 10:00:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:41.606 10:00:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 
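(For reference: the nvmf_example test that just ended is a compact recipe for standing up an NVMe-oF/TCP target and measuring it. The sketch below restates the traced sequence in hand-runnable form; rpc_cmd and waitforlisten are harness helpers, so scripts/rpc.py and a plain sleep stand in for them, and SPDK_DIR is a placeholder for the build tree. The NQN, bdev sizes, and perf flags are exactly those recorded above.)

# Hand-run equivalent of the traced nvmf_example sequence
SPDK_DIR=/path/to/spdk                                        # placeholder: your SPDK checkout
ip netns exec cvl_0_0_ns_spdk "$SPDK_DIR/build/examples/nvmf" -i 0 -g 10000 -m 0xF &
sleep 3                                                       # the harness polls /var/tmp/spdk.sock via waitforlisten
rpc() { "$SPDK_DIR/scripts/rpc.py" "$@"; }                    # stand-in for the harness's rpc_cmd
rpc nvmf_create_transport -t tcp -o -u 8192                   # TCP transport, option flags as traced
rpc bdev_malloc_create 64 512                                 # 64 MiB RAM bdev, 512 B blocks -> Malloc0
rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
"$SPDK_DIR/build/bin/spdk_nvme_perf" -q 64 -o 4096 -w randrw -M 30 -t 10 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'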
00:11:41.606 ************************************ 00:11:41.606 START TEST nvmf_filesystem 00:11:41.606 ************************************ 00:11:41.606 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:11:41.606 * Looking for test storage... 00:11:41.606 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:41.606 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:11:41.606 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:11:41.606 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:11:41.607 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:11:41.607 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:11:41.607 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:11:41.607 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:11:41.607 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:11:41.607 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:11:41.607 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:11:41.607 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:11:41.607 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:11:41.607 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:11:41.607 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:11:41.607 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:11:41.607 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:11:41.607 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:11:41.607 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:11:41.607 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:11:41.607 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:11:41.607 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:11:41.607 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:11:41.607 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:11:41.607 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # 
CONFIG_RDMA_SEND_WITH_INVAL=y 00:11:41.607 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:11:41.607 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:11:41.607 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:11:41.607 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:11:41.607 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:11:41.607 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:11:41.607 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:11:41.607 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:11:41.607 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:11:41.607 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:11:41.607 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:11:41.607 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:11:41.607 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:11:41.607 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:11:41.607 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:11:41.607 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:11:41.607 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:11:41.607 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:11:41.607 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:11:41.607 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:11:41.607 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:11:41.607 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:11:41.607 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:11:41.607 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:11:41.607 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:11:41.607 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:11:41.607 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:11:41.607 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:11:41.607 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # 
CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:11:41.607 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:11:41.607 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:11:41.607 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:11:41.607 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:11:41.607 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:11:41.607 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:11:41.607 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:11:41.607 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=y 00:11:41.607 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:11:41.607 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:11:41.607 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:11:41.607 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:11:41.607 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:11:41.607 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:11:41.607 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:11:41.607 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:11:41.607 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:11:41.607 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:11:41.607 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:11:41.607 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:11:41.607 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:11:41.607 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:11:41.607 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:11:41.607 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:11:41.607 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:11:41.607 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:11:41.607 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:11:41.607 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:11:41.607 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:11:41.607 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # 
CONFIG_EXAMPLES=y 00:11:41.607 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:11:41.607 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:11:41.607 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:11:41.607 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:11:41.607 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:11:41.607 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:11:41.607 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:11:41.607 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:11:41.607 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:11:41.607 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:11:41.607 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:11:41.607 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:11:41.607 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:11:41.607 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:11:41.607 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:41.608 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:11:41.608 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:41.608 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:11:41.608 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:11:41.608 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:11:41.608 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:11:41.608 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:11:41.608 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:11:41.608 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:11:41.608 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ 
#ifndef SPDK_CONFIG_H 00:11:41.608 #define SPDK_CONFIG_H 00:11:41.608 #define SPDK_CONFIG_APPS 1 00:11:41.608 #define SPDK_CONFIG_ARCH native 00:11:41.608 #undef SPDK_CONFIG_ASAN 00:11:41.608 #undef SPDK_CONFIG_AVAHI 00:11:41.608 #undef SPDK_CONFIG_CET 00:11:41.608 #define SPDK_CONFIG_COVERAGE 1 00:11:41.608 #define SPDK_CONFIG_CROSS_PREFIX 00:11:41.608 #undef SPDK_CONFIG_CRYPTO 00:11:41.608 #undef SPDK_CONFIG_CRYPTO_MLX5 00:11:41.608 #undef SPDK_CONFIG_CUSTOMOCF 00:11:41.608 #undef SPDK_CONFIG_DAOS 00:11:41.608 #define SPDK_CONFIG_DAOS_DIR 00:11:41.608 #define SPDK_CONFIG_DEBUG 1 00:11:41.608 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:11:41.608 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:11:41.608 #define SPDK_CONFIG_DPDK_INC_DIR 00:11:41.608 #define SPDK_CONFIG_DPDK_LIB_DIR 00:11:41.608 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:11:41.608 #undef SPDK_CONFIG_DPDK_UADK 00:11:41.608 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:11:41.608 #define SPDK_CONFIG_EXAMPLES 1 00:11:41.608 #undef SPDK_CONFIG_FC 00:11:41.608 #define SPDK_CONFIG_FC_PATH 00:11:41.608 #define SPDK_CONFIG_FIO_PLUGIN 1 00:11:41.608 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:11:41.608 #undef SPDK_CONFIG_FUSE 00:11:41.608 #undef SPDK_CONFIG_FUZZER 00:11:41.608 #define SPDK_CONFIG_FUZZER_LIB 00:11:41.608 #undef SPDK_CONFIG_GOLANG 00:11:41.608 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:11:41.608 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:11:41.608 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:11:41.608 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:11:41.608 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:11:41.608 #undef SPDK_CONFIG_HAVE_LIBBSD 00:11:41.608 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:11:41.608 #define SPDK_CONFIG_IDXD 1 00:11:41.608 #define SPDK_CONFIG_IDXD_KERNEL 1 00:11:41.608 #undef SPDK_CONFIG_IPSEC_MB 00:11:41.608 #define SPDK_CONFIG_IPSEC_MB_DIR 00:11:41.608 #define SPDK_CONFIG_ISAL 1 00:11:41.608 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:11:41.608 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:11:41.608 #define SPDK_CONFIG_LIBDIR 00:11:41.608 #undef SPDK_CONFIG_LTO 00:11:41.608 #define SPDK_CONFIG_MAX_LCORES 128 00:11:41.608 #define SPDK_CONFIG_NVME_CUSE 1 00:11:41.608 #undef SPDK_CONFIG_OCF 00:11:41.608 #define SPDK_CONFIG_OCF_PATH 00:11:41.608 #define SPDK_CONFIG_OPENSSL_PATH 00:11:41.608 #undef SPDK_CONFIG_PGO_CAPTURE 00:11:41.608 #define SPDK_CONFIG_PGO_DIR 00:11:41.608 #undef SPDK_CONFIG_PGO_USE 00:11:41.608 #define SPDK_CONFIG_PREFIX /usr/local 00:11:41.608 #undef SPDK_CONFIG_RAID5F 00:11:41.608 #undef SPDK_CONFIG_RBD 00:11:41.608 #define SPDK_CONFIG_RDMA 1 00:11:41.608 #define SPDK_CONFIG_RDMA_PROV verbs 00:11:41.608 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:11:41.608 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:11:41.608 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:11:41.608 #define SPDK_CONFIG_SHARED 1 00:11:41.608 #undef SPDK_CONFIG_SMA 00:11:41.608 #define SPDK_CONFIG_TESTS 1 00:11:41.608 #undef SPDK_CONFIG_TSAN 00:11:41.608 #define SPDK_CONFIG_UBLK 1 00:11:41.608 #define SPDK_CONFIG_UBSAN 1 00:11:41.608 #undef SPDK_CONFIG_UNIT_TESTS 00:11:41.608 #undef SPDK_CONFIG_URING 00:11:41.608 #define SPDK_CONFIG_URING_PATH 00:11:41.608 #undef SPDK_CONFIG_URING_ZNS 00:11:41.608 #undef SPDK_CONFIG_USDT 00:11:41.608 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:11:41.608 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:11:41.608 #define SPDK_CONFIG_VFIO_USER 1 00:11:41.608 #define SPDK_CONFIG_VFIO_USER_DIR 00:11:41.608 #define SPDK_CONFIG_VHOST 1 00:11:41.608 
#define SPDK_CONFIG_VIRTIO 1 00:11:41.608 #undef SPDK_CONFIG_VTUNE 00:11:41.608 #define SPDK_CONFIG_VTUNE_DIR 00:11:41.608 #define SPDK_CONFIG_WERROR 1 00:11:41.608 #define SPDK_CONFIG_WPDK_DIR 00:11:41.608 #undef SPDK_CONFIG_XNVME 00:11:41.608 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:11:41.608 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:11:41.608 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:41.608 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:41.608 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:41.608 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:41.608 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:41.608 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:41.608 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:41.608 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:11:41.608 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:41.608 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:11:41.608 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:11:41.608 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:11:41.608 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:11:41.608 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:11:41.608 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:11:41.608 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:11:41.608 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:11:41.608 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:11:41.608 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:11:41.608 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:11:41.608 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:11:41.608 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:11:41.608 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:11:41.608 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:11:41.608 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:11:41.609 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:11:41.609 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:11:41.609 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:11:41.609 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:11:41.609 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:11:41.609 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:11:41.609 10:00:26 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:11:41.609 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:11:41.609 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:11:41.609 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:11:41.609 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:11:41.609 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:11:41.609 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:11:41.609 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:11:41.609 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:11:41.609 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:11:41.609 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:11:41.609 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:11:41.609 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:11:41.609 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:11:41.609 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:11:41.609 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:11:41.609 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:11:41.609 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:11:41.609 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:11:41.609 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:11:41.609 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:11:41.609 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:11:41.609 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:11:41.609 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:11:41.609 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:11:41.609 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:11:41.609 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:11:41.609 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:11:41.609 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:11:41.609 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 
00:11:41.609 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:11:41.609 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:11:41.609 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:11:41.609 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:11:41.609 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:11:41.609 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:11:41.609 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:11:41.609 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:11:41.609 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:11:41.609 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:11:41.609 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:11:41.609 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:11:41.609 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:11:41.609 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:11:41.609 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:11:41.609 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:11:41.609 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:11:41.609 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:11:41.609 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:11:41.609 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:11:41.609 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:11:41.609 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:11:41.609 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:11:41.609 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:11:41.609 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:11:41.609 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:11:41.609 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:11:41.609 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:11:41.609 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:11:41.609 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:11:41.609 10:00:26 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:11:41.609 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:11:41.609 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:11:41.609 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:11:41.609 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:11:41.609 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:11:41.609 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:11:41.609 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 1 00:11:41.609 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:11:41.609 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 00:11:41.609 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:11:41.609 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 0 00:11:41.609 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:11:41.609 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:11:41.609 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:11:41.609 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:11:41.609 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:11:41.609 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:11:41.609 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:11:41.609 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:11:41.609 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:11:41.609 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:11:41.609 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:11:41.609 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 00:11:41.609 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:11:41.610 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : true 00:11:41.610 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:11:41.610 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : 0 00:11:41.610 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:11:41.610 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:11:41.610 10:00:26 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:11:41.610 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:11:41.610 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:11:41.610 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:11:41.610 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:11:41.610 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:11:41.610 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:11:41.610 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:11:41.610 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:11:41.610 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:11:41.610 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:11:41.610 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:11:41.610 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:11:41.610 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:11:41.610 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:11:41.610 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:11:41.610 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:11:41.610 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:11:41.610 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:11:41.610 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:11:41.610 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:11:41.610 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:11:41.610 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:11:41.610 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 00:11:41.610 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:11:41.610 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:11:41.610 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:11:41.610 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:11:41.610 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:11:41.610 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # export 
SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:11:41.610 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:11:41.610 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:11:41.610 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:11:41.610 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:41.610 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:41.610 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:41.610 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:41.610 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:11:41.610 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem 
-- common/autotest_common.sh@183 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:11:41.610 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:11:41.610 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:11:41.610 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # export PYTHONDONTWRITEBYTECODE=1 00:11:41.610 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # PYTHONDONTWRITEBYTECODE=1 00:11:41.610 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:11:41.610 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:11:41.610 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@196 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:11:41.610 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@196 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:11:41.610 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:11:41.610 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@201 -- # rm -rf /var/tmp/asan_suppression_file 00:11:41.610 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@202 -- # cat 00:11:41.610 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@238 -- # echo leak:libfuse3.so 00:11:41.610 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@240 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:11:41.610 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@240 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:11:41.610 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:11:41.610 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
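The traces above wire up the sanitizer runtime for the run: ASAN_OPTIONS and UBSAN_OPTIONS are exported, and a LeakSanitizer suppression file is rebuilt at /var/tmp/asan_suppression_file with a single leak:libfuse3.so rule before LSAN_OPTIONS is pointed at it. A minimal sketch of that sequence, reconstructed from the traced rm/cat/echo commands (the exact redirections are an assumption, since xtrace does not show them):

    # recreate the LSAN suppression file, then point the sanitizers at it
    rm -rf /var/tmp/asan_suppression_file
    echo "leak:libfuse3.so" >> /var/tmp/asan_suppression_file   # assumed append
    export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file
    export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0
    export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134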
common/autotest_common.sh@242 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:11:41.610 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # '[' -z /var/spdk/dependencies ']' 00:11:41.610 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@247 -- # export DEPENDENCY_DIR 00:11:41.610 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:41.610 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:41.610 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@252 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:41.610 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@252 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:41.610 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:11:41.610 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:11:41.610 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:11:41.610 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:11:41.611 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:11:41.611 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:11:41.611 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@261 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:11:41.611 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@261 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:11:41.611 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@264 -- # '[' 0 -eq 0 ']' 00:11:41.611 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # export valgrind= 00:11:41.611 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # valgrind= 00:11:41.611 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@271 -- # uname -s 00:11:41.611 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@271 -- # '[' Linux = Linux ']' 00:11:41.611 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # HUGEMEM=4096 00:11:41.611 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # export CLEAR_HUGE=yes 00:11:41.611 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # CLEAR_HUGE=yes 00:11:41.611 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@274 -- # [[ 0 -eq 1 ]] 00:11:41.611 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@274 -- # [[ 0 -eq 1 ]] 00:11:41.611 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@281 -- # MAKE=make 00:11:41.611 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@282 -- # MAKEFLAGS=-j48 00:11:41.611 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@298 -- # export HUGEMEM=4096 00:11:41.611 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@298 -- # HUGEMEM=4096 00:11:41.611 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@300 -- # NO_HUGE=() 00:11:41.611 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@301 -- # TEST_MODE= 00:11:41.611 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@302 -- # for i in "$@" 00:11:41.611 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@303 -- # case "$i" in 00:11:41.611 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # TEST_TRANSPORT=tcp 00:11:41.611 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@320 -- # [[ -z 377864 ]] 00:11:41.611 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@320 -- # kill -0 377864 00:11:41.611 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set_test_storage 2147483648 00:11:41.611 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@330 -- # [[ -v testdir ]] 00:11:41.611 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@332 -- # local requested_size=2147483648 00:11:41.611 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@333 -- # local mount target_dir 00:11:41.611 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@335 -- # local -A mounts fss sizes avails uses 00:11:41.611 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@336 -- # local source fs size avail mount use 00:11:41.611 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # local storage_fallback storage_candidates 00:11:41.611 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # mktemp -udt spdk.XXXXXX 00:11:41.611 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # storage_fallback=/tmp/spdk.7KsaSI 00:11:41.611 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@345 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:11:41.611 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # [[ -n '' ]] 00:11:41.611 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@352 -- # [[ -n '' ]] 00:11:41.611 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@357 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.7KsaSI/tests/target /tmp/spdk.7KsaSI 00:11:41.611 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@360 -- # requested_size=2214592512 00:11:41.611 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:11:41.611 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
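set_test_storage is entered with 2147483648 bytes (2 GiB), yet the traced requested_size=2214592512 is exactly 64 MiB larger, so the helper evidently pads the request with a safety margin before searching. The fallback name and candidate ordering are visible in the trace; a sketch of just that setup (the padding arithmetic is inferred from the two traced values):

    # fallback name is drawn but not created (-u), e.g. /tmp/spdk.7KsaSI
    storage_fallback=$(mktemp -udt spdk.XXXXXX)
    # candidates, in order: the test's own dir, a per-test subtree under
    # the fallback, then the fallback itself
    storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback")
    requested_size=$((2147483648 + 64 * 1024 * 1024))   # 2214592512, as traced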
common/autotest_common.sh@329 -- # df -T 00:11:41.611 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@329 -- # grep -v Filesystem 00:11:41.611 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=spdk_devtmpfs 00:11:41.611 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=devtmpfs 00:11:41.611 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=67108864 00:11:41.611 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=67108864 00:11:41.611 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=0 00:11:41.611 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:11:41.611 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=/dev/pmem0 00:11:41.611 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=ext2 00:11:41.611 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=949354496 00:11:41.611 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=5284429824 00:11:41.611 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=4335075328 00:11:41.611 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:11:41.611 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=spdk_root 00:11:41.611 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=overlay 00:11:41.611 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=38634520576 00:11:41.611 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=45083295744 00:11:41.611 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=6448775168 00:11:41.611 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:11:41.611 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:11:41.611 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:11:41.611 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=22531727360 00:11:41.611 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=22541647872 00:11:41.611 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=9920512 00:11:41.611 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:11:41.611 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:11:41.611 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 
00:11:41.611 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=8994222080 00:11:41.611 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=9016659968 00:11:41.611 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=22437888 00:11:41.611 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:11:41.611 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:11:41.611 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:11:41.612 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=22540812288 00:11:41.612 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=22541647872 00:11:41.612 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=835584 00:11:41.612 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:11:41.612 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:11:41.612 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:11:41.612 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=4508323840 00:11:41.612 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=4508327936 00:11:41.612 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=4096 00:11:41.612 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:11:41.612 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@368 -- # printf '* Looking for test storage...\n' 00:11:41.612 * Looking for test storage... 
00:11:41.612 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@370 -- # local target_space new_size 00:11:41.612 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # for target_dir in "${storage_candidates[@]}" 00:11:41.612 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:41.612 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # awk '$1 !~ /Filesystem/{print $6}' 00:11:41.612 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mount=/ 00:11:41.612 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # target_space=38634520576 00:11:41.612 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@377 -- # (( target_space == 0 || target_space < requested_size )) 00:11:41.612 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@380 -- # (( target_space >= requested_size )) 00:11:41.612 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # [[ overlay == tmpfs ]] 00:11:41.612 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # [[ overlay == ramfs ]] 00:11:41.612 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # [[ / == / ]] 00:11:41.612 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # new_size=8663367680 00:11:41.612 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@384 -- # (( new_size * 100 / sizes[/] > 95 )) 00:11:41.612 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@389 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:41.612 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@389 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:41.612 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@390 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:41.612 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:41.612 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # return 0 00:11:41.612 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # set -o errtrace 00:11:41.612 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1683 -- # shopt -s extdebug 00:11:41.612 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1684 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:11:41.612 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1686 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:11:41.612 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1687 -- # true 00:11:41.612 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1689 -- # xtrace_fd 00:11:41.612 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:11:41.612 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
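The loop above resolves the test directory's mount point with df and then checks fit: the spdk_root overlay mounted at / offers target_space=38634520576 bytes, well over the requested 2214592512. Because the chosen mount is /, a second guard projects total usage after the test, new_size = used + requested = 6448775168 + 2214592512 = 8663367680, and would reject the mount if that exceeded 95% of the filesystem; here 8663367680 * 100 / 45083295744 is about 19, so / is accepted and SPDK_TEST_STORAGE is exported. The guard, condensed with the traced values:

    used=6448775168 requested=2214592512 fs_size=45083295744   # from the df pass above
    new_size=$((used + requested))                             # 8663367680
    (( new_size * 100 / fs_size > 95 )) && echo "mount too full, try next candidate"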
common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:11:41.612 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:11:41.612 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:11:41.612 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:11:41.612 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:11:41.612 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:11:41.612 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:11:41.612 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:41.871 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:11:41.871 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:41.871 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:41.871 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:41.871 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:41.871 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:41.871 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:41.871 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:41.871 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:41.871 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:41.871 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:41.871 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:11:41.871 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:11:41.871 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:41.871 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:41.871 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:41.871 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:41.871 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:41.871 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:41.871 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:41.871 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@517 -- # source 
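Sourcing test/nvmf/common.sh pins the constants every nvmf test in this run relies on: listener ports 4420/4421/4422, serial SPDKISFASTANDAWESOME, NET_TYPE=phy, and a host identity generated on the fly with nvme gen-hostnqn. The traced NVME_HOSTID equals the UUID suffix of the NQN; a sketch of that derivation (the suffix-stripping expansion is an assumption):

    NVME_HOSTNQN=$(nvme gen-hostnqn)      # nqn.2014-08.org.nvmexpress:uuid:<uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*:}       # assumed: reuse the uuid part as host ID
    NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")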
/etc/opt/spdk-pkgdep/paths/export.sh 00:11:41.871 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:41.871 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:41.871 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:41.871 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:11:41.871 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:41.871 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@47 -- # : 0 00:11:41.871 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:41.871 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:41.871 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 
']' 00:11:41.871 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:41.871 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:41.871 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:41.871 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:41.871 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:41.871 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:11:41.871 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:11:41.871 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:11:41.871 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:41.871 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:41.871 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:41.871 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:41.871 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:41.871 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:41.871 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:41.871 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:41.871 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:41.871 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:41.871 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@285 -- # xtrace_disable 00:11:41.871 10:00:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:44.426 10:00:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:44.426 10:00:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # pci_devs=() 00:11:44.426 10:00:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:44.426 10:00:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:44.426 10:00:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:44.427 10:00:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:44.427 10:00:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:44.427 10:00:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@295 -- # net_devs=() 00:11:44.427 10:00:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:44.427 10:00:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@296 -- # e810=() 00:11:44.427 10:00:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@296 -- # local -ga e810 00:11:44.427 
10:00:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # x722=() 00:11:44.427 10:00:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # local -ga x722 00:11:44.427 10:00:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # mlx=() 00:11:44.427 10:00:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # local -ga mlx 00:11:44.427 10:00:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:44.427 10:00:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:44.427 10:00:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:44.427 10:00:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:44.427 10:00:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:44.427 10:00:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:44.427 10:00:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:44.427 10:00:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:44.427 10:00:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:44.427 10:00:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:44.427 10:00:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:44.427 10:00:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:44.427 10:00:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:44.427 10:00:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:44.427 10:00:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:44.427 10:00:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:44.427 10:00:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:44.427 10:00:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:44.427 10:00:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:11:44.427 Found 0000:84:00.0 (0x8086 - 0x159b) 00:11:44.427 10:00:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:44.427 10:00:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:44.427 10:00:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:44.427 10:00:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:44.427 10:00:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:44.427 10:00:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in 
"${pci_devs[@]}" 00:11:44.427 10:00:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:11:44.427 Found 0000:84:00.1 (0x8086 - 0x159b) 00:11:44.427 10:00:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:44.427 10:00:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:44.427 10:00:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:44.427 10:00:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:44.427 10:00:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:44.427 10:00:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:44.427 10:00:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:44.427 10:00:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:44.427 10:00:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:44.427 10:00:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:44.427 10:00:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:44.427 10:00:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:44.427 10:00:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:44.427 10:00:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:44.427 10:00:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:44.427 10:00:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:11:44.427 Found net devices under 0000:84:00.0: cvl_0_0 00:11:44.427 10:00:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:44.427 10:00:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:44.427 10:00:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:44.427 10:00:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:44.427 10:00:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:44.427 10:00:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:44.427 10:00:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:44.427 10:00:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:44.427 10:00:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:11:44.427 Found net devices under 0000:84:00.1: cvl_0_1 00:11:44.427 10:00:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:44.427 10:00:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 
00:11:44.427 10:00:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # is_hw=yes 00:11:44.427 10:00:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:44.427 10:00:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:44.427 10:00:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:44.427 10:00:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:44.427 10:00:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:44.427 10:00:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:44.427 10:00:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:44.427 10:00:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:44.427 10:00:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:44.427 10:00:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:44.427 10:00:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:44.427 10:00:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:44.427 10:00:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:44.427 10:00:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:44.427 10:00:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:44.427 10:00:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:44.427 10:00:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:44.427 10:00:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:44.427 10:00:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:44.427 10:00:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:44.427 10:00:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:44.427 10:00:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:44.427 10:00:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:44.427 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:44.427 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.156 ms 00:11:44.427 00:11:44.427 --- 10.0.0.2 ping statistics --- 00:11:44.427 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:44.427 rtt min/avg/max/mdev = 0.156/0.156/0.156/0.000 ms 00:11:44.427 10:00:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:44.427 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:44.427 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.147 ms 00:11:44.428 00:11:44.428 --- 10.0.0.1 ping statistics --- 00:11:44.428 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:44.428 rtt min/avg/max/mdev = 0.147/0.147/0.147/0.000 ms 00:11:44.428 10:00:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:44.428 10:00:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # return 0 00:11:44.428 10:00:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:44.428 10:00:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:44.428 10:00:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:44.428 10:00:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:44.428 10:00:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:44.428 10:00:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:44.428 10:00:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:44.428 10:00:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:11:44.428 10:00:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:44.428 10:00:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:44.428 10:00:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:44.428 ************************************ 00:11:44.428 START TEST nvmf_filesystem_no_in_capsule 00:11:44.428 ************************************ 00:11:44.428 10:00:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1125 -- # nvmf_filesystem_part 0 00:11:44.428 10:00:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:11:44.428 10:00:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:11:44.428 10:00:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:44.428 10:00:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:44.428 10:00:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:44.428 10:00:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=379522 00:11:44.428 10:00:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 379522 00:11:44.428 10:00:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@831 -- # '[' -z 379522 ']' 00:11:44.428 10:00:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:44.428 10:00:29 
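At this point nvmftestinit has split the two physical ports into a loopback test topology: cvl_0_0 moves into the cvl_0_0_ns_spdk namespace as the target side (10.0.0.2/24), cvl_0_1 stays in the root namespace as the initiator side (10.0.0.1/24), port 4420 is opened in iptables, and both directions are ping-verified before NVMF_APP is prefixed with the netns exec command. Consolidating the traced commands:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target port into the ns
    ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side, root ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                     # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1       # target -> initiator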
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:44.428 10:00:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:44.428 10:00:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:44.428 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:44.428 10:00:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:44.428 10:00:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:44.686 [2024-07-25 10:00:29.605838] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:11:44.686 [2024-07-25 10:00:29.606015] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:44.686 EAL: No free 2048 kB hugepages reported on node 1 00:11:44.686 [2024-07-25 10:00:29.717183] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:44.686 [2024-07-25 10:00:29.844918] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:44.686 [2024-07-25 10:00:29.844978] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:44.686 [2024-07-25 10:00:29.845006] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:44.686 [2024-07-25 10:00:29.845029] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:44.686 [2024-07-25 10:00:29.845048] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
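waitforlisten 379522 blocks until the just-launched nvmf_tgt is both alive and serving /var/tmp/spdk.sock; the EAL banner and hugepage notice above are the target's own startup output captured inline. A purely hypothetical polling loop in the same spirit (the real helper lives in autotest_common.sh and its exact checks are not visible in this trace):

    # hypothetical sketch, not SPDK's implementation
    waitforlisten_sketch() {
        local pid=$1 sock=${2:-/var/tmp/spdk.sock} i
        for ((i = 0; i < 100; i++)); do
            kill -0 "$pid" 2>/dev/null || return 1   # target died during startup
            [[ -S $sock ]] && return 0               # RPC socket is up
            sleep 0.1
        done
        return 1
    }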
00:11:44.686 [2024-07-25 10:00:29.845118] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:44.686 [2024-07-25 10:00:29.845178] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:44.686 [2024-07-25 10:00:29.845241] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:44.686 [2024-07-25 10:00:29.845231] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:45.621 10:00:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:45.621 10:00:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # return 0 00:11:45.621 10:00:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:45.621 10:00:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:45.621 10:00:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:45.621 10:00:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:45.621 10:00:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:11:45.621 10:00:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:11:45.621 10:00:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.621 10:00:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:45.621 [2024-07-25 10:00:30.650396] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:45.621 10:00:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.621 10:00:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:11:45.621 10:00:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.621 10:00:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:45.878 Malloc1 00:11:45.878 10:00:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.878 10:00:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:45.878 10:00:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.878 10:00:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:45.878 10:00:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.878 10:00:30 
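With all four reactors running, the test provisions the target over JSON-RPC: a TCP transport with an 8192-byte I/O unit and in-capsule data disabled (-c 0, matching the in_capsule=0 variant of this test), a 512 MiB malloc bdev with 512-byte blocks, and subsystem nqn.2016-06.io.spdk:cnode1; the next traced steps attach the bdev as a namespace and open the TCP listener. Assuming rpc_cmd wraps scripts/rpc.py against the default socket, the equivalent calls would be:

    # assumed rpc.py equivalents of the traced rpc_cmd calls
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0
    ./scripts/rpc.py bdev_malloc_create 512 512 -b Malloc1     # 512 MiB, 512 B blocks
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -a -s SPDKISFASTANDAWESOME
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420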
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:45.878 10:00:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.878 10:00:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:45.878 10:00:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.878 10:00:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:45.878 10:00:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.878 10:00:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:45.878 [2024-07-25 10:00:30.837690] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:45.878 10:00:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.878 10:00:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:11:45.878 10:00:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:11:45.878 10:00:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:11:45.878 10:00:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:11:45.878 10:00:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:11:45.878 10:00:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:11:45.878 10:00:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:45.878 10:00:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:45.878 10:00:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:45.878 10:00:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:11:45.878 { 00:11:45.878 "name": "Malloc1", 00:11:45.878 "aliases": [ 00:11:45.878 "1b4a9907-9df4-4b20-abb7-f9895108fd7a" 00:11:45.878 ], 00:11:45.878 "product_name": "Malloc disk", 00:11:45.878 "block_size": 512, 00:11:45.878 "num_blocks": 1048576, 00:11:45.878 "uuid": "1b4a9907-9df4-4b20-abb7-f9895108fd7a", 00:11:45.878 "assigned_rate_limits": { 00:11:45.878 "rw_ios_per_sec": 0, 00:11:45.878 "rw_mbytes_per_sec": 0, 00:11:45.878 "r_mbytes_per_sec": 0, 00:11:45.878 "w_mbytes_per_sec": 0 00:11:45.878 }, 00:11:45.878 "claimed": true, 00:11:45.878 "claim_type": "exclusive_write", 00:11:45.878 "zoned": false, 00:11:45.878 "supported_io_types": { 00:11:45.878 "read": 
true, 00:11:45.878 "write": true, 00:11:45.878 "unmap": true, 00:11:45.878 "flush": true, 00:11:45.878 "reset": true, 00:11:45.878 "nvme_admin": false, 00:11:45.878 "nvme_io": false, 00:11:45.878 "nvme_io_md": false, 00:11:45.878 "write_zeroes": true, 00:11:45.878 "zcopy": true, 00:11:45.878 "get_zone_info": false, 00:11:45.878 "zone_management": false, 00:11:45.878 "zone_append": false, 00:11:45.878 "compare": false, 00:11:45.878 "compare_and_write": false, 00:11:45.878 "abort": true, 00:11:45.878 "seek_hole": false, 00:11:45.879 "seek_data": false, 00:11:45.879 "copy": true, 00:11:45.879 "nvme_iov_md": false 00:11:45.879 }, 00:11:45.879 "memory_domains": [ 00:11:45.879 { 00:11:45.879 "dma_device_id": "system", 00:11:45.879 "dma_device_type": 1 00:11:45.879 }, 00:11:45.879 { 00:11:45.879 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:45.879 "dma_device_type": 2 00:11:45.879 } 00:11:45.879 ], 00:11:45.879 "driver_specific": {} 00:11:45.879 } 00:11:45.879 ]' 00:11:45.879 10:00:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:11:45.879 10:00:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:11:45.879 10:00:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:11:45.879 10:00:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:11:45.879 10:00:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:11:45.879 10:00:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:11:45.879 10:00:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:11:45.879 10:00:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:46.809 10:00:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:11:46.809 10:00:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:11:46.809 10:00:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:46.809 10:00:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:11:46.809 10:00:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:11:48.704 10:00:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:48.704 10:00:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:48.704 10:00:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # grep -c 
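get_bdev_size pulls block_size and num_blocks out of the bdev_get_bdevs JSON with jq and reports the size in MiB: 512 * 1048576 = 536870912 bytes = 512 MiB, matching MALLOC_BDEV_SIZE, after which the host connects with the generated hostnqn and waitforserial polls lsblk until the namespace with the matching serial appears. The size arithmetic, condensed:

    bs=512 nb=1048576                      # jq '.[] .block_size' / '.[] .num_blocks'
    echo $((bs * nb / 1024 / 1024))        # 512 MiB, as traced
    malloc_size=$((512 * 1024 * 1024))     # 536870912 bytes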
SPDKISFASTANDAWESOME 00:11:48.704 10:00:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:11:48.704 10:00:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:48.704 10:00:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:11:48.705 10:00:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:11:48.705 10:00:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:11:48.705 10:00:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:11:48.705 10:00:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:11:48.705 10:00:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:11:48.705 10:00:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:11:48.705 10:00:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:11:48.705 10:00:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:11:48.705 10:00:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:11:48.705 10:00:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:11:48.705 10:00:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:11:48.962 10:00:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:11:49.525 10:00:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:11:50.895 10:00:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:11:50.895 10:00:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:11:50.895 10:00:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:11:50.895 10:00:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:50.895 10:00:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:50.895 ************************************ 00:11:50.895 START TEST filesystem_ext4 00:11:50.895 ************************************ 00:11:50.895 10:00:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create ext4 nvme0n1 
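[The trace above is the setup half of the no-in-capsule run: the expected size of the exported bdev is derived from bdev_get_bdevs (block_size times num_blocks), the kernel initiator connects over TCP, and the script polls lsblk until a namespace with serial SPDKISFASTANDAWESOME appears before partitioning it. A condensed sketch of that pattern, assuming rpc_cmd is the harness wrapper around scripts/rpc.py; helper internals are simplified and this is not the exact filesystem.sh source:

  # Expected size of the exported Malloc1 bdev, in bytes
  bs=$(rpc_cmd bdev_get_bdevs -b Malloc1 | jq '.[] .block_size')
  nb=$(rpc_cmd bdev_get_bdevs -b Malloc1 | jq '.[] .num_blocks')
  malloc_size=$((bs * nb))                        # 512 * 1048576 = 536870912

  # Connect the kernel initiator (hostnqn/hostid flags as in the log line above)
  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
  for ((i = 0; i <= 15; i++)); do                 # waitforserial, simplified
    sleep 2
    lsblk -l -o NAME,SERIAL | grep -qw SPDKISFASTANDAWESOME && break
  done
  nvme_name=$(lsblk -l -o NAME,SERIAL | grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)')
  parted -s "/dev/$nvme_name" mklabel gpt mkpart SPDK_TEST 0% 100%
  partprobe                                       # re-read the partition table before mkfs
]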
00:11:50.895 10:00:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:11:50.895 10:00:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:50.895 10:00:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:11:50.895 10:00:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@926 -- # local fstype=ext4 00:11:50.895 10:00:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:11:50.895 10:00:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@928 -- # local i=0 00:11:50.895 10:00:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # local force 00:11:50.895 10:00:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # '[' ext4 = ext4 ']' 00:11:50.895 10:00:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # force=-F 00:11:50.895 10:00:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@937 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:11:50.895 mke2fs 1.46.5 (30-Dec-2021) 00:11:50.895 Discarding device blocks: 0/522240 done 00:11:50.895 Creating filesystem with 522240 1k blocks and 130560 inodes 00:11:50.895 Filesystem UUID: 9d8e4d7f-8a22-44dc-a58f-6bda2ac0f812 00:11:50.895 Superblock backups stored on blocks: 00:11:50.895 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:11:50.895 00:11:50.895 Allocating group tables: 0/64 done 00:11:50.895 Writing inode tables: 0/64 done 00:11:50.895 Creating journal (8192 blocks): done 00:11:51.972 Writing superblocks and filesystem accounting information: 0/64 done 00:11:51.972 00:11:51.972 10:00:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@945 -- # return 0 00:11:51.972 10:00:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:52.535 10:00:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:52.535 10:00:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:11:52.535 10:00:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:52.535 10:00:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:11:52.535 10:00:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:11:52.535 10:00:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:52.535 
10:00:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 379522 00:11:52.535 10:00:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:52.535 10:00:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:52.535 10:00:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:52.535 10:00:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:52.535 00:11:52.535 real 0m2.007s 00:11:52.535 user 0m0.021s 00:11:52.535 sys 0m0.061s 00:11:52.535 10:00:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:52.535 10:00:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:11:52.535 ************************************ 00:11:52.535 END TEST filesystem_ext4 00:11:52.535 ************************************ 00:11:52.535 10:00:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:11:52.535 10:00:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:11:52.535 10:00:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:52.535 10:00:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:52.792 ************************************ 00:11:52.792 START TEST filesystem_btrfs 00:11:52.792 ************************************ 00:11:52.793 10:00:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create btrfs nvme0n1 00:11:52.793 10:00:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:11:52.793 10:00:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:52.793 10:00:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:11:52.793 10:00:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # local fstype=btrfs 00:11:52.793 10:00:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:11:52.793 10:00:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@928 -- # local i=0 00:11:52.793 10:00:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # local force 00:11:52.793 10:00:37 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # '[' btrfs = ext4 ']' 00:11:52.793 10:00:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@934 -- # force=-f 00:11:52.793 10:00:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@937 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:11:53.050 btrfs-progs v6.6.2 00:11:53.050 See https://btrfs.readthedocs.io for more information. 00:11:53.050 00:11:53.050 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:11:53.050 NOTE: several default settings have changed in version 5.15, please make sure 00:11:53.050 this does not affect your deployments: 00:11:53.050 - DUP for metadata (-m dup) 00:11:53.050 - enabled no-holes (-O no-holes) 00:11:53.050 - enabled free-space-tree (-R free-space-tree) 00:11:53.050 00:11:53.050 Label: (null) 00:11:53.050 UUID: c1a709d9-cac3-4904-8605-b9602149a6b4 00:11:53.050 Node size: 16384 00:11:53.050 Sector size: 4096 00:11:53.050 Filesystem size: 510.00MiB 00:11:53.050 Block group profiles: 00:11:53.050 Data: single 8.00MiB 00:11:53.050 Metadata: DUP 32.00MiB 00:11:53.050 System: DUP 8.00MiB 00:11:53.050 SSD detected: yes 00:11:53.050 Zoned device: no 00:11:53.050 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:11:53.050 Runtime features: free-space-tree 00:11:53.050 Checksum: crc32c 00:11:53.050 Number of devices: 1 00:11:53.050 Devices: 00:11:53.050 ID SIZE PATH 00:11:53.050 1 510.00MiB /dev/nvme0n1p1 00:11:53.050 00:11:53.050 10:00:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@945 -- # return 0 00:11:53.050 10:00:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:53.618 10:00:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:53.618 10:00:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:11:53.618 10:00:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:53.618 10:00:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:11:53.618 10:00:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:11:53.618 10:00:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:53.618 10:00:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 379522 00:11:53.618 10:00:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:53.618 10:00:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:53.618 10:00:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # 
lsblk -l -o NAME 00:11:53.618 10:00:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:53.618 00:11:53.618 real 0m0.850s 00:11:53.618 user 0m0.018s 00:11:53.618 sys 0m0.118s 00:11:53.618 10:00:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:53.618 10:00:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:11:53.618 ************************************ 00:11:53.618 END TEST filesystem_btrfs 00:11:53.618 ************************************ 00:11:53.618 10:00:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:11:53.618 10:00:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:11:53.618 10:00:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:53.618 10:00:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:53.618 ************************************ 00:11:53.618 START TEST filesystem_xfs 00:11:53.618 ************************************ 00:11:53.618 10:00:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create xfs nvme0n1 00:11:53.618 10:00:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:11:53.618 10:00:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:53.618 10:00:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:11:53.618 10:00:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # local fstype=xfs 00:11:53.618 10:00:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:11:53.618 10:00:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@928 -- # local i=0 00:11:53.618 10:00:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # local force 00:11:53.618 10:00:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@931 -- # '[' xfs = ext4 ']' 00:11:53.618 10:00:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@934 -- # force=-f 00:11:53.618 10:00:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@937 -- # mkfs.xfs -f /dev/nvme0n1p1 00:11:53.618 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:11:53.618 = sectsz=512 attr=2, projid32bit=1 00:11:53.618 = crc=1 finobt=1, sparse=1, rmapbt=0 00:11:53.618 = reflink=1 bigtime=1 
inobtcount=1 nrext64=0 00:11:53.618 data = bsize=4096 blocks=130560, imaxpct=25 00:11:53.618 = sunit=0 swidth=0 blks 00:11:53.618 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:11:53.618 log =internal log bsize=4096 blocks=16384, version=2 00:11:53.618 = sectsz=512 sunit=0 blks, lazy-count=1 00:11:53.618 realtime =none extsz=4096 blocks=0, rtextents=0 00:11:54.996 Discarding blocks...Done. 00:11:54.996 10:00:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@945 -- # return 0 00:11:54.996 10:00:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:57.537 10:00:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:57.537 10:00:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:11:57.537 10:00:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:57.537 10:00:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:11:57.537 10:00:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:11:57.537 10:00:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:57.537 10:00:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 379522 00:11:57.537 10:00:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:57.537 10:00:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:57.537 10:00:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:57.537 10:00:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:57.537 00:11:57.537 real 0m3.609s 00:11:57.537 user 0m0.019s 00:11:57.537 sys 0m0.060s 00:11:57.537 10:00:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:57.537 10:00:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:11:57.537 ************************************ 00:11:57.537 END TEST filesystem_xfs 00:11:57.537 ************************************ 00:11:57.537 10:00:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:11:57.537 10:00:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:11:57.537 10:00:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:57.537 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 
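[Each filesystem_* subtest above follows the same smoke-test shape: build the filesystem on the partition, mount it, write, sync, and remove a file, unmount, then confirm the target (pid 379522) is still alive and that lsblk still sees both the namespace and the partition. A sketch of one iteration as traced, with error handling and xtrace plumbing omitted:

  fstype=$1 dev=/dev/nvme0n1p1
  force=-f; [ "$fstype" = ext4 ] && force=-F      # mkfs.ext4 takes -F, btrfs/xfs take -f
  "mkfs.$fstype" $force "$dev"
  mount "$dev" /mnt/device
  touch /mnt/device/aaa && sync                   # prove writes reach the target
  rm /mnt/device/aaa && sync
  umount /mnt/device
  kill -0 "$nvmfpid"                              # target process survived the I/O
  lsblk -l -o NAME | grep -q -w nvme0n1           # namespace still visible
  lsblk -l -o NAME | grep -q -w nvme0n1p1         # partition still visible
]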
00:11:57.537 10:00:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:57.537 10:00:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:11:57.537 10:00:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:11:57.537 10:00:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:57.537 10:00:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:11:57.537 10:00:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:57.537 10:00:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:11:57.537 10:00:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:57.537 10:00:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:57.537 10:00:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:57.537 10:00:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:57.537 10:00:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:11:57.537 10:00:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 379522 00:11:57.537 10:00:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@950 -- # '[' -z 379522 ']' 00:11:57.537 10:00:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # kill -0 379522 00:11:57.537 10:00:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@955 -- # uname 00:11:57.537 10:00:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:57.537 10:00:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 379522 00:11:57.797 10:00:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:57.797 10:00:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:57.797 10:00:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@968 -- # echo 'killing process with pid 379522' 00:11:57.797 killing process with pid 379522 00:11:57.797 10:00:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@969 -- # kill 379522 00:11:57.797 10:00:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@974 -- # wait 379522 00:11:58.378 10:00:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:11:58.378 00:11:58.378 real 0m13.740s 00:11:58.378 user 0m52.632s 00:11:58.378 sys 0m2.070s 00:11:58.378 10:00:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:58.378 10:00:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:58.378 ************************************ 00:11:58.378 END TEST nvmf_filesystem_no_in_capsule 00:11:58.378 ************************************ 00:11:58.378 10:00:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:11:58.378 10:00:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:58.378 10:00:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:58.378 10:00:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:58.378 ************************************ 00:11:58.378 START TEST nvmf_filesystem_in_capsule 00:11:58.378 ************************************ 00:11:58.378 10:00:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1125 -- # nvmf_filesystem_part 4096 00:11:58.378 10:00:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:11:58.378 10:00:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:11:58.378 10:00:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:58.378 10:00:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:58.378 10:00:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:58.378 10:00:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=381330 00:11:58.378 10:00:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:58.378 10:00:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 381330 00:11:58.378 10:00:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@831 -- # '[' -z 381330 ']' 00:11:58.378 10:00:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:58.378 10:00:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:58.378 10:00:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:58.378 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
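[The teardown just traced detaches the initiator before dismantling the target: drop the test partition, disconnect the controller, wait for the serial to vanish from lsblk, delete the subsystem over RPC, and only then kill the app, after killprocess has checked that the pid's comm is still the SPDK reactor. In condensed sketch form (the comm check guards against killing a recycled pid):

  flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1
  sync
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1
  while lsblk -l -o NAME,SERIAL | grep -q -w SPDKISFASTANDAWESOME; do
    sleep 1                                       # waitforserial_disconnect, simplified
  done
  rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  [ "$(ps --no-headers -o comm= "$nvmfpid")" = reactor_0 ] \
    && kill "$nvmfpid" && wait "$nvmfpid"         # killprocess, condensed
]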
00:11:58.378 10:00:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:58.378 10:00:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:58.378 [2024-07-25 10:00:43.355628] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:11:58.378 [2024-07-25 10:00:43.355721] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:58.378 EAL: No free 2048 kB hugepages reported on node 1 00:11:58.378 [2024-07-25 10:00:43.436065] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:58.665 [2024-07-25 10:00:43.560586] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:58.665 [2024-07-25 10:00:43.560642] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:58.665 [2024-07-25 10:00:43.560668] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:58.665 [2024-07-25 10:00:43.560689] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:58.665 [2024-07-25 10:00:43.560707] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:58.665 [2024-07-25 10:00:43.560784] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:58.665 [2024-07-25 10:00:43.560837] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:58.665 [2024-07-25 10:00:43.560902] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:58.665 [2024-07-25 10:00:43.560895] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:58.665 10:00:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:58.665 10:00:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # return 0 00:11:58.665 10:00:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:58.665 10:00:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:58.665 10:00:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:58.665 10:00:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:58.665 10:00:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:11:58.665 10:00:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:11:58.665 10:00:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.665 10:00:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:58.665 [2024-07-25 10:00:43.738272] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport 
Init *** 00:11:58.665 10:00:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.665 10:00:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:11:58.665 10:00:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.665 10:00:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:58.933 Malloc1 00:11:58.933 10:00:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.933 10:00:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:58.933 10:00:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.933 10:00:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:58.933 10:00:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.933 10:00:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:58.933 10:00:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.933 10:00:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:58.933 10:00:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.933 10:00:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:58.933 10:00:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.933 10:00:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:58.933 [2024-07-25 10:00:43.931353] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:58.933 10:00:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.933 10:00:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:11:58.933 10:00:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:11:58.933 10:00:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:11:58.933 10:00:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:11:58.933 10:00:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:11:58.933 10:00:43 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:11:58.933 10:00:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:58.933 10:00:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:58.933 10:00:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:58.933 10:00:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:11:58.933 { 00:11:58.933 "name": "Malloc1", 00:11:58.933 "aliases": [ 00:11:58.933 "f631e1f3-b180-435c-b90a-5ffa455f0e35" 00:11:58.933 ], 00:11:58.933 "product_name": "Malloc disk", 00:11:58.933 "block_size": 512, 00:11:58.933 "num_blocks": 1048576, 00:11:58.933 "uuid": "f631e1f3-b180-435c-b90a-5ffa455f0e35", 00:11:58.933 "assigned_rate_limits": { 00:11:58.933 "rw_ios_per_sec": 0, 00:11:58.933 "rw_mbytes_per_sec": 0, 00:11:58.933 "r_mbytes_per_sec": 0, 00:11:58.933 "w_mbytes_per_sec": 0 00:11:58.933 }, 00:11:58.933 "claimed": true, 00:11:58.933 "claim_type": "exclusive_write", 00:11:58.933 "zoned": false, 00:11:58.933 "supported_io_types": { 00:11:58.933 "read": true, 00:11:58.933 "write": true, 00:11:58.933 "unmap": true, 00:11:58.933 "flush": true, 00:11:58.933 "reset": true, 00:11:58.933 "nvme_admin": false, 00:11:58.933 "nvme_io": false, 00:11:58.933 "nvme_io_md": false, 00:11:58.933 "write_zeroes": true, 00:11:58.933 "zcopy": true, 00:11:58.933 "get_zone_info": false, 00:11:58.933 "zone_management": false, 00:11:58.933 "zone_append": false, 00:11:58.933 "compare": false, 00:11:58.933 "compare_and_write": false, 00:11:58.933 "abort": true, 00:11:58.933 "seek_hole": false, 00:11:58.933 "seek_data": false, 00:11:58.933 "copy": true, 00:11:58.933 "nvme_iov_md": false 00:11:58.933 }, 00:11:58.933 "memory_domains": [ 00:11:58.933 { 00:11:58.933 "dma_device_id": "system", 00:11:58.933 "dma_device_type": 1 00:11:58.933 }, 00:11:58.933 { 00:11:58.933 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:58.933 "dma_device_type": 2 00:11:58.933 } 00:11:58.933 ], 00:11:58.933 "driver_specific": {} 00:11:58.933 } 00:11:58.933 ]' 00:11:58.934 10:00:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:11:58.934 10:00:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:11:58.934 10:00:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:11:59.194 10:00:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:11:59.194 10:00:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:11:59.194 10:00:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:11:59.194 10:00:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:11:59.194 10:00:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:59.762 10:00:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:11:59.762 10:00:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:11:59.762 10:00:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:59.762 10:00:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:11:59.762 10:00:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:12:01.682 10:00:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:01.682 10:00:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:01.682 10:00:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:01.682 10:00:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:01.682 10:00:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:01.682 10:00:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:12:01.682 10:00:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:12:01.682 10:00:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:12:01.682 10:00:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:12:01.682 10:00:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:12:01.682 10:00:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:12:01.682 10:00:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:12:01.682 10:00:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:12:01.682 10:00:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:12:01.682 10:00:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:12:01.682 10:00:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:12:01.682 10:00:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:12:02.251 10:00:47 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:12:03.189 10:00:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:12:04.129 10:00:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:12:04.129 10:00:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:12:04.129 10:00:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:12:04.129 10:00:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:04.129 10:00:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:04.129 ************************************ 00:12:04.129 START TEST filesystem_in_capsule_ext4 00:12:04.129 ************************************ 00:12:04.129 10:00:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create ext4 nvme0n1 00:12:04.129 10:00:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:12:04.129 10:00:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:04.129 10:00:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:12:04.129 10:00:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@926 -- # local fstype=ext4 00:12:04.129 10:00:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:12:04.129 10:00:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@928 -- # local i=0 00:12:04.129 10:00:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # local force 00:12:04.129 10:00:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # '[' ext4 = ext4 ']' 00:12:04.129 10:00:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # force=-F 00:12:04.129 10:00:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@937 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:12:04.129 mke2fs 1.46.5 (30-Dec-2021) 00:12:04.389 Discarding device blocks: 0/522240 done 00:12:04.389 Creating filesystem with 522240 1k blocks and 130560 inodes 00:12:04.389 Filesystem UUID: 9ff1cf42-e039-4607-a78c-f2c0e11b4198 00:12:04.389 Superblock backups stored on blocks: 00:12:04.389 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:12:04.389 00:12:04.389 Allocating group tables: 0/64 done 00:12:04.389 Writing inode tables: 
0/64 done 00:12:04.389 Creating journal (8192 blocks): done 00:12:04.389 Writing superblocks and filesystem accounting information: 0/64 done 00:12:04.389 00:12:04.389 10:00:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@945 -- # return 0 00:12:04.389 10:00:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:05.331 10:00:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:05.331 10:00:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:12:05.331 10:00:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:05.331 10:00:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:12:05.331 10:00:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:12:05.331 10:00:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:05.331 10:00:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 381330 00:12:05.331 10:00:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:05.331 10:00:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:05.331 10:00:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:05.331 10:00:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:05.331 00:12:05.331 real 0m1.188s 00:12:05.331 user 0m0.021s 00:12:05.331 sys 0m0.049s 00:12:05.331 10:00:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:05.331 10:00:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:12:05.331 ************************************ 00:12:05.331 END TEST filesystem_in_capsule_ext4 00:12:05.331 ************************************ 00:12:05.331 10:00:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:12:05.331 10:00:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:12:05.331 10:00:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:05.331 10:00:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:05.331 
************************************ 00:12:05.331 START TEST filesystem_in_capsule_btrfs 00:12:05.331 ************************************ 00:12:05.331 10:00:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create btrfs nvme0n1 00:12:05.331 10:00:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:12:05.331 10:00:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:05.331 10:00:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:12:05.331 10:00:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@926 -- # local fstype=btrfs 00:12:05.331 10:00:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:12:05.331 10:00:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@928 -- # local i=0 00:12:05.331 10:00:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # local force 00:12:05.331 10:00:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # '[' btrfs = ext4 ']' 00:12:05.331 10:00:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@934 -- # force=-f 00:12:05.331 10:00:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@937 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:12:05.901 btrfs-progs v6.6.2 00:12:05.901 See https://btrfs.readthedocs.io for more information. 00:12:05.901 00:12:05.901 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:12:05.901 NOTE: several default settings have changed in version 5.15, please make sure 00:12:05.901 this does not affect your deployments: 00:12:05.901 - DUP for metadata (-m dup) 00:12:05.901 - enabled no-holes (-O no-holes) 00:12:05.901 - enabled free-space-tree (-R free-space-tree) 00:12:05.901 00:12:05.901 Label: (null) 00:12:05.901 UUID: 0607a480-52e6-401e-9651-9c6cad299aff 00:12:05.901 Node size: 16384 00:12:05.901 Sector size: 4096 00:12:05.901 Filesystem size: 510.00MiB 00:12:05.901 Block group profiles: 00:12:05.901 Data: single 8.00MiB 00:12:05.901 Metadata: DUP 32.00MiB 00:12:05.901 System: DUP 8.00MiB 00:12:05.901 SSD detected: yes 00:12:05.901 Zoned device: no 00:12:05.901 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:12:05.901 Runtime features: free-space-tree 00:12:05.901 Checksum: crc32c 00:12:05.901 Number of devices: 1 00:12:05.901 Devices: 00:12:05.901 ID SIZE PATH 00:12:05.901 1 510.00MiB /dev/nvme0n1p1 00:12:05.901 00:12:05.901 10:00:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@945 -- # return 0 00:12:05.901 10:00:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:06.472 10:00:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:06.472 10:00:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:12:06.472 10:00:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:06.472 10:00:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:12:06.472 10:00:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:12:06.472 10:00:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:06.733 10:00:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 381330 00:12:06.733 10:00:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:06.733 10:00:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:06.733 10:00:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:06.733 10:00:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:06.733 00:12:06.734 real 0m1.232s 00:12:06.734 user 0m0.011s 00:12:06.734 sys 0m0.131s 00:12:06.734 10:00:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:06.734 10:00:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs 
-- common/autotest_common.sh@10 -- # set +x 00:12:06.734 ************************************ 00:12:06.734 END TEST filesystem_in_capsule_btrfs 00:12:06.734 ************************************ 00:12:06.734 10:00:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:12:06.734 10:00:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:12:06.734 10:00:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:06.734 10:00:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:06.734 ************************************ 00:12:06.734 START TEST filesystem_in_capsule_xfs 00:12:06.734 ************************************ 00:12:06.734 10:00:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create xfs nvme0n1 00:12:06.734 10:00:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:12:06.734 10:00:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:06.734 10:00:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:12:06.734 10:00:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # local fstype=xfs 00:12:06.734 10:00:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:12:06.734 10:00:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@928 -- # local i=0 00:12:06.734 10:00:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # local force 00:12:06.734 10:00:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # '[' xfs = ext4 ']' 00:12:06.734 10:00:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@934 -- # force=-f 00:12:06.734 10:00:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@937 -- # mkfs.xfs -f /dev/nvme0n1p1 00:12:06.734 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:12:06.734 = sectsz=512 attr=2, projid32bit=1 00:12:06.734 = crc=1 finobt=1, sparse=1, rmapbt=0 00:12:06.734 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:12:06.734 data = bsize=4096 blocks=130560, imaxpct=25 00:12:06.734 = sunit=0 swidth=0 blks 00:12:06.734 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:12:06.734 log =internal log bsize=4096 blocks=16384, version=2 00:12:06.734 = sectsz=512 sunit=0 blks, lazy-count=1 00:12:06.734 realtime =none extsz=4096 blocks=0, rtextents=0 00:12:07.672 Discarding blocks...Done. 
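[The in_capsule half re-runs the same ext4/btrfs/xfs sequence; the functional difference is confined to transport creation near the top of this test, where the harness passed -c 4096. A sketch with the flag readings spelled out (the -c and -u meanings follow SPDK's scripts/rpc.py nvmf_create_transport options; -o comes from the harness's default TCP transport opts and its meaning is not shown in this log):

  rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096
  #  -u 8192 : I/O unit size in bytes
  #  -c 4096 : in-capsule data size; writes up to 4 KiB can travel inside
  #            the command capsule instead of as separate data transfers
  #  -o      : TCP-specific harness default (assumed; see rpc.py --help)
]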
00:12:07.672 10:00:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@945 -- # return 0 00:12:07.672 10:00:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:09.579 10:00:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:09.839 10:00:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:12:09.839 10:00:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:09.839 10:00:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:12:09.839 10:00:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:12:09.839 10:00:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:09.839 10:00:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 381330 00:12:09.839 10:00:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:09.839 10:00:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:09.839 10:00:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:09.839 10:00:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:09.839 00:12:09.839 real 0m3.098s 00:12:09.839 user 0m0.013s 00:12:09.839 sys 0m0.066s 00:12:09.839 10:00:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:09.839 10:00:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:12:09.839 ************************************ 00:12:09.839 END TEST filesystem_in_capsule_xfs 00:12:09.839 ************************************ 00:12:09.839 10:00:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:12:09.839 10:00:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:12:09.839 10:00:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:09.839 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:09.839 10:00:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:09.839 10:00:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
common/autotest_common.sh@1219 -- # local i=0 00:12:09.839 10:00:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:09.839 10:00:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:09.839 10:00:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:09.839 10:00:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:09.839 10:00:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:12:09.839 10:00:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:09.839 10:00:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:09.839 10:00:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:09.839 10:00:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:09.839 10:00:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:12:09.839 10:00:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 381330 00:12:09.839 10:00:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@950 -- # '[' -z 381330 ']' 00:12:09.839 10:00:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # kill -0 381330 00:12:09.839 10:00:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@955 -- # uname 00:12:10.098 10:00:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:10.098 10:00:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 381330 00:12:10.098 10:00:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:10.098 10:00:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:10.099 10:00:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@968 -- # echo 'killing process with pid 381330' 00:12:10.099 killing process with pid 381330 00:12:10.099 10:00:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@969 -- # kill 381330 00:12:10.099 10:00:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@974 -- # wait 381330 00:12:10.666 10:00:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:12:10.666 00:12:10.666 real 0m12.260s 00:12:10.666 user 0m46.838s 00:12:10.666 sys 0m1.897s 00:12:10.666 10:00:55 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:10.666 10:00:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:10.666 ************************************ 00:12:10.666 END TEST nvmf_filesystem_in_capsule 00:12:10.666 ************************************ 00:12:10.666 10:00:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:12:10.666 10:00:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:10.666 10:00:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # sync 00:12:10.666 10:00:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:10.666 10:00:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@120 -- # set +e 00:12:10.666 10:00:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:10.666 10:00:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:10.666 rmmod nvme_tcp 00:12:10.666 rmmod nvme_fabrics 00:12:10.666 rmmod nvme_keyring 00:12:10.666 10:00:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:10.666 10:00:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set -e 00:12:10.666 10:00:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # return 0 00:12:10.666 10:00:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:12:10.666 10:00:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:10.666 10:00:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:10.666 10:00:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:10.666 10:00:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:10.666 10:00:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:10.666 10:00:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:10.666 10:00:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:10.667 10:00:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:12.573 10:00:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:12.573 00:12:12.573 real 0m31.104s 00:12:12.573 user 1m40.477s 00:12:12.573 sys 0m6.079s 00:12:12.573 10:00:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:12.573 10:00:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:12:12.573 ************************************ 00:12:12.573 END TEST nvmf_filesystem 00:12:12.573 ************************************ 00:12:12.573 10:00:57 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:12:12.573 10:00:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:12.573 10:00:57 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:12.573 10:00:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:12.832 ************************************ 00:12:12.832 START TEST nvmf_target_discovery 00:12:12.832 ************************************ 00:12:12.832 10:00:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:12:12.832 * Looking for test storage... 00:12:12.832 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:12.832 10:00:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:12.832 10:00:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:12:12.832 10:00:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:12.832 10:00:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:12.832 10:00:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:12.832 10:00:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:12.832 10:00:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:12.832 10:00:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:12.832 10:00:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:12.832 10:00:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:12.832 10:00:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:12.832 10:00:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:12.832 10:00:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:12:12.832 10:00:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:12:12.832 10:00:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:12.832 10:00:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:12.832 10:00:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:12.832 10:00:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:12.832 10:00:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:12.832 10:00:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:12.832 10:00:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:12.832 10:00:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@517 
-- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:12.832 10:00:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:12.833 10:00:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:12.833 10:00:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:12.833 10:00:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:12:12.833 10:00:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:12.833 10:00:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@47 -- # : 0 00:12:12.833 10:00:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:12.833 10:00:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:12.833 10:00:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:12.833 10:00:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:12.833 10:00:57 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:12.833 10:00:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:12.833 10:00:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:12.833 10:00:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:12.833 10:00:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:12:12.833 10:00:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:12:12.833 10:00:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:12:12.833 10:00:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:12:12.833 10:00:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:12:12.833 10:00:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:12.833 10:00:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:12.833 10:00:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:12.833 10:00:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:12.833 10:00:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:12.833 10:00:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:12.833 10:00:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:12.833 10:00:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:12.833 10:00:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:12.833 10:00:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:12.833 10:00:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:12:12.833 10:00:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:15.437 10:01:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:15.437 10:01:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:12:15.437 10:01:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:15.437 10:01:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:15.437 10:01:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:15.437 10:01:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:15.437 10:01:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:15.437 10:01:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:12:15.437 10:01:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@295 -- 
# local -ga net_devs 00:12:15.437 10:01:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@296 -- # e810=() 00:12:15.437 10:01:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:12:15.437 10:01:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # x722=() 00:12:15.437 10:01:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:12:15.437 10:01:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # mlx=() 00:12:15.437 10:01:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:12:15.437 10:01:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:15.437 10:01:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:15.437 10:01:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:15.437 10:01:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:15.437 10:01:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:15.437 10:01:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:15.437 10:01:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:15.437 10:01:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:15.437 10:01:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:15.437 10:01:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:15.437 10:01:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:15.437 10:01:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:15.437 10:01:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:15.437 10:01:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:15.437 10:01:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:15.437 10:01:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:15.437 10:01:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:15.437 10:01:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:15.437 10:01:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:12:15.437 Found 0000:84:00.0 (0x8086 - 0x159b) 00:12:15.437 10:01:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:15.437 10:01:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:15.437 10:01:00 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:15.438 10:01:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:15.438 10:01:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:15.438 10:01:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:15.438 10:01:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:12:15.438 Found 0000:84:00.1 (0x8086 - 0x159b) 00:12:15.438 10:01:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:15.438 10:01:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:15.438 10:01:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:15.438 10:01:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:15.438 10:01:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:15.438 10:01:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:15.438 10:01:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:15.438 10:01:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:15.438 10:01:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:15.438 10:01:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:15.438 10:01:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:15.438 10:01:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:15.438 10:01:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:15.438 10:01:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:15.438 10:01:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:15.438 10:01:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:12:15.438 Found net devices under 0000:84:00.0: cvl_0_0 00:12:15.438 10:01:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:15.438 10:01:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:15.438 10:01:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:15.438 10:01:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:15.438 10:01:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:15.438 10:01:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:15.438 10:01:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:15.438 10:01:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:15.438 10:01:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:12:15.438 Found net devices under 0000:84:00.1: cvl_0_1 00:12:15.438 10:01:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:15.438 10:01:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:15.438 10:01:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:12:15.438 10:01:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:15.438 10:01:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:15.438 10:01:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:15.438 10:01:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:15.438 10:01:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:15.438 10:01:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:15.438 10:01:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:15.438 10:01:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:15.438 10:01:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:15.438 10:01:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:15.438 10:01:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:15.438 10:01:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:15.438 10:01:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:15.438 10:01:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:15.438 10:01:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:15.438 10:01:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:15.438 10:01:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:15.438 10:01:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:15.438 10:01:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:15.438 10:01:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:15.438 10:01:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:15.438 10:01:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:12:15.438 10:01:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:12:15.438 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:12:15.438 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.144 ms
00:12:15.438
00:12:15.438 --- 10.0.0.2 ping statistics ---
00:12:15.438 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:12:15.438 rtt min/avg/max/mdev = 0.144/0.144/0.144/0.000 ms
00:12:15.438 10:01:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:12:15.438 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:12:15.438 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.126 ms
00:12:15.438
00:12:15.438 --- 10.0.0.1 ping statistics ---
00:12:15.438 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:12:15.438 rtt min/avg/max/mdev = 0.126/0.126/0.126/0.000 ms
00:12:15.438 10:01:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:12:15.438 10:01:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # return 0
00:12:15.438 10:01:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:12:15.438 10:01:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:12:15.438 10:01:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:12:15.438 10:01:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:12:15.438 10:01:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:12:15.438 10:01:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:12:15.438 10:01:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:12:15.438 10:01:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF
00:12:15.438 10:01:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:12:15.438 10:01:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@724 -- # xtrace_disable
00:12:15.438 10:01:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:12:15.438 10:01:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@481 -- # nvmfpid=384945
00:12:15.438 10:01:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:12:15.438 10:01:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # waitforlisten 384945
00:12:15.438 10:01:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@831 -- # '[' -z 384945 ']'
00:12:15.438 10:01:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:12:15.438 10:01:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@836 -- # local max_retries=100
00:12:15.438 10:01:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@838 -- # echo
'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:15.438 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:15.438 10:01:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:15.438 10:01:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:15.438 [2024-07-25 10:01:00.380273] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:12:15.438 [2024-07-25 10:01:00.380368] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:15.438 EAL: No free 2048 kB hugepages reported on node 1 00:12:15.438 [2024-07-25 10:01:00.456250] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:15.438 [2024-07-25 10:01:00.579322] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:15.438 [2024-07-25 10:01:00.579376] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:15.438 [2024-07-25 10:01:00.579401] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:15.438 [2024-07-25 10:01:00.579422] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:15.438 [2024-07-25 10:01:00.579460] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:15.438 [2024-07-25 10:01:00.579532] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:15.438 [2024-07-25 10:01:00.579589] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:15.438 [2024-07-25 10:01:00.579643] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:15.438 [2024-07-25 10:01:00.579652] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:15.696 10:01:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:15.696 10:01:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # return 0 00:12:15.696 10:01:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:15.696 10:01:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:15.696 10:01:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:15.696 10:01:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:15.696 10:01:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:15.696 10:01:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.696 10:01:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:15.696 [2024-07-25 10:01:00.748213] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:15.696 10:01:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.696 10:01:00 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:12:15.696 10:01:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:15.696 10:01:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:12:15.696 10:01:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.696 10:01:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:15.696 Null1 00:12:15.696 10:01:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.696 10:01:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:15.696 10:01:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.696 10:01:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:15.696 10:01:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.696 10:01:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:12:15.696 10:01:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.696 10:01:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:15.696 10:01:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.696 10:01:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:15.696 10:01:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.696 10:01:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:15.696 [2024-07-25 10:01:00.788567] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:15.696 10:01:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.696 10:01:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:15.696 10:01:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:12:15.696 10:01:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.696 10:01:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:15.696 Null2 00:12:15.696 10:01:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.696 10:01:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:12:15.696 10:01:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.696 10:01:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- 
# set +x 00:12:15.696 10:01:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.696 10:01:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:12:15.696 10:01:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.696 10:01:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:15.696 10:01:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.696 10:01:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:12:15.696 10:01:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.696 10:01:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:15.696 10:01:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.696 10:01:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:15.696 10:01:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:12:15.696 10:01:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.696 10:01:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:15.696 Null3 00:12:15.696 10:01:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.696 10:01:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:12:15.696 10:01:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.696 10:01:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:15.696 10:01:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.696 10:01:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:12:15.696 10:01:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.696 10:01:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:15.696 10:01:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.696 10:01:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:12:15.696 10:01:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.696 10:01:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:15.696 10:01:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.696 10:01:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:15.696 10:01:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:12:15.696 10:01:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.696 10:01:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:15.956 Null4 00:12:15.956 10:01:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.956 10:01:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:12:15.956 10:01:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.956 10:01:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:15.956 10:01:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.956 10:01:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:12:15.956 10:01:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.956 10:01:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:15.956 10:01:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.956 10:01:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:12:15.956 10:01:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.956 10:01:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:15.956 10:01:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.956 10:01:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:15.956 10:01:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.957 10:01:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:15.957 10:01:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.957 10:01:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:12:15.957 10:01:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.957 10:01:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:15.957 10:01:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.957 10:01:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 4420 00:12:15.957 00:12:15.957 
Discovery Log Number of Records 6, Generation counter 6
00:12:15.957 =====Discovery Log Entry 0======
00:12:15.957 trtype: tcp
00:12:15.957 adrfam: ipv4
00:12:15.957 subtype: current discovery subsystem
00:12:15.957 treq: not required
00:12:15.957 portid: 0
00:12:15.957 trsvcid: 4420
00:12:15.957 subnqn: nqn.2014-08.org.nvmexpress.discovery
00:12:15.957 traddr: 10.0.0.2
00:12:15.957 eflags: explicit discovery connections, duplicate discovery information
00:12:15.957 sectype: none
00:12:15.957 =====Discovery Log Entry 1======
00:12:15.957 trtype: tcp
00:12:15.957 adrfam: ipv4
00:12:15.957 subtype: nvme subsystem
00:12:15.957 treq: not required
00:12:15.957 portid: 0
00:12:15.957 trsvcid: 4420
00:12:15.957 subnqn: nqn.2016-06.io.spdk:cnode1
00:12:15.957 traddr: 10.0.0.2
00:12:15.957 eflags: none
00:12:15.957 sectype: none
00:12:15.957 =====Discovery Log Entry 2======
00:12:15.957 trtype: tcp
00:12:15.957 adrfam: ipv4
00:12:15.957 subtype: nvme subsystem
00:12:15.957 treq: not required
00:12:15.957 portid: 0
00:12:15.957 trsvcid: 4420
00:12:15.957 subnqn: nqn.2016-06.io.spdk:cnode2
00:12:15.957 traddr: 10.0.0.2
00:12:15.957 eflags: none
00:12:15.957 sectype: none
00:12:15.957 =====Discovery Log Entry 3======
00:12:15.957 trtype: tcp
00:12:15.957 adrfam: ipv4
00:12:15.957 subtype: nvme subsystem
00:12:15.957 treq: not required
00:12:15.957 portid: 0
00:12:15.957 trsvcid: 4420
00:12:15.957 subnqn: nqn.2016-06.io.spdk:cnode3
00:12:15.957 traddr: 10.0.0.2
00:12:15.957 eflags: none
00:12:15.957 sectype: none
00:12:15.957 =====Discovery Log Entry 4======
00:12:15.957 trtype: tcp
00:12:15.957 adrfam: ipv4
00:12:15.957 subtype: nvme subsystem
00:12:15.957 treq: not required
00:12:15.957 portid: 0
00:12:15.957 trsvcid: 4420
00:12:15.957 subnqn: nqn.2016-06.io.spdk:cnode4
00:12:15.957 traddr: 10.0.0.2
00:12:15.957 eflags: none
00:12:15.957 sectype: none
00:12:15.957 =====Discovery Log Entry 5======
00:12:15.957 trtype: tcp
00:12:15.957 adrfam: ipv4
00:12:15.957 subtype: discovery subsystem referral
00:12:15.957 treq: not required
00:12:15.957 portid: 0
00:12:15.957 trsvcid: 4430
00:12:15.957 subnqn: nqn.2014-08.org.nvmexpress.discovery
00:12:15.957 traddr: 10.0.0.2
00:12:15.957 eflags: none
00:12:15.957 sectype: none
00:12:15.957 10:01:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC'
00:12:15.957 Perform nvmf subsystem discovery via RPC
00:12:15.957 10:01:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems
00:12:15.957 10:01:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:15.957 10:01:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x
00:12:15.957 [
00:12:15.957 {
00:12:15.957 "nqn": "nqn.2014-08.org.nvmexpress.discovery",
00:12:15.957 "subtype": "Discovery",
00:12:15.957 "listen_addresses": [
00:12:15.957 {
00:12:15.957 "trtype": "TCP",
00:12:15.957 "adrfam": "IPv4",
00:12:15.957 "traddr": "10.0.0.2",
00:12:15.957 "trsvcid": "4420"
00:12:15.957 }
00:12:15.957 ],
00:12:15.957 "allow_any_host": true,
00:12:15.957 "hosts": []
00:12:15.957 },
00:12:15.957 {
00:12:15.957 "nqn": "nqn.2016-06.io.spdk:cnode1",
00:12:15.957 "subtype": "NVMe",
00:12:15.957 "listen_addresses": [
00:12:15.957 {
00:12:15.957 "trtype": "TCP",
00:12:15.957 "adrfam": "IPv4",
00:12:15.957 "traddr": "10.0.0.2",
00:12:15.957 "trsvcid": "4420"
00:12:15.957 }
00:12:15.957 ],
00:12:15.957 "allow_any_host": true,
00:12:15.957 "hosts": [],
00:12:15.957 "serial_number": "SPDK00000000000001",
00:12:15.957 "model_number": "SPDK bdev Controller",
00:12:15.957 "max_namespaces": 32,
00:12:15.957 "min_cntlid": 1,
00:12:15.957 "max_cntlid": 65519,
00:12:15.957 "namespaces": [
00:12:15.957 {
00:12:15.957 "nsid": 1,
00:12:15.957 "bdev_name": "Null1",
00:12:15.957 "name": "Null1",
00:12:15.957 "nguid": "8209636286BB48FB80E2A8B88F5484AA",
00:12:15.957 "uuid": "82096362-86bb-48fb-80e2-a8b88f5484aa"
00:12:15.957 }
00:12:15.957 ]
00:12:15.957 },
00:12:15.957 {
00:12:15.957 "nqn": "nqn.2016-06.io.spdk:cnode2",
00:12:15.957 "subtype": "NVMe",
00:12:15.957 "listen_addresses": [
00:12:15.957 {
00:12:15.957 "trtype": "TCP",
00:12:15.957 "adrfam": "IPv4",
00:12:15.957 "traddr": "10.0.0.2",
00:12:15.957 "trsvcid": "4420"
00:12:15.957 }
00:12:15.957 ],
00:12:15.957 "allow_any_host": true,
00:12:15.957 "hosts": [],
00:12:15.957 "serial_number": "SPDK00000000000002",
00:12:15.957 "model_number": "SPDK bdev Controller",
00:12:15.957 "max_namespaces": 32,
00:12:15.957 "min_cntlid": 1,
00:12:15.957 "max_cntlid": 65519,
00:12:15.957 "namespaces": [
00:12:15.957 {
00:12:15.957 "nsid": 1,
00:12:15.957 "bdev_name": "Null2",
00:12:15.957 "name": "Null2",
00:12:15.957 "nguid": "0C8DC4D83095477BBB413A4A9A37D805",
00:12:15.957 "uuid": "0c8dc4d8-3095-477b-bb41-3a4a9a37d805"
00:12:15.957 }
00:12:15.957 ]
00:12:15.957 },
00:12:15.957 {
00:12:15.957 "nqn": "nqn.2016-06.io.spdk:cnode3",
00:12:15.957 "subtype": "NVMe",
00:12:15.957 "listen_addresses": [
00:12:15.957 {
00:12:15.957 "trtype": "TCP",
00:12:15.957 "adrfam": "IPv4",
00:12:15.957 "traddr": "10.0.0.2",
00:12:15.957 "trsvcid": "4420"
00:12:15.957 }
00:12:15.957 ],
00:12:15.957 "allow_any_host": true,
00:12:15.957 "hosts": [],
00:12:15.957 "serial_number": "SPDK00000000000003",
00:12:15.957 "model_number": "SPDK bdev Controller",
00:12:15.957 "max_namespaces": 32,
00:12:15.957 "min_cntlid": 1,
00:12:15.957 "max_cntlid": 65519,
00:12:15.957 "namespaces": [
00:12:15.957 {
00:12:15.957 "nsid": 1,
00:12:15.957 "bdev_name": "Null3",
00:12:15.957 "name": "Null3",
00:12:15.957 "nguid": "C7E75308496C4C77A226812AC25D6172",
00:12:15.957 "uuid": "c7e75308-496c-4c77-a226-812ac25d6172"
00:12:15.957 }
00:12:15.957 ]
00:12:15.957 },
00:12:15.957 {
00:12:15.957 "nqn": "nqn.2016-06.io.spdk:cnode4",
00:12:15.957 "subtype": "NVMe",
00:12:15.957 "listen_addresses": [
00:12:15.957 {
00:12:15.957 "trtype": "TCP",
00:12:15.957 "adrfam": "IPv4",
00:12:15.957 "traddr": "10.0.0.2",
00:12:15.957 "trsvcid": "4420"
00:12:15.957 }
00:12:15.957 ],
00:12:15.957 "allow_any_host": true,
00:12:15.957 "hosts": [],
00:12:15.957 "serial_number": "SPDK00000000000004",
00:12:15.957 "model_number": "SPDK bdev Controller",
00:12:15.957 "max_namespaces": 32,
00:12:15.957 "min_cntlid": 1,
00:12:15.957 "max_cntlid": 65519,
00:12:15.957 "namespaces": [
00:12:15.957 {
00:12:15.957 "nsid": 1,
00:12:15.957 "bdev_name": "Null4",
00:12:15.957 "name": "Null4",
00:12:15.957 "nguid": "A1BE6000C61B4F8284288BE4AFBCB5C2",
00:12:15.957 "uuid": "a1be6000-c61b-4f82-8428-8be4afbcb5c2"
00:12:15.957 }
00:12:15.957 ]
00:12:15.957 }
00:12:15.957 ]
00:12:15.957 10:01:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:15.957 10:01:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4
00:12:15.957 10:01:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4)
00:12:15.957 10:01:01
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:15.957 10:01:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.957 10:01:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:15.957 10:01:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.957 10:01:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:12:15.957 10:01:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.957 10:01:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:15.957 10:01:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.957 10:01:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:15.957 10:01:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:12:15.957 10:01:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.957 10:01:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:15.958 10:01:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.958 10:01:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:12:15.958 10:01:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.958 10:01:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:16.216 10:01:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.216 10:01:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:16.216 10:01:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:12:16.216 10:01:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.216 10:01:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:16.216 10:01:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.216 10:01:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:12:16.216 10:01:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.216 10:01:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:16.216 10:01:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.216 10:01:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:16.216 10:01:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 
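[Editor's note: the discovery scenario above is driven entirely through SPDK JSON-RPCs; rpc_cmd in the xtrace output is the test suite's wrapper around SPDK's RPC client. Below is a sketch of the same setup and teardown replayed with scripts/rpc.py, using only the RPC names and arguments that appear in the log; the teardown loop it ends with continues in the entries that follow.]

#!/usr/bin/env bash
# Sketch of the nvmf_target_discovery scenario, per the log above.
rpc=scripts/rpc.py        # assumption: SPDK's rpc.py stands in for rpc_cmd

$rpc nvmf_create_transport -t tcp -o -u 8192

for i in 1 2 3 4; do
  $rpc bdev_null_create "Null$i" 102400 512    # NULL_BDEV_SIZE / NULL_BLOCK_SIZE from the log
  $rpc nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK0000000000000$i"
  $rpc nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Null$i"
  $rpc nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t tcp -a 10.0.0.2 -s 4420
done

$rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430

# Expect 6 records: the discovery subsystem, cnode1-4, and the 4430 referral
nvme discover -t tcp -a 10.0.0.2 -s 4420

# Teardown mirrors setup
for i in 1 2 3 4; do
  $rpc nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode$i"
  $rpc bdev_null_delete "Null$i"
done
$rpc nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430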
00:12:16.216 10:01:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.216 10:01:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:16.216 10:01:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.216 10:01:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:12:16.216 10:01:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.216 10:01:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:16.216 10:01:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.216 10:01:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:12:16.216 10:01:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.216 10:01:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:16.216 10:01:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.216 10:01:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:12:16.216 10:01:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:16.216 10:01:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:12:16.216 10:01:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:16.216 10:01:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:16.216 10:01:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:12:16.216 10:01:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:12:16.216 10:01:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:12:16.216 10:01:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:12:16.216 10:01:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:16.216 10:01:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # sync 00:12:16.216 10:01:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:16.216 10:01:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@120 -- # set +e 00:12:16.216 10:01:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:16.216 10:01:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:16.216 rmmod nvme_tcp 00:12:16.216 rmmod nvme_fabrics 00:12:16.216 rmmod nvme_keyring 00:12:16.216 10:01:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:16.216 10:01:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set -e 00:12:16.216 10:01:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- 
# return 0 00:12:16.216 10:01:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@489 -- # '[' -n 384945 ']' 00:12:16.216 10:01:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@490 -- # killprocess 384945 00:12:16.216 10:01:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@950 -- # '[' -z 384945 ']' 00:12:16.216 10:01:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # kill -0 384945 00:12:16.216 10:01:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@955 -- # uname 00:12:16.216 10:01:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:16.216 10:01:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 384945 00:12:16.216 10:01:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:16.216 10:01:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:16.216 10:01:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 384945' 00:12:16.216 killing process with pid 384945 00:12:16.216 10:01:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@969 -- # kill 384945 00:12:16.216 10:01:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@974 -- # wait 384945 00:12:16.473 10:01:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:16.473 10:01:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:16.473 10:01:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:16.473 10:01:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:16.473 10:01:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:16.473 10:01:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:16.473 10:01:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:16.473 10:01:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:19.013 10:01:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:19.013 00:12:19.013 real 0m5.834s 00:12:19.013 user 0m4.654s 00:12:19.013 sys 0m2.137s 00:12:19.013 10:01:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:19.013 10:01:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:19.013 ************************************ 00:12:19.013 END TEST nvmf_target_discovery 00:12:19.013 ************************************ 00:12:19.013 10:01:03 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:12:19.013 10:01:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:19.013 10:01:03 nvmf_tcp.nvmf_target_extra 
-- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:19.013 10:01:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:19.013 ************************************ 00:12:19.013 START TEST nvmf_referrals 00:12:19.013 ************************************ 00:12:19.013 10:01:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:12:19.013 * Looking for test storage... 00:12:19.013 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:19.013 10:01:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:19.013 10:01:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:12:19.013 10:01:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:19.013 10:01:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:19.013 10:01:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:19.013 10:01:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:19.013 10:01:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:19.013 10:01:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:19.013 10:01:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:19.013 10:01:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:19.013 10:01:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:19.013 10:01:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:19.013 10:01:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:12:19.013 10:01:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:12:19.013 10:01:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:19.013 10:01:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:19.013 10:01:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:19.013 10:01:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:19.013 10:01:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:19.013 10:01:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:19.013 10:01:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:19.013 10:01:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:19.013 10:01:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[same three prepends repeated]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:19.013 10:01:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:[same prepend set]:/var/lib/snapd/snap/bin 00:12:19.013 10:01:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:[same prepend set]:/var/lib/snapd/snap/bin 00:12:19.013 10:01:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:12:19.013 10:01:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:[same prepend set]:/var/lib/snapd/snap/bin 00:12:19.013 10:01:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@47 -- # : 0 00:12:19.013 10:01:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:19.013 10:01:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:19.013 10:01:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:19.013 10:01:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:19.013 10:01:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:19.013 10:01:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:19.013 10:01:03
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:19.013 10:01:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:19.013 10:01:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:12:19.013 10:01:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:12:19.013 10:01:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:12:19.013 10:01:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:12:19.013 10:01:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:12:19.013 10:01:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:12:19.013 10:01:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:12:19.013 10:01:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:19.013 10:01:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:19.013 10:01:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:19.013 10:01:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:19.013 10:01:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:19.013 10:01:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:19.013 10:01:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:19.013 10:01:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:19.013 10:01:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:19.013 10:01:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:19.013 10:01:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@285 -- # xtrace_disable 00:12:19.013 10:01:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:21.544 10:01:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:21.544 10:01:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # pci_devs=() 00:12:21.544 10:01:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:21.544 10:01:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:21.544 10:01:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:21.544 10:01:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:21.544 10:01:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:21.544 10:01:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@295 -- # net_devs=() 00:12:21.544 10:01:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:21.544 10:01:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@296 -- # e810=() 
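referrals.sh exercises three discovery referrals (127.0.0.2 through 127.0.0.4 on port 4430, per the constants set above). A sketch of how those entries are seeded and counted once the target is up, assuming the stock rpc.py client:

  for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
      scripts/rpc.py nvmf_discovery_add_referral -t tcp -a "$ip" -s 4430
  done
  scripts/rpc.py nvmf_discovery_get_referrals | jq length   # the test asserts this is 3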
00:12:21.544 10:01:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@296 -- # local -ga e810 00:12:21.544 10:01:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # x722=() 00:12:21.544 10:01:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # local -ga x722 00:12:21.544 10:01:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # mlx=() 00:12:21.544 10:01:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # local -ga mlx 00:12:21.544 10:01:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:21.544 10:01:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:21.544 10:01:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:21.544 10:01:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:21.544 10:01:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:21.544 10:01:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:21.544 10:01:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:21.544 10:01:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:21.544 10:01:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:21.544 10:01:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:21.544 10:01:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:21.544 10:01:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:21.544 10:01:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:21.544 10:01:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:21.544 10:01:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:21.544 10:01:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:21.544 10:01:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:21.544 10:01:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:21.544 10:01:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:12:21.544 Found 0000:84:00.0 (0x8086 - 0x159b) 00:12:21.544 10:01:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:21.544 10:01:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:21.544 10:01:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:21.544 10:01:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:21.544 10:01:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:21.544 
10:01:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:21.544 10:01:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:12:21.544 Found 0000:84:00.1 (0x8086 - 0x159b) 00:12:21.544 10:01:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:21.544 10:01:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:21.544 10:01:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:21.544 10:01:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:21.544 10:01:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:21.544 10:01:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:21.544 10:01:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:21.544 10:01:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:21.544 10:01:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:21.544 10:01:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:21.544 10:01:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:21.544 10:01:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:21.544 10:01:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:21.544 10:01:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:21.544 10:01:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:21.544 10:01:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:12:21.544 Found net devices under 0000:84:00.0: cvl_0_0 00:12:21.544 10:01:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:21.544 10:01:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:21.544 10:01:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:21.544 10:01:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:21.544 10:01:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:21.544 10:01:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:21.544 10:01:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:21.544 10:01:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:21.544 10:01:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:12:21.544 Found net devices under 0000:84:00.1: cvl_0_1 00:12:21.544 10:01:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:21.544 10:01:06 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:21.544 10:01:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # is_hw=yes 00:12:21.544 10:01:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:21.544 10:01:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:21.544 10:01:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:21.544 10:01:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:21.544 10:01:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:21.544 10:01:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:21.544 10:01:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:21.545 10:01:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:21.545 10:01:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:21.545 10:01:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:21.545 10:01:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:21.545 10:01:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:21.545 10:01:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:21.545 10:01:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:21.545 10:01:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:21.545 10:01:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:21.545 10:01:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:21.545 10:01:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:21.545 10:01:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:21.545 10:01:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:21.545 10:01:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:21.545 10:01:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:21.545 10:01:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:21.545 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:21.545 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.263 ms 00:12:21.545 00:12:21.545 --- 10.0.0.2 ping statistics --- 00:12:21.545 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:21.545 rtt min/avg/max/mdev = 0.263/0.263/0.263/0.000 ms 00:12:21.545 10:01:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:21.545 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:21.545 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.183 ms 00:12:21.545 00:12:21.545 --- 10.0.0.1 ping statistics --- 00:12:21.545 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:21.545 rtt min/avg/max/mdev = 0.183/0.183/0.183/0.000 ms 00:12:21.545 10:01:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:21.545 10:01:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # return 0 00:12:21.545 10:01:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:21.545 10:01:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:21.545 10:01:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:21.545 10:01:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:21.545 10:01:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:21.545 10:01:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:21.545 10:01:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:21.545 10:01:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:12:21.545 10:01:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:21.545 10:01:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:21.545 10:01:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:21.545 10:01:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@481 -- # nvmfpid=387053 00:12:21.545 10:01:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # waitforlisten 387053 00:12:21.545 10:01:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:21.545 10:01:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@831 -- # '[' -z 387053 ']' 00:12:21.545 10:01:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:21.545 10:01:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:21.545 10:01:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:21.545 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
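Condensed, the physical-loopback topology the harness just built: one port of the dual-port NIC is moved into a network namespace so the target side (10.0.0.2 on cvl_0_0) and the initiator side (10.0.0.1 on cvl_0_1) can talk over real hardware on a single host. Commands as traced above:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2    # host-side sanity check, matching the ping output above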
00:12:21.545 10:01:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:21.545 10:01:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:21.545 [2024-07-25 10:01:06.368110] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:12:21.545 [2024-07-25 10:01:06.368218] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:21.545 EAL: No free 2048 kB hugepages reported on node 1 00:12:21.545 [2024-07-25 10:01:06.452621] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:21.545 [2024-07-25 10:01:06.576631] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:21.545 [2024-07-25 10:01:06.576698] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:21.545 [2024-07-25 10:01:06.576723] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:21.545 [2024-07-25 10:01:06.576744] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:21.545 [2024-07-25 10:01:06.576761] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:21.545 [2024-07-25 10:01:06.576855] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:21.545 [2024-07-25 10:01:06.576926] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:21.545 [2024-07-25 10:01:06.576983] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:21.545 [2024-07-25 10:01:06.576991] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:21.803 10:01:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:21.803 10:01:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # return 0 00:12:21.803 10:01:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:21.803 10:01:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:21.803 10:01:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:21.803 10:01:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:21.803 10:01:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:21.803 10:01:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.803 10:01:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:21.803 [2024-07-25 10:01:06.745268] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:21.803 10:01:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.803 10:01:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:12:21.803 10:01:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.803 10:01:06 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:21.803 [2024-07-25 10:01:06.757553] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:12:21.803 10:01:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.803 10:01:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:12:21.803 10:01:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.803 10:01:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:21.803 10:01:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.803 10:01:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:12:21.803 10:01:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.803 10:01:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:21.803 10:01:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.803 10:01:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:12:21.803 10:01:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.803 10:01:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:21.803 10:01:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.803 10:01:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:21.803 10:01:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:12:21.803 10:01:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.803 10:01:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:21.803 10:01:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.803 10:01:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:12:21.803 10:01:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:12:21.803 10:01:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:21.803 10:01:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:21.803 10:01:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:21.803 10:01:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:21.804 10:01:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:21.804 10:01:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:21.804 10:01:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:21.804 10:01:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 
127.0.0.3 127.0.0.4 00:12:21.804 10:01:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:12:21.804 10:01:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:12:21.804 10:01:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:21.804 10:01:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:21.804 10:01:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:21.804 10:01:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:21.804 10:01:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:22.060 10:01:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:12:22.060 10:01:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:12:22.060 10:01:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:12:22.060 10:01:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.060 10:01:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:22.060 10:01:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.060 10:01:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:12:22.060 10:01:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.060 10:01:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:22.060 10:01:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.060 10:01:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:12:22.060 10:01:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.060 10:01:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:22.060 10:01:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.060 10:01:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:22.060 10:01:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:12:22.060 10:01:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.060 10:01:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:22.060 10:01:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.060 10:01:07 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:12:22.060 10:01:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:12:22.060 10:01:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:22.060 10:01:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:22.060 10:01:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:22.060 10:01:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:22.060 10:01:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:22.060 10:01:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:12:22.060 10:01:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:12:22.060 10:01:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:12:22.060 10:01:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.060 10:01:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:22.060 10:01:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.060 10:01:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:12:22.060 10:01:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.060 10:01:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:22.317 10:01:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.317 10:01:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:12:22.317 10:01:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:22.317 10:01:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:22.317 10:01:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:22.317 10:01:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.317 10:01:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:22.317 10:01:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:22.317 10:01:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.317 10:01:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:12:22.317 10:01:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:12:22.317 10:01:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 
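The get_referral_ips nvme branch traced repeatedly here resolves the referral list from the initiator side with nvme-cli; stripped of the harness plumbing, it is essentially this pipeline (hostnqn/hostid come from common.sh):

  nvme discover --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" \
      -t tcp -a 10.0.0.2 -s 8009 -o json \
    | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' \
    | sort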
00:12:22.317 10:01:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:22.317 10:01:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:22.317 10:01:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:22.317 10:01:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:22.317 10:01:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:22.317 10:01:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:12:22.317 10:01:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:12:22.317 10:01:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:12:22.317 10:01:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:12:22.317 10:01:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:12:22.317 10:01:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:22.317 10:01:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:12:22.317 10:01:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:12:22.317 10:01:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:12:22.317 10:01:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:12:22.317 10:01:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:12:22.317 10:01:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:22.317 10:01:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:12:22.575 10:01:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:12:22.575 10:01:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:12:22.575 10:01:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.575 10:01:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:22.575 10:01:07 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.575 10:01:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:12:22.575 10:01:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:22.575 10:01:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:22.575 10:01:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:22.575 10:01:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.575 10:01:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:22.575 10:01:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:22.575 10:01:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.575 10:01:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:12:22.575 10:01:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:12:22.575 10:01:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:12:22.575 10:01:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:22.575 10:01:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:22.575 10:01:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:22.575 10:01:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:22.575 10:01:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:22.833 10:01:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:12:22.833 10:01:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:12:22.833 10:01:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:12:22.833 10:01:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:12:22.833 10:01:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:12:22.833 10:01:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:22.833 10:01:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:12:22.833 10:01:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:12:22.833 10:01:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:12:22.833 10:01:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 
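get_discovery_entries filters the same JSON discovery log page by record subtype, which is how the test checks that the 127.0.0.2 referral surfaces with the right subnqn. A sketch matching the jq filter seen in the trace (the --arg form is an equivalent rewrite, not the script's literal text):

  get_discovery_entries() {
      local subtype=$1
      nvme discover "${NVME_HOST[@]}" -t tcp -a 10.0.0.2 -s 8009 -o json \
        | jq --arg st "$subtype" '.records[] | select(.subtype == $st)'
  }
  get_discovery_entries 'nvme subsystem' | jq -r .subnqn               # expects nqn.2016-06.io.spdk:cnode1
  get_discovery_entries 'discovery subsystem referral' | jq -r .subnqn # expects nqn.2014-08.org.nvmexpress.discovery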
00:12:22.833 10:01:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:12:22.833 10:01:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:22.833 10:01:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:12:22.833 10:01:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:12:22.833 10:01:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:12:22.833 10:01:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.833 10:01:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:22.833 10:01:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:22.833 10:01:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:22.833 10:01:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:12:22.833 10:01:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:22.833 10:01:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:22.833 10:01:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:23.090 10:01:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:12:23.090 10:01:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:12:23.090 10:01:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:23.090 10:01:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:23.090 10:01:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:23.090 10:01:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:23.090 10:01:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:23.090 10:01:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:12:23.090 10:01:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:12:23.090 10:01:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:12:23.090 10:01:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:12:23.090 10:01:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:23.090 10:01:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # sync 
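nvmftestfini then unwinds the run: unload the initiator-side kernel modules, kill the target, and flush the test addressing. A simplified sketch mirroring the trace below (pid 387053 in this run; the netns deletion is an assumption about what the harness's _remove_spdk_ns performs):

  modprobe -v -r nvme-tcp
  modprobe -v -r nvme-fabrics
  kill 387053 && wait 387053          # killprocess, as traced below
  ip netns delete cvl_0_0_ns_spdk     # assumed equivalent of _remove_spdk_ns
  ip -4 addr flush cvl_0_1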
00:12:23.090 10:01:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:23.090 10:01:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@120 -- # set +e 00:12:23.090 10:01:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:23.090 10:01:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:23.090 rmmod nvme_tcp 00:12:23.090 rmmod nvme_fabrics 00:12:23.090 rmmod nvme_keyring 00:12:23.090 10:01:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:23.090 10:01:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # set -e 00:12:23.090 10:01:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # return 0 00:12:23.090 10:01:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@489 -- # '[' -n 387053 ']' 00:12:23.090 10:01:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@490 -- # killprocess 387053 00:12:23.090 10:01:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@950 -- # '[' -z 387053 ']' 00:12:23.090 10:01:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # kill -0 387053 00:12:23.090 10:01:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@955 -- # uname 00:12:23.090 10:01:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:23.090 10:01:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 387053 00:12:23.090 10:01:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:23.090 10:01:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:23.090 10:01:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@968 -- # echo 'killing process with pid 387053' 00:12:23.090 killing process with pid 387053 00:12:23.090 10:01:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@969 -- # kill 387053 00:12:23.090 10:01:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@974 -- # wait 387053 00:12:23.657 10:01:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:23.657 10:01:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:23.657 10:01:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:23.657 10:01:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:23.657 10:01:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:23.657 10:01:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:23.657 10:01:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:23.657 10:01:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:25.557 10:01:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:25.557 00:12:25.557 real 0m6.929s 00:12:25.557 user 0m9.436s 00:12:25.557 sys 0m2.512s 00:12:25.557 10:01:10 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:25.557 10:01:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:25.557 ************************************ 00:12:25.557 END TEST nvmf_referrals 00:12:25.557 ************************************ 00:12:25.557 10:01:10 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:12:25.557 10:01:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:25.557 10:01:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:25.557 10:01:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:25.557 ************************************ 00:12:25.557 START TEST nvmf_connect_disconnect 00:12:25.557 ************************************ 00:12:25.557 10:01:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:12:25.815 * Looking for test storage... 00:12:25.815 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:25.815 10:01:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:25.815 10:01:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:12:25.815 10:01:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:25.815 10:01:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:25.815 10:01:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:25.815 10:01:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:25.815 10:01:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:25.815 10:01:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:25.815 10:01:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:25.815 10:01:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:25.815 10:01:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:25.815 10:01:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:25.815 10:01:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:12:25.815 10:01:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:12:25.815 10:01:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:25.815 10:01:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:25.815 10:01:10 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:25.815 10:01:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:25.815 10:01:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:25.815 10:01:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:25.815 10:01:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:25.815 10:01:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:25.815 10:01:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:25.815 10:01:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:25.815 10:01:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:25.815 10:01:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:12:25.815 10:01:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:25.815 10:01:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # : 0 00:12:25.815 10:01:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:25.815 10:01:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:25.815 10:01:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:25.815 10:01:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:25.815 10:01:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:25.815 10:01:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:25.815 10:01:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:25.815 10:01:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:25.815 10:01:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:25.815 10:01:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:25.815 10:01:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:12:25.815 10:01:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:25.815 10:01:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:25.815 10:01:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:25.815 10:01:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:25.815 10:01:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:25.815 10:01:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:25.815 10:01:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:25.815 10:01:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:25.815 10:01:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:25.815 10:01:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:25.815 10:01:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:12:25.815 10:01:10 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- 
# set +x 00:12:28.343 10:01:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:28.343 10:01:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:12:28.343 10:01:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:28.343 10:01:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:28.343 10:01:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:28.343 10:01:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:28.343 10:01:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:28.343 10:01:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:12:28.343 10:01:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:28.343 10:01:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # e810=() 00:12:28.343 10:01:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:12:28.343 10:01:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # x722=() 00:12:28.343 10:01:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:12:28.343 10:01:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:12:28.343 10:01:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:12:28.343 10:01:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:28.343 10:01:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:28.343 10:01:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:28.343 10:01:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:28.343 10:01:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:28.343 10:01:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:28.343 10:01:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:28.343 10:01:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:28.343 10:01:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:28.343 10:01:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:28.343 10:01:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:28.343 10:01:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:28.344 10:01:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:28.344 10:01:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:28.344 10:01:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:28.344 10:01:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:28.344 10:01:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:28.344 10:01:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:28.344 10:01:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:12:28.344 Found 0000:84:00.0 (0x8086 - 0x159b) 00:12:28.344 10:01:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:28.344 10:01:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:28.344 10:01:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:28.344 10:01:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:28.344 10:01:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:28.344 10:01:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:28.344 10:01:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:12:28.344 Found 0000:84:00.1 (0x8086 - 0x159b) 00:12:28.344 10:01:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:28.344 10:01:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:28.344 10:01:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:28.344 10:01:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:28.344 10:01:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:28.344 10:01:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:28.344 10:01:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:28.344 10:01:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:28.344 10:01:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:28.344 10:01:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:28.344 10:01:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:28.344 10:01:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:28.344 10:01:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:28.344 10:01:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:28.344 10:01:13 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:28.344 10:01:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:12:28.344 Found net devices under 0000:84:00.0: cvl_0_0 00:12:28.344 10:01:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:28.344 10:01:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:28.344 10:01:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:28.344 10:01:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:28.344 10:01:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:28.344 10:01:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:28.344 10:01:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:28.344 10:01:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:28.344 10:01:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:12:28.344 Found net devices under 0000:84:00.1: cvl_0_1 00:12:28.344 10:01:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:28.344 10:01:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:28.344 10:01:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:12:28.344 10:01:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:28.344 10:01:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:28.344 10:01:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:28.344 10:01:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:28.344 10:01:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:28.344 10:01:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:28.344 10:01:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:28.344 10:01:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:28.344 10:01:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:28.344 10:01:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:28.344 10:01:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:28.344 10:01:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:28.344 10:01:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:28.344 10:01:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:28.344 10:01:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:28.344 10:01:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:28.344 10:01:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:28.344 10:01:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:28.344 10:01:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:28.344 10:01:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:28.344 10:01:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:28.344 10:01:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:28.344 10:01:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:28.344 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:28.344 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.192 ms 00:12:28.344 00:12:28.344 --- 10.0.0.2 ping statistics --- 00:12:28.344 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:28.344 rtt min/avg/max/mdev = 0.192/0.192/0.192/0.000 ms 00:12:28.344 10:01:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:28.344 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:28.344 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.179 ms 00:12:28.344 00:12:28.344 --- 10.0.0.1 ping statistics --- 00:12:28.344 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:28.344 rtt min/avg/max/mdev = 0.179/0.179/0.179/0.000 ms 00:12:28.344 10:01:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:28.344 10:01:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # return 0 00:12:28.344 10:01:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:28.344 10:01:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:28.344 10:01:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:28.344 10:01:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:28.344 10:01:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:28.344 10:01:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:28.344 10:01:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:28.344 10:01:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:12:28.344 10:01:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:28.344 10:01:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:28.344 10:01:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:28.344 10:01:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # nvmfpid=389402 00:12:28.344 10:01:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:28.344 10:01:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # waitforlisten 389402 00:12:28.344 10:01:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@831 -- # '[' -z 389402 ']' 00:12:28.344 10:01:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:28.344 10:01:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:28.344 10:01:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:28.344 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:28.344 10:01:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:28.344 10:01:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:28.344 [2024-07-25 10:01:13.459158] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
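The EAL and reactor notices that follow are nvmf_tgt coming up inside the cvl_0_0_ns_spdk namespace; once its RPC socket answers, the script provisions the target with the short RPC sequence visible further down (transport, malloc bdev, subsystem, namespace, listener). A minimal sketch of that same sequence, assuming a running nvmf_tgt reachable via rpc.py on the default /var/tmp/spdk.sock (in the log each call goes through the rpc_cmd wrapper instead):

#!/usr/bin/env bash
# Sketch of the provisioning RPCs issued below by connect_disconnect.sh.
# Assumption: rpc.py talks to a running nvmf_tgt on /var/tmp/spdk.sock.
set -euo pipefail

rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0      # TCP transport
rpc.py bdev_malloc_create 64 512                         # 64 MiB / 512 B blocks -> Malloc0
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

With the listener up on 10.0.0.2:4420, the host side simply loops nvme connect / nvme disconnect against cnode1, which is what the five "disconnected 1 controller(s)" lines below record.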
00:12:28.345 [2024-07-25 10:01:13.459271] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:28.345 EAL: No free 2048 kB hugepages reported on node 1 00:12:28.601 [2024-07-25 10:01:13.545925] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:28.601 [2024-07-25 10:01:13.673272] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:28.601 [2024-07-25 10:01:13.673339] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:28.601 [2024-07-25 10:01:13.673365] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:28.601 [2024-07-25 10:01:13.673387] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:28.601 [2024-07-25 10:01:13.673405] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:28.601 [2024-07-25 10:01:13.673503] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:28.601 [2024-07-25 10:01:13.673560] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:28.601 [2024-07-25 10:01:13.673616] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:28.601 [2024-07-25 10:01:13.673624] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:28.858 10:01:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:28.858 10:01:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # return 0 00:12:28.858 10:01:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:28.858 10:01:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:28.858 10:01:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:28.858 10:01:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:28.858 10:01:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:12:28.858 10:01:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.858 10:01:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:28.858 [2024-07-25 10:01:13.844285] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:28.858 10:01:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.858 10:01:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:12:28.858 10:01:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.858 10:01:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:28.858 10:01:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.858 10:01:13 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:12:28.858 10:01:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:28.858 10:01:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.858 10:01:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:28.858 10:01:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.858 10:01:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:28.858 10:01:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.858 10:01:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:28.858 10:01:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.858 10:01:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:28.858 10:01:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:28.858 10:01:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:28.858 [2024-07-25 10:01:13.905513] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:28.858 10:01:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:28.858 10:01:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:12:28.858 10:01:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:12:28.858 10:01:13 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:12:32.131 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:34.684 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:37.211 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:40.489 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:43.014 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:43.014 10:01:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:12:43.014 10:01:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:12:43.014 10:01:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:43.014 10:01:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # sync 00:12:43.014 10:01:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:43.014 10:01:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@120 -- # set +e 00:12:43.014 10:01:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:43.014 10:01:27 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:43.014 rmmod nvme_tcp 00:12:43.014 rmmod nvme_fabrics 00:12:43.014 rmmod nvme_keyring 00:12:43.014 10:01:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:43.014 10:01:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set -e 00:12:43.014 10:01:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # return 0 00:12:43.014 10:01:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # '[' -n 389402 ']' 00:12:43.014 10:01:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # killprocess 389402 00:12:43.014 10:01:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@950 -- # '[' -z 389402 ']' 00:12:43.014 10:01:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # kill -0 389402 00:12:43.014 10:01:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@955 -- # uname 00:12:43.014 10:01:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:43.014 10:01:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 389402 00:12:43.014 10:01:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:43.014 10:01:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:43.014 10:01:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@968 -- # echo 'killing process with pid 389402' 00:12:43.014 killing process with pid 389402 00:12:43.014 10:01:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@969 -- # kill 389402 00:12:43.014 10:01:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@974 -- # wait 389402 00:12:43.273 10:01:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:43.273 10:01:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:43.273 10:01:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:43.273 10:01:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:43.273 10:01:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:43.273 10:01:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:43.273 10:01:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:43.273 10:01:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:45.807 10:01:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:45.807 00:12:45.807 real 0m19.698s 00:12:45.807 user 0m58.044s 00:12:45.807 sys 0m3.817s 00:12:45.808 10:01:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:45.808 10:01:30 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:45.808 ************************************ 00:12:45.808 END TEST nvmf_connect_disconnect 00:12:45.808 ************************************ 00:12:45.808 10:01:30 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:45.808 10:01:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:45.808 10:01:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:45.808 10:01:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:45.808 ************************************ 00:12:45.808 START TEST nvmf_multitarget 00:12:45.808 ************************************ 00:12:45.808 10:01:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:45.808 * Looking for test storage... 00:12:45.808 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:45.808 10:01:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:45.808 10:01:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:12:45.808 10:01:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:45.808 10:01:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:45.808 10:01:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:45.808 10:01:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:45.808 10:01:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:45.808 10:01:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:45.808 10:01:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:45.808 10:01:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:45.808 10:01:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:45.808 10:01:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:45.808 10:01:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:12:45.808 10:01:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:12:45.808 10:01:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:45.808 10:01:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:45.808 10:01:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:45.808 10:01:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:45.808 10:01:30 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:45.808 10:01:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:45.808 10:01:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:45.808 10:01:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:45.808 10:01:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:45.808 10:01:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:45.808 10:01:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:45.808 10:01:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:12:45.808 10:01:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:45.808 10:01:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@47 -- # : 0 00:12:45.808 10:01:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget 
-- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:45.808 10:01:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:45.808 10:01:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:45.808 10:01:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:45.808 10:01:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:45.808 10:01:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:45.808 10:01:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:45.808 10:01:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:45.808 10:01:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:12:45.808 10:01:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:12:45.808 10:01:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:45.808 10:01:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:45.808 10:01:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:45.808 10:01:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:45.808 10:01:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:45.808 10:01:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:45.808 10:01:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:45.808 10:01:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:45.808 10:01:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:45.808 10:01:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:45.808 10:01:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@285 -- # xtrace_disable 00:12:45.808 10:01:30 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:48.339 10:01:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:48.339 10:01:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # pci_devs=() 00:12:48.339 10:01:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:48.339 10:01:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:48.339 10:01:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:48.339 10:01:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:48.339 10:01:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:48.339 10:01:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@295 -- # net_devs=() 00:12:48.339 10:01:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
nvmf/common.sh@295 -- # local -ga net_devs 00:12:48.339 10:01:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@296 -- # e810=() 00:12:48.339 10:01:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@296 -- # local -ga e810 00:12:48.339 10:01:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # x722=() 00:12:48.339 10:01:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # local -ga x722 00:12:48.339 10:01:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # mlx=() 00:12:48.339 10:01:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # local -ga mlx 00:12:48.339 10:01:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:48.339 10:01:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:48.339 10:01:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:48.339 10:01:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:48.339 10:01:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:48.339 10:01:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:48.339 10:01:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:48.339 10:01:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:48.339 10:01:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:48.339 10:01:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:48.339 10:01:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:48.339 10:01:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:48.339 10:01:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:48.339 10:01:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:48.339 10:01:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:48.339 10:01:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:48.339 10:01:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:48.339 10:01:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:48.339 10:01:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:12:48.339 Found 0000:84:00.0 (0x8086 - 0x159b) 00:12:48.339 10:01:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:48.339 10:01:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:48.339 10:01:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:48.339 10:01:32 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:48.339 10:01:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:48.339 10:01:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:48.339 10:01:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:12:48.339 Found 0000:84:00.1 (0x8086 - 0x159b) 00:12:48.339 10:01:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:48.340 10:01:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:48.340 10:01:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:48.340 10:01:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:48.340 10:01:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:48.340 10:01:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:48.340 10:01:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:48.340 10:01:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:48.340 10:01:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:48.340 10:01:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:48.340 10:01:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:48.340 10:01:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:48.340 10:01:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:48.340 10:01:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:48.340 10:01:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:48.340 10:01:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:12:48.340 Found net devices under 0000:84:00.0: cvl_0_0 00:12:48.340 10:01:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:48.340 10:01:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:48.340 10:01:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:48.340 10:01:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:48.340 10:01:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:48.340 10:01:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:48.340 10:01:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:48.340 10:01:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:48.340 10:01:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@400 -- # 
echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:12:48.340 Found net devices under 0000:84:00.1: cvl_0_1 00:12:48.340 10:01:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:48.340 10:01:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:48.340 10:01:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # is_hw=yes 00:12:48.340 10:01:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:48.340 10:01:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:48.340 10:01:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:48.340 10:01:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:48.340 10:01:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:48.340 10:01:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:48.340 10:01:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:48.340 10:01:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:48.340 10:01:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:48.340 10:01:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:48.340 10:01:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:48.340 10:01:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:48.340 10:01:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:48.340 10:01:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:48.340 10:01:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:48.340 10:01:32 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:48.340 10:01:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:48.340 10:01:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:48.340 10:01:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:48.340 10:01:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:48.340 10:01:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:48.340 10:01:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:48.340 10:01:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:48.340 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:48.340 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.249 ms 00:12:48.340 00:12:48.340 --- 10.0.0.2 ping statistics --- 00:12:48.340 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:48.340 rtt min/avg/max/mdev = 0.249/0.249/0.249/0.000 ms 00:12:48.340 10:01:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:48.340 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:48.340 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.205 ms 00:12:48.340 00:12:48.340 --- 10.0.0.1 ping statistics --- 00:12:48.340 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:48.340 rtt min/avg/max/mdev = 0.205/0.205/0.205/0.000 ms 00:12:48.340 10:01:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:48.340 10:01:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # return 0 00:12:48.340 10:01:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:48.340 10:01:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:48.340 10:01:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:48.340 10:01:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:48.340 10:01:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:48.340 10:01:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:48.340 10:01:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:48.340 10:01:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:12:48.340 10:01:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:48.340 10:01:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:48.340 10:01:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:48.340 10:01:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@481 -- # nvmfpid=393121 00:12:48.340 10:01:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:48.340 10:01:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # waitforlisten 393121 00:12:48.340 10:01:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@831 -- # '[' -z 393121 ']' 00:12:48.340 10:01:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:48.340 10:01:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:48.340 10:01:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:48.340 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
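The nvmf_tcp_init/nvmfappstart trace above reduces to a short shell recipe: move one port of the NIC into a private network namespace, address both ends, open TCP port 4420, and launch the target inside the namespace. A minimal sketch, assuming the same cvl_0_0/cvl_0_1 interface names and 10.0.0.x addresses as this run (the commands are lifted from the traced steps, with the long Jenkins path shortened):

    # Target port lives in its own namespace; initiator port stays in the default one.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # Admit NVMe/TCP traffic, then verify reachability in both directions.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
    # Run the target inside the namespace, as nvmfappstart does above.
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &

Splitting the two ports of one physical e810 card across namespaces is what lets a single phy node exercise target and initiator over a real wire rather than loopback.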
00:12:48.340 10:01:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:48.340 10:01:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:48.340 [2024-07-25 10:01:33.224212] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:12:48.340 [2024-07-25 10:01:33.224309] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:48.340 EAL: No free 2048 kB hugepages reported on node 1 00:12:48.340 [2024-07-25 10:01:33.300469] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:48.340 [2024-07-25 10:01:33.423425] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:48.340 [2024-07-25 10:01:33.423497] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:48.340 [2024-07-25 10:01:33.423524] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:48.340 [2024-07-25 10:01:33.423544] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:48.340 [2024-07-25 10:01:33.423561] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:48.340 [2024-07-25 10:01:33.423629] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:48.340 [2024-07-25 10:01:33.423689] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:48.340 [2024-07-25 10:01:33.423752] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:48.340 [2024-07-25 10:01:33.423743] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:48.597 10:01:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:48.597 10:01:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # return 0 00:12:48.597 10:01:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:48.597 10:01:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:48.597 10:01:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:48.597 10:01:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:48.597 10:01:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:12:48.597 10:01:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:48.597 10:01:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:12:48.597 10:01:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:12:48.597 10:01:33 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:12:48.854 "nvmf_tgt_1" 00:12:48.854 10:01:33 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:12:48.854 "nvmf_tgt_2" 00:12:48.854 10:01:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:48.854 10:01:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:12:49.111 10:01:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:12:49.111 10:01:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:12:49.111 true 00:12:49.111 10:01:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:12:49.369 true 00:12:49.369 10:01:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:49.369 10:01:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:12:49.627 10:01:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:12:49.627 10:01:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:12:49.627 10:01:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:12:49.627 10:01:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:49.627 10:01:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # sync 00:12:49.627 10:01:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:49.627 10:01:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@120 -- # set +e 00:12:49.627 10:01:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:49.627 10:01:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:49.627 rmmod nvme_tcp 00:12:49.627 rmmod nvme_fabrics 00:12:49.627 rmmod nvme_keyring 00:12:49.627 10:01:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:49.627 10:01:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set -e 00:12:49.627 10:01:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # return 0 00:12:49.627 10:01:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@489 -- # '[' -n 393121 ']' 00:12:49.627 10:01:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@490 -- # killprocess 393121 00:12:49.627 10:01:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@950 -- # '[' -z 393121 ']' 00:12:49.627 10:01:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # kill -0 393121 00:12:49.627 10:01:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@955 -- # uname 00:12:49.627 10:01:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 
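The multitarget exchange traced above is a plain create/verify/delete cycle against the per-test RPC wrapper. A condensed sketch, assuming test/nvmf/target/multitarget_rpc.py is invoked from the repo root as in this run ($RPC is shorthand introduced here, not a variable from the script):

    RPC=./test/nvmf/target/multitarget_rpc.py
    [ "$($RPC nvmf_get_targets | jq length)" -eq 1 ]   # only the default target exists
    $RPC nvmf_create_target -n nvmf_tgt_1 -s 32        # flags exactly as traced above
    $RPC nvmf_create_target -n nvmf_tgt_2 -s 32
    [ "$($RPC nvmf_get_targets | jq length)" -eq 3 ]   # default + two new targets
    $RPC nvmf_delete_target -n nvmf_tgt_1
    $RPC nvmf_delete_target -n nvmf_tgt_2
    [ "$($RPC nvmf_get_targets | jq length)" -eq 1 ]   # back to just the default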
00:12:49.627 10:01:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 393121 00:12:49.627 10:01:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:49.627 10:01:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:49.627 10:01:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@968 -- # echo 'killing process with pid 393121' 00:12:49.627 killing process with pid 393121 00:12:49.627 10:01:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@969 -- # kill 393121 00:12:49.627 10:01:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@974 -- # wait 393121 00:12:49.886 10:01:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:49.886 10:01:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:49.886 10:01:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:49.886 10:01:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:49.886 10:01:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:49.886 10:01:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:49.886 10:01:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:49.886 10:01:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:52.420 10:01:36 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:52.420 00:12:52.420 real 0m6.569s 00:12:52.420 user 0m7.727s 00:12:52.420 sys 0m2.357s 00:12:52.420 10:01:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:52.420 10:01:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:52.420 ************************************ 00:12:52.420 END TEST nvmf_multitarget 00:12:52.420 ************************************ 00:12:52.420 10:01:37 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:52.420 10:01:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:52.420 10:01:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:52.420 10:01:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:52.420 ************************************ 00:12:52.420 START TEST nvmf_rpc 00:12:52.420 ************************************ 00:12:52.420 10:01:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:52.420 * Looking for test storage... 
00:12:52.420 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:52.420 10:01:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:52.420 10:01:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:12:52.420 10:01:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:52.420 10:01:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:52.420 10:01:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:52.420 10:01:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:52.420 10:01:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:52.420 10:01:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:52.420 10:01:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:52.420 10:01:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:52.420 10:01:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:52.420 10:01:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:52.420 10:01:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:12:52.420 10:01:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:12:52.420 10:01:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:52.420 10:01:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:52.420 10:01:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:52.420 10:01:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:52.420 10:01:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:52.420 10:01:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:52.420 10:01:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:52.420 10:01:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:52.420 10:01:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:52.420 10:01:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:52.420 10:01:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:52.420 10:01:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:12:52.420 10:01:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:52.420 10:01:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@47 -- # : 0 00:12:52.420 10:01:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:52.420 10:01:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:52.420 10:01:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:52.420 10:01:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:52.420 10:01:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:52.420 10:01:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:52.420 10:01:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:52.420 10:01:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:52.420 10:01:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:12:52.420 10:01:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:12:52.420 10:01:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:52.420 10:01:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:52.420 10:01:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:52.420 10:01:37 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:52.420 10:01:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:52.420 10:01:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:52.420 10:01:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:52.420 10:01:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:52.420 10:01:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:52.420 10:01:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:52.420 10:01:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@285 -- # xtrace_disable 00:12:52.420 10:01:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:54.958 10:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:54.958 10:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # pci_devs=() 00:12:54.958 10:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:54.958 10:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:54.958 10:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:54.958 10:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:54.958 10:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:54.958 10:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@295 -- # net_devs=() 00:12:54.958 10:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:54.958 10:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@296 -- # e810=() 00:12:54.958 10:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@296 -- # local -ga e810 00:12:54.958 10:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # x722=() 00:12:54.958 10:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # local -ga x722 00:12:54.958 10:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # mlx=() 00:12:54.958 10:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # local -ga mlx 00:12:54.958 10:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:54.958 10:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:54.958 10:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:54.958 10:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:54.958 10:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:54.958 10:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:54.958 10:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:54.958 10:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:54.959 10:01:39 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:54.959 10:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:54.959 10:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:54.959 10:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:54.959 10:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:54.959 10:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:54.959 10:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:54.959 10:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:54.959 10:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:54.959 10:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:54.959 10:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:12:54.959 Found 0000:84:00.0 (0x8086 - 0x159b) 00:12:54.959 10:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:54.959 10:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:54.959 10:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:54.959 10:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:54.959 10:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:54.959 10:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:54.959 10:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:12:54.959 Found 0000:84:00.1 (0x8086 - 0x159b) 00:12:54.959 10:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:54.959 10:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:54.959 10:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:54.959 10:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:54.959 10:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:54.959 10:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:54.959 10:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:54.959 10:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:54.959 10:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:54.959 10:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:54.959 10:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:54.959 10:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:54.959 10:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:54.959 
10:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:54.959 10:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:54.959 10:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:12:54.959 Found net devices under 0000:84:00.0: cvl_0_0 00:12:54.959 10:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:54.959 10:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:54.959 10:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:54.959 10:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:54.959 10:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:54.959 10:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:54.959 10:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:54.959 10:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:54.959 10:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:12:54.959 Found net devices under 0000:84:00.1: cvl_0_1 00:12:54.959 10:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:54.959 10:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:54.959 10:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # is_hw=yes 00:12:54.959 10:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:54.959 10:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:54.959 10:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:54.959 10:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:54.959 10:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:54.959 10:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:54.959 10:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:54.959 10:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:54.959 10:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:54.959 10:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:54.959 10:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:54.959 10:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:54.959 10:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:54.959 10:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:54.959 10:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:54.959 10:01:39 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:54.959 10:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:54.959 10:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:54.959 10:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:54.959 10:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:54.959 10:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:54.959 10:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:54.959 10:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:54.959 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:54.959 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.138 ms 00:12:54.959 00:12:54.959 --- 10.0.0.2 ping statistics --- 00:12:54.959 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:54.959 rtt min/avg/max/mdev = 0.138/0.138/0.138/0.000 ms 00:12:54.959 10:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:54.959 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:54.959 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.136 ms 00:12:54.959 00:12:54.959 --- 10.0.0.1 ping statistics --- 00:12:54.959 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:54.959 rtt min/avg/max/mdev = 0.136/0.136/0.136/0.000 ms 00:12:54.959 10:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:54.959 10:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # return 0 00:12:54.959 10:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:54.959 10:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:54.959 10:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:54.959 10:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:54.959 10:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:54.959 10:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:54.959 10:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:54.959 10:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:12:54.959 10:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:54.959 10:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:54.959 10:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:54.959 10:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@481 -- # nvmfpid=395354 00:12:54.959 10:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:54.959 10:01:39 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # waitforlisten 395354 00:12:54.959 10:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@831 -- # '[' -z 395354 ']' 00:12:54.959 10:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:54.959 10:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:54.959 10:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:54.959 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:54.959 10:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:54.959 10:01:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:54.959 [2024-07-25 10:01:39.844731] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:12:54.959 [2024-07-25 10:01:39.844906] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:54.959 EAL: No free 2048 kB hugepages reported on node 1 00:12:54.959 [2024-07-25 10:01:39.952366] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:54.959 [2024-07-25 10:01:40.085599] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:54.960 [2024-07-25 10:01:40.085664] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:54.960 [2024-07-25 10:01:40.085691] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:54.960 [2024-07-25 10:01:40.085715] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:54.960 [2024-07-25 10:01:40.085735] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:54.960 [2024-07-25 10:01:40.085801] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:54.960 [2024-07-25 10:01:40.085857] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:54.960 [2024-07-25 10:01:40.085923] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:54.960 [2024-07-25 10:01:40.085932] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:55.218 10:01:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:55.218 10:01:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # return 0 00:12:55.218 10:01:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:55.218 10:01:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:55.218 10:01:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:55.218 10:01:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:55.218 10:01:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:12:55.218 10:01:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.218 10:01:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:55.218 10:01:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.218 10:01:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:12:55.218 "tick_rate": 2700000000, 00:12:55.218 "poll_groups": [ 00:12:55.218 { 00:12:55.218 "name": "nvmf_tgt_poll_group_000", 00:12:55.218 "admin_qpairs": 0, 00:12:55.218 "io_qpairs": 0, 00:12:55.218 "current_admin_qpairs": 0, 00:12:55.218 "current_io_qpairs": 0, 00:12:55.218 "pending_bdev_io": 0, 00:12:55.218 "completed_nvme_io": 0, 00:12:55.218 "transports": [] 00:12:55.218 }, 00:12:55.218 { 00:12:55.218 "name": "nvmf_tgt_poll_group_001", 00:12:55.218 "admin_qpairs": 0, 00:12:55.218 "io_qpairs": 0, 00:12:55.218 "current_admin_qpairs": 0, 00:12:55.218 "current_io_qpairs": 0, 00:12:55.218 "pending_bdev_io": 0, 00:12:55.218 "completed_nvme_io": 0, 00:12:55.218 "transports": [] 00:12:55.218 }, 00:12:55.218 { 00:12:55.218 "name": "nvmf_tgt_poll_group_002", 00:12:55.218 "admin_qpairs": 0, 00:12:55.218 "io_qpairs": 0, 00:12:55.218 "current_admin_qpairs": 0, 00:12:55.218 "current_io_qpairs": 0, 00:12:55.218 "pending_bdev_io": 0, 00:12:55.218 "completed_nvme_io": 0, 00:12:55.218 "transports": [] 00:12:55.218 }, 00:12:55.218 { 00:12:55.218 "name": "nvmf_tgt_poll_group_003", 00:12:55.218 "admin_qpairs": 0, 00:12:55.218 "io_qpairs": 0, 00:12:55.218 "current_admin_qpairs": 0, 00:12:55.218 "current_io_qpairs": 0, 00:12:55.218 "pending_bdev_io": 0, 00:12:55.218 "completed_nvme_io": 0, 00:12:55.218 "transports": [] 00:12:55.218 } 00:12:55.218 ] 00:12:55.218 }' 00:12:55.218 10:01:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:12:55.218 10:01:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:12:55.218 10:01:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:12:55.218 10:01:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:12:55.218 10:01:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 
00:12:55.218 10:01:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:12:55.218 10:01:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:12:55.218 10:01:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:55.218 10:01:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.218 10:01:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:55.218 [2024-07-25 10:01:40.376628] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:55.476 10:01:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.476 10:01:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:12:55.476 10:01:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.476 10:01:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:55.476 10:01:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.476 10:01:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:12:55.476 "tick_rate": 2700000000, 00:12:55.476 "poll_groups": [ 00:12:55.476 { 00:12:55.476 "name": "nvmf_tgt_poll_group_000", 00:12:55.476 "admin_qpairs": 0, 00:12:55.476 "io_qpairs": 0, 00:12:55.476 "current_admin_qpairs": 0, 00:12:55.476 "current_io_qpairs": 0, 00:12:55.476 "pending_bdev_io": 0, 00:12:55.476 "completed_nvme_io": 0, 00:12:55.476 "transports": [ 00:12:55.476 { 00:12:55.476 "trtype": "TCP" 00:12:55.476 } 00:12:55.476 ] 00:12:55.476 }, 00:12:55.476 { 00:12:55.476 "name": "nvmf_tgt_poll_group_001", 00:12:55.476 "admin_qpairs": 0, 00:12:55.476 "io_qpairs": 0, 00:12:55.476 "current_admin_qpairs": 0, 00:12:55.476 "current_io_qpairs": 0, 00:12:55.476 "pending_bdev_io": 0, 00:12:55.476 "completed_nvme_io": 0, 00:12:55.476 "transports": [ 00:12:55.476 { 00:12:55.476 "trtype": "TCP" 00:12:55.476 } 00:12:55.476 ] 00:12:55.476 }, 00:12:55.476 { 00:12:55.476 "name": "nvmf_tgt_poll_group_002", 00:12:55.476 "admin_qpairs": 0, 00:12:55.476 "io_qpairs": 0, 00:12:55.476 "current_admin_qpairs": 0, 00:12:55.476 "current_io_qpairs": 0, 00:12:55.476 "pending_bdev_io": 0, 00:12:55.476 "completed_nvme_io": 0, 00:12:55.476 "transports": [ 00:12:55.476 { 00:12:55.476 "trtype": "TCP" 00:12:55.476 } 00:12:55.476 ] 00:12:55.476 }, 00:12:55.476 { 00:12:55.476 "name": "nvmf_tgt_poll_group_003", 00:12:55.476 "admin_qpairs": 0, 00:12:55.476 "io_qpairs": 0, 00:12:55.476 "current_admin_qpairs": 0, 00:12:55.476 "current_io_qpairs": 0, 00:12:55.476 "pending_bdev_io": 0, 00:12:55.476 "completed_nvme_io": 0, 00:12:55.476 "transports": [ 00:12:55.476 { 00:12:55.476 "trtype": "TCP" 00:12:55.476 } 00:12:55.476 ] 00:12:55.476 } 00:12:55.476 ] 00:12:55.476 }' 00:12:55.476 10:01:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:12:55.476 10:01:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:12:55.476 10:01:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:12:55.476 10:01:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:55.476 10:01:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:12:55.477 10:01:40 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:12:55.477 10:01:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:12:55.477 10:01:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:12:55.477 10:01:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:55.477 10:01:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:12:55.477 10:01:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:12:55.477 10:01:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:12:55.477 10:01:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:12:55.477 10:01:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:12:55.477 10:01:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.477 10:01:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:55.477 Malloc1 00:12:55.477 10:01:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.477 10:01:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:55.477 10:01:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.477 10:01:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:55.477 10:01:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.477 10:01:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:55.477 10:01:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.477 10:01:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:55.477 10:01:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.477 10:01:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:12:55.477 10:01:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.477 10:01:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:55.477 10:01:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.477 10:01:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:55.477 10:01:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.477 10:01:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:55.477 [2024-07-25 10:01:40.574557] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:55.477 10:01:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.477 10:01:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -a 10.0.0.2 -s 4420 00:12:55.477 10:01:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0 00:12:55.477 10:01:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -a 10.0.0.2 -s 4420 00:12:55.477 10:01:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme 00:12:55.477 10:01:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:55.477 10:01:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme 00:12:55.477 10:01:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:55.477 10:01:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme 00:12:55.477 10:01:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:55.477 10:01:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:12:55.477 10:01:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:12:55.477 10:01:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -a 10.0.0.2 -s 4420 00:12:55.477 [2024-07-25 10:01:40.597105] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02' 00:12:55.477 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:55.477 could not add new controller: failed to write to nvme-fabrics device 00:12:55.477 10:01:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1 00:12:55.477 10:01:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:55.477 10:01:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:55.477 10:01:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:55.477 10:01:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:12:55.477 10:01:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:55.477 10:01:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:55.477 10:01:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:55.477 10:01:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 
--hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:56.411 10:01:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:12:56.411 10:01:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:12:56.411 10:01:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:56.411 10:01:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:12:56.411 10:01:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:12:58.308 10:01:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:58.308 10:01:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:58.308 10:01:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:58.308 10:01:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:58.308 10:01:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:58.308 10:01:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:12:58.308 10:01:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:58.308 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:58.308 10:01:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:58.308 10:01:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:12:58.308 10:01:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:58.308 10:01:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:58.308 10:01:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:58.308 10:01:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:58.308 10:01:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:12:58.308 10:01:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:12:58.308 10:01:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.308 10:01:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:58.308 10:01:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.308 10:01:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:58.308 10:01:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0 00:12:58.308 10:01:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:58.308 10:01:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme 00:12:58.308 10:01:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:58.308 10:01:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme 00:12:58.308 10:01:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:58.308 10:01:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme 00:12:58.308 10:01:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:58.308 10:01:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:12:58.308 10:01:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:12:58.308 10:01:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:58.308 [2024-07-25 10:01:43.377887] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02' 00:12:58.308 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:58.308 could not add new controller: failed to write to nvme-fabrics device 00:12:58.308 10:01:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1 00:12:58.308 10:01:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:58.308 10:01:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:58.308 10:01:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:58.308 10:01:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:12:58.308 10:01:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:58.308 10:01:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:58.308 10:01:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:58.308 10:01:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:58.872 10:01:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:12:58.872 10:01:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:12:58.872 10:01:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:58.872 10:01:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:12:58.872 10:01:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 
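The two "does not allow host" failures above are the point of this passage: with allow_any_host disabled, a connect is admitted only while the host NQN sits on the subsystem's host list. A condensed sketch of the sequence with stock rpc.py and nvme-cli, reusing the NQNs from this run (the traced connects also pass --hostid and go through the harness's NOT/waitforserial helpers, omitted here):

    RPC=./scripts/rpc.py
    SUBNQN=nqn.2016-06.io.spdk:cnode1
    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02
    $RPC nvmf_subsystem_allow_any_host -d "$SUBNQN"       # close the subsystem
    nvme connect --hostnqn="$HOSTNQN" -t tcp -n "$SUBNQN" -a 10.0.0.2 -s 4420  # rejected
    $RPC nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN"     # put this host on the list
    nvme connect --hostnqn="$HOSTNQN" -t tcp -n "$SUBNQN" -a 10.0.0.2 -s 4420  # admitted
    nvme disconnect -n "$SUBNQN"
    $RPC nvmf_subsystem_remove_host "$SUBNQN" "$HOSTNQN"  # rejected again
    $RPC nvmf_subsystem_allow_any_host -e "$SUBNQN"       # reopen: any host admitted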
00:13:01.399 10:01:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:01.399 10:01:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:01.399 10:01:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:01.399 10:01:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:13:01.399 10:01:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:01.399 10:01:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:13:01.399 10:01:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:01.399 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:01.399 10:01:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:01.399 10:01:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:13:01.399 10:01:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:13:01.399 10:01:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:01.399 10:01:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:13:01.399 10:01:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:01.399 10:01:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:13:01.399 10:01:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:01.399 10:01:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.399 10:01:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:01.399 10:01:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.399 10:01:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:13:01.399 10:01:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:01.399 10:01:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:01.399 10:01:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.399 10:01:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:01.399 10:01:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.399 10:01:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:01.399 10:01:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.399 10:01:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:01.399 [2024-07-25 10:01:46.147721] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:01.399 10:01:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.399 
10:01:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:01.399 10:01:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.399 10:01:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:01.399 10:01:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.399 10:01:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:01.399 10:01:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:01.399 10:01:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:01.399 10:01:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:01.399 10:01:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:01.964 10:01:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:01.964 10:01:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:13:01.964 10:01:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:01.964 10:01:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:13:01.964 10:01:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:13:03.859 10:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:03.859 10:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:03.859 10:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:03.859 10:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:13:03.859 10:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:03.859 10:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:13:03.859 10:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:03.859 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:03.859 10:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:03.859 10:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:13:03.859 10:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:13:03.859 10:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:03.859 10:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:13:03.859 10:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:03.859 10:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 
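# A minimal sketch (not captured output) of one pass of the seq-1-5 loop traced above
# (target/rpc.sh@81..@94), under the same assumptions as the previous sketch; the serial
# SPDKISFASTANDAWESOME, bdev Malloc1 and NSID 5 come straight from the trace, while the
# polling loop is a condensed paraphrase of the waitforserial helper it exercises.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02
for i in $(seq 1 5); do
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
    $rpc nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
    nvme connect --hostnqn="$hostnqn" --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 \
        -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
    # waitforserial: poll lsblk (up to 16 tries, 2 s apart) until the serial shows up.
    n=0
    until lsblk -l -o NAME,SERIAL | grep -q -w SPDKISFASTANDAWESOME; do
        [ $((n += 1)) -le 15 ] || break
        sleep 2
    done
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1   # waitforserial_disconnect polls the inverse
    $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
    $rpc nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
done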
00:13:03.859 10:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:03.859 10:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.859 10:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:03.859 10:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.859 10:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:03.859 10:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.859 10:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:03.859 10:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.859 10:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:03.859 10:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:03.859 10:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.859 10:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:03.859 10:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.859 10:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:03.859 10:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.859 10:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:03.859 [2024-07-25 10:01:48.984765] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:03.859 10:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.859 10:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:03.859 10:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.859 10:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:03.859 10:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.859 10:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:03.859 10:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:03.859 10:01:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:03.859 10:01:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:03.859 10:01:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:04.790 10:01:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:04.790 10:01:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc 
-- common/autotest_common.sh@1198 -- # local i=0 00:13:04.790 10:01:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:04.790 10:01:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:13:04.790 10:01:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:13:06.689 10:01:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:06.689 10:01:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:06.689 10:01:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:06.689 10:01:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:13:06.689 10:01:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:06.689 10:01:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:13:06.689 10:01:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:06.689 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:06.689 10:01:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:06.689 10:01:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:13:06.689 10:01:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:13:06.689 10:01:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:06.689 10:01:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:13:06.689 10:01:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:06.689 10:01:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:13:06.689 10:01:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:06.689 10:01:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.689 10:01:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:06.689 10:01:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.689 10:01:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:06.689 10:01:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.689 10:01:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:06.689 10:01:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.689 10:01:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:06.689 10:01:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:06.689 10:01:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.689 10:01:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:13:06.689 10:01:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.689 10:01:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:06.689 10:01:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.689 10:01:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:06.689 [2024-07-25 10:01:51.765844] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:06.689 10:01:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.689 10:01:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:06.689 10:01:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.689 10:01:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:06.689 10:01:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.689 10:01:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:06.689 10:01:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.689 10:01:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:06.689 10:01:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.689 10:01:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:07.254 10:01:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:07.254 10:01:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:13:07.254 10:01:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:07.254 10:01:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:13:07.254 10:01:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:13:09.780 10:01:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:09.780 10:01:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:09.780 10:01:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:09.781 10:01:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:13:09.781 10:01:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:09.781 10:01:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:13:09.781 10:01:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:09.781 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:09.781 10:01:54 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:09.781 10:01:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:13:09.781 10:01:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:13:09.781 10:01:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:09.781 10:01:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:13:09.781 10:01:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:09.781 10:01:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:13:09.781 10:01:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:09.781 10:01:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.781 10:01:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:09.781 10:01:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.781 10:01:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:09.781 10:01:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.781 10:01:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:09.781 10:01:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.781 10:01:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:09.781 10:01:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:09.781 10:01:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.781 10:01:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:09.781 10:01:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.781 10:01:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:09.781 10:01:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.781 10:01:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:09.781 [2024-07-25 10:01:54.525818] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:09.781 10:01:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.781 10:01:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:09.781 10:01:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.781 10:01:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:09.781 10:01:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.781 10:01:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd 
nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:09.781 10:01:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:09.781 10:01:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:09.781 10:01:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:09.781 10:01:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:10.346 10:01:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:10.346 10:01:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:13:10.346 10:01:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:10.346 10:01:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:13:10.346 10:01:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:13:12.276 10:01:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:12.276 10:01:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:12.276 10:01:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:12.276 10:01:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:13:12.276 10:01:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:12.276 10:01:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:13:12.276 10:01:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:12.276 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:12.276 10:01:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:12.276 10:01:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:13:12.276 10:01:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:13:12.276 10:01:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:12.276 10:01:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:13:12.276 10:01:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:12.276 10:01:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:13:12.276 10:01:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:12.276 10:01:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.276 10:01:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:12.276 10:01:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.276 10:01:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:12.276 10:01:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.276 10:01:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:12.276 10:01:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.276 10:01:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:12.276 10:01:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:12.276 10:01:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.276 10:01:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:12.276 10:01:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.276 10:01:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:12.276 10:01:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.276 10:01:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:12.276 [2024-07-25 10:01:57.378054] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:12.276 10:01:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.276 10:01:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:12.276 10:01:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.276 10:01:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:12.276 10:01:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.276 10:01:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:12.276 10:01:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:12.276 10:01:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:12.276 10:01:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:12.276 10:01:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:12.841 10:01:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:12.841 10:01:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:13:12.841 10:01:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:12.841 10:01:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:13:12.841 10:01:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:13:15.367 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:15.367 10:02:00 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:15.367 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:15.367 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:13:15.367 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:15.367 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:13:15.367 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:15.367 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:15.367 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:15.367 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:13:15.367 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:13:15.367 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:15.367 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:13:15.367 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:15.367 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:13:15.367 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:15.367 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.367 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:15.367 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.367 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:15.367 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.367 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:15.367 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.367 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:13:15.367 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:15.367 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:15.367 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.367 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:15.367 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.367 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:15.367 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.367 10:02:00 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:15.367 [2024-07-25 10:02:00.164112] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:15.367 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.367 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:15.367 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.367 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:15.367 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.367 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:15.367 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.367 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:15.367 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.367 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:15.367 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.367 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:15.367 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.367 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:15.367 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.367 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:15.367 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.367 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:15.367 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:15.367 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.367 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:15.367 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.368 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:15.368 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.368 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:15.368 [2024-07-25 10:02:00.212157] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:15.368 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.368 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:15.368 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.368 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:15.368 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.368 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:15.368 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.368 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:15.368 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.368 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:15.368 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.368 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:15.368 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.368 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:15.368 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.368 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:15.368 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.368 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:15.368 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:15.368 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.368 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:15.368 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.368 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:15.368 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.368 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:15.368 [2024-07-25 10:02:00.260324] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:15.368 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.368 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:15.368 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.368 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:15.368 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.368 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd 
nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:15.368 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.368 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:15.368 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.368 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:15.368 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.368 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:15.368 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.368 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:15.368 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.368 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:15.368 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.368 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:15.368 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:15.368 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.368 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:15.368 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.368 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:15.368 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.368 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:15.368 [2024-07-25 10:02:00.308503] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:15.368 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.368 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:15.368 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.368 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:15.368 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.368 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:15.368 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.368 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:15.368 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.368 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:15.368 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.368 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:15.368 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.368 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:15.368 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.368 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:15.368 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.368 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:15.368 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:15.368 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.368 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:15.368 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.368 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:15.368 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.368 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:15.368 [2024-07-25 10:02:00.356665] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:15.368 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.368 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:15.368 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.368 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:15.368 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.368 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:15.368 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.368 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:15.368 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.368 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:15.368 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.368 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:15.368 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.368 10:02:00 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:15.368 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.368 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:15.368 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.368 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:13:15.368 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:15.368 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:15.368 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:15.368 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:13:15.368 "tick_rate": 2700000000, 00:13:15.368 "poll_groups": [ 00:13:15.368 { 00:13:15.368 "name": "nvmf_tgt_poll_group_000", 00:13:15.368 "admin_qpairs": 2, 00:13:15.368 "io_qpairs": 84, 00:13:15.368 "current_admin_qpairs": 0, 00:13:15.368 "current_io_qpairs": 0, 00:13:15.368 "pending_bdev_io": 0, 00:13:15.368 "completed_nvme_io": 168, 00:13:15.368 "transports": [ 00:13:15.368 { 00:13:15.368 "trtype": "TCP" 00:13:15.368 } 00:13:15.368 ] 00:13:15.368 }, 00:13:15.368 { 00:13:15.368 "name": "nvmf_tgt_poll_group_001", 00:13:15.368 "admin_qpairs": 2, 00:13:15.368 "io_qpairs": 84, 00:13:15.368 "current_admin_qpairs": 0, 00:13:15.368 "current_io_qpairs": 0, 00:13:15.368 "pending_bdev_io": 0, 00:13:15.368 "completed_nvme_io": 157, 00:13:15.368 "transports": [ 00:13:15.368 { 00:13:15.368 "trtype": "TCP" 00:13:15.368 } 00:13:15.368 ] 00:13:15.368 }, 00:13:15.368 { 00:13:15.368 "name": "nvmf_tgt_poll_group_002", 00:13:15.368 "admin_qpairs": 1, 00:13:15.368 "io_qpairs": 84, 00:13:15.368 "current_admin_qpairs": 0, 00:13:15.368 "current_io_qpairs": 0, 00:13:15.368 "pending_bdev_io": 0, 00:13:15.368 "completed_nvme_io": 138, 00:13:15.368 "transports": [ 00:13:15.368 { 00:13:15.368 "trtype": "TCP" 00:13:15.368 } 00:13:15.368 ] 00:13:15.368 }, 00:13:15.368 { 00:13:15.368 "name": "nvmf_tgt_poll_group_003", 00:13:15.368 "admin_qpairs": 2, 00:13:15.368 "io_qpairs": 84, 00:13:15.368 "current_admin_qpairs": 0, 00:13:15.368 "current_io_qpairs": 0, 00:13:15.368 "pending_bdev_io": 0, 00:13:15.368 "completed_nvme_io": 223, 00:13:15.368 "transports": [ 00:13:15.368 { 00:13:15.368 "trtype": "TCP" 00:13:15.368 } 00:13:15.368 ] 00:13:15.368 } 00:13:15.368 ] 00:13:15.368 }' 00:13:15.368 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:13:15.368 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:13:15.368 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:13:15.368 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:15.368 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:13:15.368 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:13:15.368 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:13:15.368 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq 
'.poll_groups[].io_qpairs' 00:13:15.368 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:15.626 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 336 > 0 )) 00:13:15.626 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:13:15.626 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:13:15.626 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:13:15.626 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:15.626 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # sync 00:13:15.626 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:15.626 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@120 -- # set +e 00:13:15.626 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:15.626 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:15.626 rmmod nvme_tcp 00:13:15.626 rmmod nvme_fabrics 00:13:15.626 rmmod nvme_keyring 00:13:15.626 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:15.626 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set -e 00:13:15.626 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # return 0 00:13:15.626 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@489 -- # '[' -n 395354 ']' 00:13:15.626 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@490 -- # killprocess 395354 00:13:15.626 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@950 -- # '[' -z 395354 ']' 00:13:15.626 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # kill -0 395354 00:13:15.626 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@955 -- # uname 00:13:15.626 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:15.626 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 395354 00:13:15.626 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:15.626 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:15.626 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 395354' 00:13:15.626 killing process with pid 395354 00:13:15.626 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@969 -- # kill 395354 00:13:15.626 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@974 -- # wait 395354 00:13:15.885 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:15.885 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:15.885 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:15.885 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:15.885 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:15.885 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:15.885 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:15.885 10:02:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:18.417 10:02:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:18.417 00:13:18.417 real 0m25.938s 00:13:18.417 user 1m22.891s 00:13:18.417 sys 0m4.606s 00:13:18.417 10:02:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:18.417 10:02:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:18.417 ************************************ 00:13:18.417 END TEST nvmf_rpc 00:13:18.417 ************************************ 00:13:18.417 10:02:03 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:13:18.417 10:02:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:18.417 10:02:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:18.418 10:02:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:18.418 ************************************ 00:13:18.418 START TEST nvmf_invalid 00:13:18.418 ************************************ 00:13:18.418 10:02:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:13:18.418 * Looking for test storage... 00:13:18.418 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:18.418 10:02:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:18.418 10:02:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:13:18.418 10:02:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:18.418 10:02:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:18.418 10:02:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:18.418 10:02:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:18.418 10:02:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:18.418 10:02:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:18.418 10:02:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:18.418 10:02:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:18.418 10:02:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:18.418 10:02:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:18.418 10:02:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:13:18.418 10:02:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:13:18.418 10:02:03 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:18.418 10:02:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:18.418 10:02:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:18.418 10:02:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:18.418 10:02:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:18.418 10:02:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:18.418 10:02:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:18.418 10:02:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:18.418 10:02:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:18.418 10:02:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:18.418 10:02:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:18.418 10:02:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:13:18.418 10:02:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:18.418 10:02:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@47 -- # : 0 00:13:18.418 10:02:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:18.418 10:02:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:18.418 10:02:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:18.418 10:02:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:18.418 10:02:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:18.418 10:02:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:18.418 10:02:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:18.418 10:02:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:18.418 10:02:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:13:18.418 10:02:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:18.418 10:02:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:13:18.418 10:02:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:13:18.418 10:02:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:13:18.418 10:02:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:13:18.418 10:02:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:18.418 10:02:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:18.418 10:02:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:18.418 10:02:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:18.418 10:02:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:18.418 10:02:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:18.418 10:02:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:18.418 10:02:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:18.418 10:02:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:18.418 10:02:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:18.418 10:02:03 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@285 -- # xtrace_disable 00:13:18.418 10:02:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:20.948 10:02:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:20.948 10:02:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # pci_devs=() 00:13:20.948 10:02:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:20.948 10:02:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:20.948 10:02:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:20.948 10:02:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:20.948 10:02:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:20.948 10:02:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@295 -- # net_devs=() 00:13:20.948 10:02:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:20.948 10:02:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@296 -- # e810=() 00:13:20.948 10:02:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@296 -- # local -ga e810 00:13:20.948 10:02:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # x722=() 00:13:20.948 10:02:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # local -ga x722 00:13:20.948 10:02:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # mlx=() 00:13:20.948 10:02:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # local -ga mlx 00:13:20.948 10:02:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:20.948 10:02:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:20.948 10:02:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:20.948 10:02:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:20.948 10:02:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:20.948 10:02:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:20.948 10:02:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:20.948 10:02:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:20.948 10:02:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:20.948 10:02:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:20.948 10:02:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:20.948 10:02:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:20.948 10:02:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:20.948 10:02:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@327 -- # [[ e810 == mlx5 
]] 00:13:20.948 10:02:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:20.948 10:02:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:20.948 10:02:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:20.948 10:02:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:20.948 10:02:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:13:20.948 Found 0000:84:00.0 (0x8086 - 0x159b) 00:13:20.948 10:02:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:20.948 10:02:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:20.948 10:02:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:20.948 10:02:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:20.948 10:02:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:20.948 10:02:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:20.948 10:02:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:13:20.948 Found 0000:84:00.1 (0x8086 - 0x159b) 00:13:20.948 10:02:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:20.948 10:02:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:20.948 10:02:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:20.948 10:02:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:20.948 10:02:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:20.948 10:02:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:20.948 10:02:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:20.948 10:02:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:20.948 10:02:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:20.948 10:02:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:20.948 10:02:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:20.948 10:02:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:20.948 10:02:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:20.948 10:02:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:20.948 10:02:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:20.948 10:02:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:13:20.948 Found net devices under 0000:84:00.0: cvl_0_0 00:13:20.948 10:02:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:20.948 10:02:05 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:20.948 10:02:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:20.948 10:02:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:20.948 10:02:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:20.948 10:02:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:20.948 10:02:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:20.948 10:02:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:20.948 10:02:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:13:20.948 Found net devices under 0000:84:00.1: cvl_0_1 00:13:20.948 10:02:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:20.948 10:02:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:20.948 10:02:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # is_hw=yes 00:13:20.948 10:02:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:20.948 10:02:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:20.948 10:02:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:20.948 10:02:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:20.948 10:02:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:20.948 10:02:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:20.948 10:02:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:20.948 10:02:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:20.948 10:02:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:20.948 10:02:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:20.948 10:02:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:20.948 10:02:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:20.948 10:02:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:20.948 10:02:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:20.949 10:02:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:20.949 10:02:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:20.949 10:02:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:20.949 10:02:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:20.949 10:02:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:20.949 10:02:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:20.949 10:02:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:20.949 10:02:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:20.949 10:02:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:20.949 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:20.949 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.270 ms 00:13:20.949 00:13:20.949 --- 10.0.0.2 ping statistics --- 00:13:20.949 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:20.949 rtt min/avg/max/mdev = 0.270/0.270/0.270/0.000 ms 00:13:20.949 10:02:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:20.949 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:20.949 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.172 ms 00:13:20.949 00:13:20.949 --- 10.0.0.1 ping statistics --- 00:13:20.949 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:20.949 rtt min/avg/max/mdev = 0.172/0.172/0.172/0.000 ms 00:13:20.949 10:02:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:20.949 10:02:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # return 0 00:13:20.949 10:02:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:20.949 10:02:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:20.949 10:02:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:20.949 10:02:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:20.949 10:02:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:20.949 10:02:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:20.949 10:02:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:20.949 10:02:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:13:20.949 10:02:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:20.949 10:02:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:20.949 10:02:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:20.949 10:02:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@481 -- # nvmfpid=399854 00:13:20.949 10:02:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:20.949 10:02:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # waitforlisten 399854 00:13:20.949 10:02:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@831 -- # '[' -z 399854 ']' 00:13:20.949 10:02:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:20.949 10:02:05 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:20.949 10:02:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:20.949 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:20.949 10:02:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:20.949 10:02:05 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:20.949 [2024-07-25 10:02:05.840025] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:13:20.949 [2024-07-25 10:02:05.840128] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:20.949 EAL: No free 2048 kB hugepages reported on node 1 00:13:20.949 [2024-07-25 10:02:05.922542] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:20.949 [2024-07-25 10:02:06.050557] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:20.949 [2024-07-25 10:02:06.050616] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:20.949 [2024-07-25 10:02:06.050644] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:20.949 [2024-07-25 10:02:06.050665] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:20.949 [2024-07-25 10:02:06.050682] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
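What precedes the reactor start-up below is nvmf_tcp_init plus nvmfappstart: one port of the e810 pair (cvl_0_0) is moved into the cvl_0_0_ns_spdk namespace, both ends are addressed and pinged, TCP port 4420 is opened in the firewall, and nvmf_tgt is launched inside the namespace while the harness polls its RPC socket. Condensed into a minimal sketch of that sequence (the real logic lives in nvmf/common.sh and autotest_common.sh's waitforlisten, which also probes the RPC itself; the polling loop and the relative nvmf_tgt path here are simplifications):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                  # target port enters the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                        # initiator side stays in the root ns
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                         # reachability in both directions
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

# nvmfappstart: run the target inside the namespace, then wait for its RPC socket
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.1; done      # simplified waitforlisten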
00:13:20.949 [2024-07-25 10:02:06.051298] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:13:20.949 [2024-07-25 10:02:06.051380] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:13:20.949 [2024-07-25 10:02:06.051444] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3
00:13:20.949 [2024-07-25 10:02:06.051454] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:13:21.207 10:02:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:13:21.207 10:02:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # return 0
00:13:21.207 10:02:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:13:21.207 10:02:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@730 -- # xtrace_disable
00:13:21.207 10:02:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x
00:13:21.207 10:02:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:13:21.207 10:02:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT
00:13:21.207 10:02:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode8819
[2024-07-25 10:02:06.510197] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar
00:13:21.465 10:02:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request:
00:13:21.465 {
00:13:21.465 "nqn": "nqn.2016-06.io.spdk:cnode8819",
00:13:21.465 "tgt_name": "foobar",
00:13:21.465 "method": "nvmf_create_subsystem",
00:13:21.465 "req_id": 1
00:13:21.465 }
00:13:21.465 Got JSON-RPC error response
00:13:21.465 response:
00:13:21.465 {
00:13:21.465 "code": -32603,
00:13:21.465 "message": "Unable to find target foobar"
00:13:21.465 }'
00:13:21.465 10:02:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request:
00:13:21.465 {
00:13:21.465 "nqn": "nqn.2016-06.io.spdk:cnode8819",
00:13:21.465 "tgt_name": "foobar",
00:13:21.465 "method": "nvmf_create_subsystem",
00:13:21.465 "req_id": 1
00:13:21.465 }
00:13:21.465 Got JSON-RPC error response
00:13:21.465 response:
00:13:21.465 {
00:13:21.465 "code": -32603,
00:13:21.465 "message": "Unable to find target foobar"
00:13:21.465 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]]
00:13:21.465 10:02:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f'
00:13:21.465 10:02:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode5517
[2024-07-25 10:02:06.811244] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode5517: invalid serial number 'SPDKISFASTANDAWESOME'
00:13:21.724 10:02:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request:
00:13:21.724 {
00:13:21.724 "nqn": "nqn.2016-06.io.spdk:cnode5517",
00:13:21.724 "serial_number": "SPDKISFASTANDAWESOME\u001f",
00:13:21.724 "method": "nvmf_create_subsystem",
00:13:21.724 "req_id": 1
00:13:21.724 }
00:13:21.724 Got JSON-RPC error response
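Every negative case in invalid.sh follows the pattern visible in this exchange (the serial-number response resumes just below): call the RPC with one malformed argument, capture the JSON-RPC failure, and glob-match the error message. Note the error codes: an unknown target name fails with -32603 (JSON-RPC internal error), while malformed fields such as the serial number fail with -32602 (invalid params). A stand-alone sketch of that pattern, assuming an SPDK checkout at ./spdk rather than the jenkins workspace path, and not the verbatim test code:

# expected-failure check: the RPC must error out with the right message
out=$(./spdk/scripts/rpc.py nvmf_create_subsystem -t foobar \
      nqn.2016-06.io.spdk:cnode8819 2>&1) || true
[[ $out == *"Unable to find target"* ]] || exit 1          # wrong/no error fails the test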
00:13:21.724 response:
00:13:21.724 {
00:13:21.724 "code": -32602,
00:13:21.724 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f"
00:13:21.724 }'
00:13:21.724 10:02:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request:
00:13:21.724 {
00:13:21.724 "nqn": "nqn.2016-06.io.spdk:cnode5517",
00:13:21.724 "serial_number": "SPDKISFASTANDAWESOME\u001f",
00:13:21.724 "method": "nvmf_create_subsystem",
00:13:21.724 "req_id": 1
00:13:21.724 }
00:13:21.724 Got JSON-RPC error response
00:13:21.724 response:
00:13:21.724 {
00:13:21.724 "code": -32602,
00:13:21.724 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f"
00:13:21.724 } == *\I\n\v\a\l\i\d\ \S\N* ]]
00:13:21.724 10:02:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f'
00:13:21.724 10:02:06 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode31296
00:13:22.291 [2024-07-25 10:02:07.337065] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode31296: invalid model number 'SPDK_Controller'
00:13:22.291 10:02:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request:
00:13:22.291 {
00:13:22.291 "nqn": "nqn.2016-06.io.spdk:cnode31296",
00:13:22.291 "model_number": "SPDK_Controller\u001f",
00:13:22.291 "method": "nvmf_create_subsystem",
00:13:22.291 "req_id": 1
00:13:22.291 }
00:13:22.291 Got JSON-RPC error response
00:13:22.291 response:
00:13:22.291 {
00:13:22.291 "code": -32602,
00:13:22.291 "message": "Invalid MN SPDK_Controller\u001f"
00:13:22.291 }'
00:13:22.291 10:02:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request:
00:13:22.291 {
00:13:22.291 "nqn": "nqn.2016-06.io.spdk:cnode31296",
00:13:22.291 "model_number": "SPDK_Controller\u001f",
00:13:22.291 "method": "nvmf_create_subsystem",
00:13:22.291 "req_id": 1
00:13:22.291 }
00:13:22.291 Got JSON-RPC error response
00:13:22.291 response:
00:13:22.291 {
00:13:22.291 "code": -32602,
00:13:22.291 "message": "Invalid MN SPDK_Controller\u001f"
00:13:22.291 } == *\I\n\v\a\l\i\d\ \M\N* ]]
00:13:22.291 10:02:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21
00:13:22.291 10:02:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll
00:13:22.291 10:02:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127')
00:13:22.291 10:02:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars
00:13:22.291 10:02:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string
00:13:22.291 10:02:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 ))
00:13:22.291 10:02:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length ))
00:13:22.291 10:02:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 --
# printf %x 111 00:13:22.291 10:02:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:13:22.291 10:02:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:13:22.291 10:02:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:22.291 10:02:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:22.291 10:02:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:13:22.291 10:02:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:13:22.291 10:02:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:13:22.291 10:02:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:22.291 10:02:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:22.291 10:02:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:13:22.291 10:02:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:13:22.291 10:02:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:13:22.291 10:02:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:22.291 10:02:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:22.291 10:02:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:13:22.291 10:02:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:13:22.291 10:02:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:13:22.291 10:02:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:22.291 10:02:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:22.291 10:02:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:13:22.291 10:02:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:13:22.291 10:02:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:13:22.291 10:02:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:22.291 10:02:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:22.291 10:02:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:13:22.291 10:02:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:13:22.291 10:02:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:13:22.291 10:02:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:22.291 10:02:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:22.291 10:02:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 94 00:13:22.291 10:02:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5e' 00:13:22.291 10:02:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='^' 00:13:22.291 10:02:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:22.291 10:02:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( 
ll < length )) 00:13:22.291 10:02:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:13:22.291 10:02:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:13:22.291 10:02:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 00:13:22.291 10:02:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:22.291 10:02:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:22.291 10:02:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:13:22.291 10:02:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:13:22.291 10:02:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:13:22.291 10:02:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:22.291 10:02:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:22.291 10:02:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:13:22.291 10:02:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:13:22.291 10:02:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:13:22.291 10:02:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:22.291 10:02:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:22.291 10:02:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:13:22.291 10:02:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:13:22.291 10:02:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:13:22.291 10:02:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:22.291 10:02:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:22.291 10:02:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:13:22.291 10:02:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:13:22.291 10:02:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 
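The long run of printf/echo/string+= entries above and below is target/invalid.sh's gen_random_s building a 21-character string one character at a time from codes 32-127; RANDOM was seeded with 0 at target/invalid.sh@16, so the "random" strings are identical on every run. Condensed, the generator amounts to this sketch (reconstructed from the trace, not the verbatim script, which lists the codes literally):

gen_random_s() {
    local length=$1 ll string=
    local chars=($(seq 32 127))        # printable ASCII plus DEL (0x7f)
    for ((ll = 0; ll < length; ll++)); do
        # pick a code, render it with printf/echo -e, append one character
        string+=$(echo -e "\x$(printf %x "${chars[RANDOM % ${#chars[@]}]}")")
    done
    echo "$string"
}
gen_random_s 21                        # 21 bytes: one over the serial-number limit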
00:13:22.291 10:02:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:22.291 10:02:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:22.291 10:02:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:13:22.291 10:02:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:13:22.291 10:02:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:13:22.291 10:02:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:22.291 10:02:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:22.291 10:02:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:13:22.291 10:02:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:13:22.291 10:02:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:13:22.291 10:02:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:22.291 10:02:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:22.291 10:02:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:13:22.291 10:02:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:13:22.291 10:02:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:13:22.291 10:02:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:22.291 10:02:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:22.291 10:02:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:13:22.291 10:02:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:13:22.291 10:02:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:13:22.291 10:02:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:22.291 10:02:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:22.291 10:02:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:13:22.291 10:02:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:13:22.291 10:02:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:13:22.292 10:02:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:22.292 10:02:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:22.292 10:02:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:13:22.292 10:02:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:13:22.292 10:02:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:13:22.292 10:02:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:22.292 10:02:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:22.292 10:02:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:13:22.292 10:02:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e 
'\x54' 00:13:22.292 10:02:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:13:22.292 10:02:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:22.292 10:02:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:22.292 10:02:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:13:22.292 10:02:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:13:22.292 10:02:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:13:22.292 10:02:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:22.292 10:02:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:22.292 10:02:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:13:22.292 10:02:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:13:22.292 10:02:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:13:22.292 10:02:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:22.292 10:02:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:22.292 10:02:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ o == \- ]] 00:13:22.292 10:02:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'oBZV\a^.85V!Nf8o/yTR:' 00:13:22.292 10:02:07 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'oBZV\a^.85V!Nf8o/yTR:' nqn.2016-06.io.spdk:cnode15585 00:13:22.857 [2024-07-25 10:02:08.007266] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode15585: invalid serial number 'oBZV\a^.85V!Nf8o/yTR:' 00:13:23.116 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:13:23.116 { 00:13:23.116 "nqn": "nqn.2016-06.io.spdk:cnode15585", 00:13:23.116 "serial_number": "oBZV\\a^.85V!Nf8o/yTR:", 00:13:23.116 "method": "nvmf_create_subsystem", 00:13:23.116 "req_id": 1 00:13:23.116 } 00:13:23.116 Got JSON-RPC error response 00:13:23.116 response: 00:13:23.116 { 00:13:23.116 "code": -32602, 00:13:23.116 "message": "Invalid SN oBZV\\a^.85V!Nf8o/yTR:" 00:13:23.116 }' 00:13:23.116 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:13:23.116 { 00:13:23.116 "nqn": "nqn.2016-06.io.spdk:cnode15585", 00:13:23.116 "serial_number": "oBZV\\a^.85V!Nf8o/yTR:", 00:13:23.116 "method": "nvmf_create_subsystem", 00:13:23.116 "req_id": 1 00:13:23.116 } 00:13:23.116 Got JSON-RPC error response 00:13:23.116 response: 00:13:23.116 { 00:13:23.116 "code": -32602, 00:13:23.116 "message": "Invalid SN oBZV\\a^.85V!Nf8o/yTR:" 00:13:23.116 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:13:23.116 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:13:23.116 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:13:23.116 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' 
'72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:13:23.116 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:13:23.116 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:13:23.116 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:13:23.116 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:23.116 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:13:23.116 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:13:23.116 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:13:23.116 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:23.116 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:23.116 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:13:23.116 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:13:23.116 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:13:23.116 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:23.116 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:23.116 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:13:23.116 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:13:23.116 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:13:23.116 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:23.116 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:23.116 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:13:23.116 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:13:23.116 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:13:23.116 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:23.116 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:23.116 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:13:23.116 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:13:23.116 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 
00:13:23.116 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:23.116 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:23.116 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:13:23.116 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:13:23.116 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:13:23.116 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:23.116 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:23.116 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:13:23.116 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:13:23.116 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:13:23.116 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:23.116 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:23.116 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:13:23.116 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:13:23.116 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:13:23.116 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:23.116 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:23.116 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:13:23.116 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:13:23.117 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:13:23.117 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:23.117 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:23.117 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:13:23.117 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:13:23.117 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 
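The generated lengths are deliberate: the NVMe Identify Controller data reserves 20 bytes for the serial number and 40 for the model number, so the 21-character string tested above and the 41-character one being built here are each one byte too long, and codes 0x1f and 0x7f additionally break the printable-ASCII requirement (hence the \u001f and \u007f escapes in the error JSON). The two properties being probed, as a sketch using a hypothetical valid_id_field helper and example $sn/$mn variables rather than SPDK's actual C-side check:

valid_id_field() {                     # usage: valid_id_field VALUE MAX_LEN
    local value=$1 max=$2
    (( ${#value} <= max )) || return 1 # 21-char SN / 41-char MN fail here
    [[ $value != *[![:print:]]* ]]     # \x1f and \x7f (DEL) fail here
}
valid_id_field "$sn" 20 || echo "Invalid SN $sn"
valid_id_field "$mn" 40 || echo "Invalid MN $mn"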
00:13:23.117 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:23.117 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:23.117 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:13:23.117 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:13:23.117 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:13:23.117 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:23.117 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:23.117 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:13:23.117 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:13:23.117 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:13:23.117 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:23.117 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:23.117 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:13:23.117 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:13:23.117 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:13:23.117 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:23.117 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:23.117 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:13:23.117 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:13:23.117 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:13:23.117 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:23.117 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:23.117 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:13:23.117 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:13:23.117 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:13:23.117 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:23.117 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:23.117 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:13:23.117 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 00:13:23.117 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:13:23.117 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:23.117 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:23.117 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:13:23.117 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e 
'\x66' 00:13:23.117 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:13:23.117 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:23.117 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:23.117 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:13:23.117 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:13:23.117 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:13:23.117 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:23.117 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:23.117 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:13:23.117 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:13:23.117 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:13:23.117 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:23.117 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:23.117 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:13:23.117 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:13:23.117 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:13:23.117 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:23.117 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:23.117 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:13:23.117 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:13:23.117 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:13:23.117 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:23.117 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:23.117 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:13:23.117 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:13:23.117 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:13:23.117 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:23.117 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:23.117 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:13:23.117 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:13:23.117 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:13:23.117 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:23.117 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:23.117 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf 
%x 119 00:13:23.117 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 00:13:23.117 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:13:23.117 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:23.117 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:23.117 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:13:23.117 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:13:23.117 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:13:23.117 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:23.117 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:23.117 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:13:23.117 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:13:23.117 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:13:23.117 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:23.117 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:23.117 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:13:23.117 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:13:23.117 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:13:23.117 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:23.117 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:23.117 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:13:23.117 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:13:23.117 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:13:23.117 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:23.117 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:23.117 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:13:23.117 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:13:23.117 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:13:23.117 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:23.117 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:23.117 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:13:23.117 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:13:23.117 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:13:23.117 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:23.117 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( 
ll < length )) 00:13:23.117-00:13:23.118 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24-25 -- # (( ll++ )); (( ll < length )); printf %x; echo -e; string+=... -- the remaining loop passes append ')' 'W' $'\177' 'A' 'N' '9' '"' ':' 'e' 'q' 'I' one character at a time (repetitive per-character trace condensed; the finished string is echoed below)
00:13:23.118 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ & == \- ]]
00:13:23.118 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo '&z~s?2'\''*|.L8mr`wfSt2]3jw@8~ep)WAN9":eqI'
00:13:23.118 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d '&z~s?2'\''*|.L8mr`wfSt2]3jw@8~ep)WAN9":eqI' nqn.2016-06.io.spdk:cnode12410
00:13:23.683 [2024-07-25 10:02:08.689534] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode12410: invalid model number '&z~s?2'*|.L8mr`wfSt2]3jw@8~ep)WAN9":eqI'
00:13:23.683 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='request: { "nqn": "nqn.2016-06.io.spdk:cnode12410", "model_number": "&z~s?2'\''*|.L8mr`wfSt2]3jw@\u007f8~ep)W\u007fAN9\":eqI", "method": "nvmf_create_subsystem", "req_id": 1 } Got JSON-RPC error response response: { "code": -32602, "message": "Invalid MN &z~s?2'\''*|.L8mr`wfSt2]3jw@\u007f8~ep)W\u007fAN9\":eqI" }'
00:13:23.683 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid --
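For context, the trace above is target/invalid.sh probing SPDK's model-number validation: it assembles a random 41-character string (one byte over the NVMe spec's 40-byte MN field), passes it to nvmf_create_subsystem -d, and pattern-matches the JSON-RPC error. A minimal standalone reproduction, assuming a running nvmf target and the stock rpc.py location, might look like:

  #!/usr/bin/env bash
  # Negative-test sketch: a 41-byte model number must be rejected with "Invalid MN".
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  mn=$(printf 'A%.0s' {1..41})   # 41 bytes, one over the 40-byte MN limit
  if out=$("$rpc" nvmf_create_subsystem -d "$mn" nqn.2016-06.io.spdk:cnode12410 2>&1); then
      echo "unexpectedly accepted an oversized model number" >&2
      exit 1
  fi
  [[ $out == *"Invalid MN"* ]] && echo "model-number validation OK"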
target/invalid.sh@59 -- # [[ request: 00:13:23.683 { 00:13:23.683 "nqn": "nqn.2016-06.io.spdk:cnode12410", 00:13:23.683 "model_number": "&z~s?2'*|.L8mr`wfSt2]3jw@\u007f8~ep)W\u007fAN9\":eqI", 00:13:23.683 "method": "nvmf_create_subsystem", 00:13:23.683 "req_id": 1 00:13:23.683 } 00:13:23.683 Got JSON-RPC error response 00:13:23.683 response: 00:13:23.683 { 00:13:23.683 "code": -32602, 00:13:23.683 "message": "Invalid MN &z~s?2'*|.L8mr`wfSt2]3jw@\u007f8~ep)W\u007fAN9\":eqI" 00:13:23.683 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:13:23.683 10:02:08 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:13:24.248 [2024-07-25 10:02:09.231458] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:24.248 10:02:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:13:24.506 10:02:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:13:24.506 10:02:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:13:24.506 10:02:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:13:24.506 10:02:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:13:24.506 10:02:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:13:25.070 [2024-07-25 10:02:10.050129] nvmf_rpc.c: 809:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:13:25.070 10:02:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:13:25.070 { 00:13:25.070 "nqn": "nqn.2016-06.io.spdk:cnode", 00:13:25.070 "listen_address": { 00:13:25.070 "trtype": "tcp", 00:13:25.070 "traddr": "", 00:13:25.070 "trsvcid": "4421" 00:13:25.071 }, 00:13:25.071 "method": "nvmf_subsystem_remove_listener", 00:13:25.071 "req_id": 1 00:13:25.071 } 00:13:25.071 Got JSON-RPC error response 00:13:25.071 response: 00:13:25.071 { 00:13:25.071 "code": -32602, 00:13:25.071 "message": "Invalid parameters" 00:13:25.071 }' 00:13:25.071 10:02:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:13:25.071 { 00:13:25.071 "nqn": "nqn.2016-06.io.spdk:cnode", 00:13:25.071 "listen_address": { 00:13:25.071 "trtype": "tcp", 00:13:25.071 "traddr": "", 00:13:25.071 "trsvcid": "4421" 00:13:25.071 }, 00:13:25.071 "method": "nvmf_subsystem_remove_listener", 00:13:25.071 "req_id": 1 00:13:25.071 } 00:13:25.071 Got JSON-RPC error response 00:13:25.071 response: 00:13:25.071 { 00:13:25.071 "code": -32602, 00:13:25.071 "message": "Invalid parameters" 00:13:25.071 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:13:25.071 10:02:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode27671 -i 0 00:13:25.328 [2024-07-25 10:02:10.391205] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode27671: invalid cntlid range [0-65519] 00:13:25.328 10:02:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:13:25.328 { 00:13:25.328 "nqn": "nqn.2016-06.io.spdk:cnode27671", 
00:13:25.328 "min_cntlid": 0, 00:13:25.328 "method": "nvmf_create_subsystem", 00:13:25.328 "req_id": 1 00:13:25.328 } 00:13:25.328 Got JSON-RPC error response 00:13:25.328 response: 00:13:25.328 { 00:13:25.328 "code": -32602, 00:13:25.328 "message": "Invalid cntlid range [0-65519]" 00:13:25.328 }' 00:13:25.328 10:02:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:13:25.328 { 00:13:25.328 "nqn": "nqn.2016-06.io.spdk:cnode27671", 00:13:25.328 "min_cntlid": 0, 00:13:25.328 "method": "nvmf_create_subsystem", 00:13:25.328 "req_id": 1 00:13:25.328 } 00:13:25.328 Got JSON-RPC error response 00:13:25.328 response: 00:13:25.328 { 00:13:25.328 "code": -32602, 00:13:25.328 "message": "Invalid cntlid range [0-65519]" 00:13:25.328 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:25.328 10:02:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode13592 -i 65520 00:13:25.585 [2024-07-25 10:02:10.740350] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode13592: invalid cntlid range [65520-65519] 00:13:25.843 10:02:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:13:25.843 { 00:13:25.843 "nqn": "nqn.2016-06.io.spdk:cnode13592", 00:13:25.843 "min_cntlid": 65520, 00:13:25.843 "method": "nvmf_create_subsystem", 00:13:25.843 "req_id": 1 00:13:25.843 } 00:13:25.843 Got JSON-RPC error response 00:13:25.843 response: 00:13:25.843 { 00:13:25.843 "code": -32602, 00:13:25.843 "message": "Invalid cntlid range [65520-65519]" 00:13:25.843 }' 00:13:25.843 10:02:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:13:25.843 { 00:13:25.843 "nqn": "nqn.2016-06.io.spdk:cnode13592", 00:13:25.843 "min_cntlid": 65520, 00:13:25.843 "method": "nvmf_create_subsystem", 00:13:25.843 "req_id": 1 00:13:25.843 } 00:13:25.843 Got JSON-RPC error response 00:13:25.843 response: 00:13:25.843 { 00:13:25.843 "code": -32602, 00:13:25.843 "message": "Invalid cntlid range [65520-65519]" 00:13:25.843 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:25.843 10:02:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10528 -I 0 00:13:26.100 [2024-07-25 10:02:11.225980] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode10528: invalid cntlid range [1-0] 00:13:26.100 10:02:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:13:26.100 { 00:13:26.100 "nqn": "nqn.2016-06.io.spdk:cnode10528", 00:13:26.100 "max_cntlid": 0, 00:13:26.100 "method": "nvmf_create_subsystem", 00:13:26.100 "req_id": 1 00:13:26.100 } 00:13:26.100 Got JSON-RPC error response 00:13:26.100 response: 00:13:26.100 { 00:13:26.100 "code": -32602, 00:13:26.100 "message": "Invalid cntlid range [1-0]" 00:13:26.100 }' 00:13:26.100 10:02:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:13:26.100 { 00:13:26.100 "nqn": "nqn.2016-06.io.spdk:cnode10528", 00:13:26.100 "max_cntlid": 0, 00:13:26.100 "method": "nvmf_create_subsystem", 00:13:26.100 "req_id": 1 00:13:26.100 } 00:13:26.100 Got JSON-RPC error response 00:13:26.100 response: 00:13:26.100 { 00:13:26.100 "code": -32602, 00:13:26.100 "message": "Invalid cntlid range [1-0]" 00:13:26.100 } == *\I\n\v\a\l\i\d\ 
\c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:26.100 10:02:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode27236 -I 65520 00:13:26.668 [2024-07-25 10:02:11.583145] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode27236: invalid cntlid range [1-65520] 00:13:26.668 10:02:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:13:26.668 { 00:13:26.668 "nqn": "nqn.2016-06.io.spdk:cnode27236", 00:13:26.668 "max_cntlid": 65520, 00:13:26.668 "method": "nvmf_create_subsystem", 00:13:26.668 "req_id": 1 00:13:26.668 } 00:13:26.668 Got JSON-RPC error response 00:13:26.668 response: 00:13:26.668 { 00:13:26.668 "code": -32602, 00:13:26.668 "message": "Invalid cntlid range [1-65520]" 00:13:26.668 }' 00:13:26.668 10:02:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:13:26.668 { 00:13:26.668 "nqn": "nqn.2016-06.io.spdk:cnode27236", 00:13:26.668 "max_cntlid": 65520, 00:13:26.668 "method": "nvmf_create_subsystem", 00:13:26.668 "req_id": 1 00:13:26.668 } 00:13:26.668 Got JSON-RPC error response 00:13:26.668 response: 00:13:26.668 { 00:13:26.668 "code": -32602, 00:13:26.668 "message": "Invalid cntlid range [1-65520]" 00:13:26.668 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:26.668 10:02:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode17746 -i 6 -I 5 00:13:26.957 [2024-07-25 10:02:11.928346] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode17746: invalid cntlid range [6-5] 00:13:26.957 10:02:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:13:26.957 { 00:13:26.957 "nqn": "nqn.2016-06.io.spdk:cnode17746", 00:13:26.958 "min_cntlid": 6, 00:13:26.958 "max_cntlid": 5, 00:13:26.958 "method": "nvmf_create_subsystem", 00:13:26.958 "req_id": 1 00:13:26.958 } 00:13:26.958 Got JSON-RPC error response 00:13:26.958 response: 00:13:26.958 { 00:13:26.958 "code": -32602, 00:13:26.958 "message": "Invalid cntlid range [6-5]" 00:13:26.958 }' 00:13:26.958 10:02:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:13:26.958 { 00:13:26.958 "nqn": "nqn.2016-06.io.spdk:cnode17746", 00:13:26.958 "min_cntlid": 6, 00:13:26.958 "max_cntlid": 5, 00:13:26.958 "method": "nvmf_create_subsystem", 00:13:26.958 "req_id": 1 00:13:26.958 } 00:13:26.958 Got JSON-RPC error response 00:13:26.958 response: 00:13:26.958 { 00:13:26.958 "code": -32602, 00:13:26.958 "message": "Invalid cntlid range [6-5]" 00:13:26.958 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:26.958 10:02:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:13:26.958 10:02:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:13:26.958 { 00:13:26.958 "name": "foobar", 00:13:26.958 "method": "nvmf_delete_target", 00:13:26.958 "req_id": 1 00:13:26.958 } 00:13:26.958 Got JSON-RPC error response 00:13:26.958 response: 00:13:26.958 { 00:13:26.958 "code": -32602, 00:13:26.958 "message": "The specified target doesn'\''t exist, cannot delete it." 
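The cntlid cases above walk both edges of the valid controller-ID window: min_cntlid must be at least 1, max_cntlid at most 65519 (0xFFEF), and min must not exceed max, so [0-65519], [65520-65519], [1-0], [1-65520] and [6-5] are all rejected. A table-driven sketch of the same sweep (rpc.py path and a running target assumed):

  # Every (min,max) pair below must fail with "Invalid cntlid range".
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  for pair in '0 65519' '65520 65519' '1 0' '1 65520' '6 5'; do
      set -- $pair   # split into min=$1 max=$2
      if out=$("$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode99 -i "$1" -I "$2" 2>&1); then
          echo "range [$1-$2] unexpectedly accepted" >&2
          exit 1
      fi
      [[ $out == *"Invalid cntlid range"* ]] || exit 1
  done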
00:13:26.958 }' 00:13:26.958 10:02:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:13:26.958 { 00:13:26.958 "name": "foobar", 00:13:26.958 "method": "nvmf_delete_target", 00:13:26.958 "req_id": 1 00:13:26.958 } 00:13:26.958 Got JSON-RPC error response 00:13:26.958 response: 00:13:26.958 { 00:13:26.958 "code": -32602, 00:13:26.958 "message": "The specified target doesn't exist, cannot delete it." 00:13:26.958 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:13:26.958 10:02:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:13:26.958 10:02:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:13:26.958 10:02:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:26.958 10:02:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # sync 00:13:27.216 10:02:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:27.216 10:02:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@120 -- # set +e 00:13:27.216 10:02:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:27.216 10:02:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:27.216 rmmod nvme_tcp 00:13:27.216 rmmod nvme_fabrics 00:13:27.216 rmmod nvme_keyring 00:13:27.216 10:02:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:27.216 10:02:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@124 -- # set -e 00:13:27.216 10:02:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # return 0 00:13:27.216 10:02:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@489 -- # '[' -n 399854 ']' 00:13:27.216 10:02:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@490 -- # killprocess 399854 00:13:27.216 10:02:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@950 -- # '[' -z 399854 ']' 00:13:27.216 10:02:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@954 -- # kill -0 399854 00:13:27.216 10:02:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@955 -- # uname 00:13:27.216 10:02:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:27.216 10:02:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 399854 00:13:27.216 10:02:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:27.216 10:02:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:27.216 10:02:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@968 -- # echo 'killing process with pid 399854' 00:13:27.216 killing process with pid 399854 00:13:27.216 10:02:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@969 -- # kill 399854 00:13:27.216 10:02:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@974 -- # wait 399854 00:13:27.475 10:02:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:27.475 10:02:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:27.475 10:02:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid 
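A note on the backslash-riddled patterns such as *\I\n\v\a\l\i\d\ \M\N* above: in the script these are ordinary quoted globs, and bash's xtrace re-prints every quoted character with a backslash escape when it echoes a [[ ]] test, which is what makes the trace look mangled. The delete-target assertion just logged, written out in source form (a sketch; the suite drives this call through test/nvmf/target/multitarget_rpc.py):

  # Deleting a target that was never created must fail with a descriptive error.
  out=$(./test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 2>&1)
  [[ $out == *"The specified target doesn't exist, cannot delete it."* ]]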
-- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:27.475 10:02:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:27.475 10:02:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:27.475 10:02:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:27.475 10:02:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:27.475 10:02:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:29.375 10:02:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:29.375 00:13:29.375 real 0m11.489s 00:13:29.375 user 0m31.298s 00:13:29.375 sys 0m3.163s 00:13:29.375 10:02:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:29.375 10:02:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:29.375 ************************************ 00:13:29.375 END TEST nvmf_invalid 00:13:29.375 ************************************ 00:13:29.634 10:02:14 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:13:29.634 10:02:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:29.634 10:02:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:29.634 10:02:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:29.634 ************************************ 00:13:29.634 START TEST nvmf_connect_stress 00:13:29.634 ************************************ 00:13:29.634 10:02:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:13:29.634 * Looking for test storage... 
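One detail worth keeping from the nvmftestfini teardown that closed out nvmf_invalid above: nvmfcleanup retries the module unload, because the kernel initiator side can hold references for a moment after the last disconnect (hence the rmmod nvme_tcp / nvme_fabrics / nvme_keyring lines). Roughly, with the retry body being an assumption:

  # Unload the kernel NVMe/TCP initiator modules, retrying while references drain.
  sync
  set +e
  for i in {1..20}; do
      modprobe -v -r nvme-tcp && break
      sleep 1
  done
  modprobe -v -r nvme-fabrics
  set -e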
00:13:29.634 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:29.634 10:02:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:29.634 10:02:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:13:29.634 10:02:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:29.634 10:02:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:29.634 10:02:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:29.634 10:02:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:29.634 10:02:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:29.634 10:02:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:29.634 10:02:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:29.634 10:02:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:29.634 10:02:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:29.634 10:02:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:29.634 10:02:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:13:29.634 10:02:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:13:29.634 10:02:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:29.634 10:02:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:29.634 10:02:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:29.634 10:02:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:29.634 10:02:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:29.634 10:02:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:29.634 10:02:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:29.634 10:02:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:29.634 10:02:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # 
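The NVME_HOSTNQN / NVME_HOSTID pair initialized above comes straight from nvme-cli: gen-hostnqn emits nqn.2014-08.org.nvmexpress:uuid:<uuid>, and the host ID is simply that uuid suffix. A self-contained sketch of the derivation (the exact parameter expansion is an assumption, not necessarily what common.sh uses):

  # Generate a host NQN and derive the matching host ID from its uuid suffix.
  NVME_HOSTNQN=$(nvme gen-hostnqn)      # e.g. nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-...
  NVME_HOSTID=${NVME_HOSTNQN##*uuid:}   # strip everything through "uuid:"
  NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")
  echo "${NVME_HOST[@]}"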
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[the same three toolchain dirs repeated by earlier sourcings]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:29.634 10:02:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3-6 -- # PATH=... ; export PATH ; echo $PATH (every sourcing of paths/export.sh prepends the go/protoc/golangci dirs again; the near-identical multi-kilobyte PATH dumps are condensed here)
00:13:29.634 10:02:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@47 -- # : 0
00:13:29.634 10:02:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:13:29.634 10:02:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:13:29.634 10:02:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:13:29.634 10:02:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:13:29.634 10:02:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:13:29.634 10:02:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33
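The repeated PATH growth condensed above is a side effect of paths/export.sh being re-sourced by each nested test script. It is harmless, but a dedup pass is a cheap fix if the noise ever matters (a sketch, not part of the harness):

  # Keep the first occurrence of each PATH entry, preserving order.
  dedup_path() {
      local IFS=: seen=: dir out=
      for dir in $PATH; do
          [[ $seen == *":$dir:"* ]] && continue
          seen+="$dir:"
          out+="${out:+:}$dir"
      done
      printf '%s\n' "$out"
  }
  PATH=$(dedup_path)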
-- # '[' -n '' ']' 00:13:29.634 10:02:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:29.634 10:02:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:29.634 10:02:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:13:29.634 10:02:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:29.634 10:02:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:29.634 10:02:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:29.634 10:02:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:29.634 10:02:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:29.634 10:02:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:29.634 10:02:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:29.634 10:02:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:29.634 10:02:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:29.634 10:02:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:29.634 10:02:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:13:29.634 10:02:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:32.165 10:02:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:32.165 10:02:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:13:32.165 10:02:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:32.165 10:02:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:32.165 10:02:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:32.165 10:02:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:32.165 10:02:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:32.165 10:02:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@295 -- # net_devs=() 00:13:32.165 10:02:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:32.165 10:02:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@296 -- # e810=() 00:13:32.165 10:02:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@296 -- # local -ga e810 00:13:32.165 10:02:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # x722=() 00:13:32.165 10:02:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # local -ga x722 00:13:32.165 10:02:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # mlx=() 00:13:32.165 10:02:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:13:32.165 10:02:17 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:32.165 10:02:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:32.165 10:02:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:32.165 10:02:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:32.165 10:02:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:32.165 10:02:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:32.165 10:02:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:32.165 10:02:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:32.165 10:02:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:32.165 10:02:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:32.165 10:02:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:32.165 10:02:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:32.165 10:02:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:32.165 10:02:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:32.165 10:02:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:32.165 10:02:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:32.165 10:02:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:32.165 10:02:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:32.165 10:02:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:13:32.165 Found 0000:84:00.0 (0x8086 - 0x159b) 00:13:32.165 10:02:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:32.165 10:02:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:32.165 10:02:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:32.165 10:02:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:32.165 10:02:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:32.165 10:02:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:32.165 10:02:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:13:32.165 Found 0000:84:00.1 (0x8086 - 0x159b) 00:13:32.165 10:02:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 
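The e810/x722/mlx arrays built above are vendor:device ID filters; 0x8086:0x159b is an Intel E810-family part handled by the ice driver, which is why both 0000:84:00.x functions match. The same lookup done by hand (assuming pciutils is installed):

  # List E810 (vendor 8086, device 159b) functions with full PCI domain addresses.
  lspci -D -d 8086:159b
  # expected shape:
  #   0000:84:00.0 Ethernet controller: Intel Corporation Ethernet Controller E810 ...
  #   0000:84:00.1 Ethernet controller: Intel Corporation Ethernet Controller E810 ...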
00:13:32.165 10:02:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:32.165 10:02:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:32.165 10:02:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:32.165 10:02:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:32.165 10:02:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:32.165 10:02:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:32.165 10:02:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:32.165 10:02:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:32.165 10:02:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:32.165 10:02:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:32.165 10:02:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:32.166 10:02:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:32.166 10:02:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:32.166 10:02:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:32.166 10:02:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:13:32.166 Found net devices under 0000:84:00.0: cvl_0_0 00:13:32.166 10:02:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:32.166 10:02:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:32.166 10:02:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:32.166 10:02:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:32.166 10:02:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:32.166 10:02:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:32.166 10:02:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:32.166 10:02:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:32.166 10:02:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:13:32.166 Found net devices under 0000:84:00.1: cvl_0_1 00:13:32.166 10:02:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:32.166 10:02:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:32.166 10:02:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:13:32.166 10:02:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ yes == 
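Each matched PCI function is then resolved to its kernel netdev through sysfs, which is exactly what the pci_net_devs glob in the trace does. Standalone:

  # Map PCI functions to their netdev names via sysfs (mirrors the trace's glob).
  for pci in 0000:84:00.0 0000:84:00.1; do
      for net in "/sys/bus/pci/devices/$pci/net/"*; do
          echo "$pci -> ${net##*/}"   # prints e.g. 0000:84:00.0 -> cvl_0_0
      done
  done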
yes ]] 00:13:32.166 10:02:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:32.166 10:02:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:32.166 10:02:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:32.166 10:02:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:32.166 10:02:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:32.166 10:02:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:32.166 10:02:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:32.166 10:02:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:32.166 10:02:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:32.166 10:02:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:32.166 10:02:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:32.166 10:02:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:32.166 10:02:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:32.166 10:02:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:32.166 10:02:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:32.166 10:02:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:32.166 10:02:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:32.166 10:02:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:32.166 10:02:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:32.166 10:02:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:32.166 10:02:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:32.166 10:02:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:32.166 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:32.166 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.111 ms 00:13:32.166 00:13:32.166 --- 10.0.0.2 ping statistics --- 00:13:32.166 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:32.166 rtt min/avg/max/mdev = 0.111/0.111/0.111/0.000 ms 00:13:32.166 10:02:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:32.166 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
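The nvmf_tcp_init block above builds the whole test topology out of one dual-port NIC: the target port (cvl_0_0, 10.0.0.2) is moved into a network namespace so traffic between the two ports actually leaves the host stack instead of being short-circuited. The same setup, collected from the trace into one runnable block:

  # Target side lives in its own netns; the initiator side stays in the root ns.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                   # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator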
00:13:32.166 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.092 ms 00:13:32.166 00:13:32.166 --- 10.0.0.1 ping statistics --- 00:13:32.166 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:32.166 rtt min/avg/max/mdev = 0.092/0.092/0.092/0.000 ms 00:13:32.166 10:02:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:32.166 10:02:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # return 0 00:13:32.166 10:02:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:32.166 10:02:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:32.166 10:02:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:32.166 10:02:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:32.166 10:02:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:32.166 10:02:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:32.166 10:02:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:32.166 10:02:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:13:32.166 10:02:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:32.166 10:02:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:32.166 10:02:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:32.166 10:02:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@481 -- # nvmfpid=402769 00:13:32.166 10:02:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:13:32.166 10:02:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # waitforlisten 402769 00:13:32.166 10:02:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@831 -- # '[' -z 402769 ']' 00:13:32.166 10:02:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:32.166 10:02:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:32.166 10:02:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:32.166 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:32.166 10:02:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:32.166 10:02:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:32.424 [2024-07-25 10:02:17.373323] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
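With the namespace in place, the target is started inside it and then provisioned over JSON-RPC; the RPC Unix socket lives on the shared filesystem, so rpc.py runs from the root namespace. A condensed sketch of the launch-and-provision sequence the trace performs next (waitforlisten is the harness helper that polls the RPC socket, assumed available):

  spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  ip netns exec cvl_0_0_ns_spdk "$spdk/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xE &
  nvmfpid=$!
  waitforlisten "$nvmfpid"
  "$spdk/scripts/rpc.py" nvmf_create_transport -t tcp -o -u 8192
  "$spdk/scripts/rpc.py" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  "$spdk/scripts/rpc.py" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  "$spdk/scripts/rpc.py" bdev_null_create NULL1 1000 512   # 1000 MB null bdev, 512-byte blocks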
00:13:32.425 [2024-07-25 10:02:17.373424] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:32.425 EAL: No free 2048 kB hugepages reported on node 1 00:13:32.425 [2024-07-25 10:02:17.452692] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:32.425 [2024-07-25 10:02:17.578564] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:32.425 [2024-07-25 10:02:17.578625] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:32.425 [2024-07-25 10:02:17.578641] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:32.425 [2024-07-25 10:02:17.578655] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:32.425 [2024-07-25 10:02:17.578667] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:32.425 [2024-07-25 10:02:17.578734] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:32.425 [2024-07-25 10:02:17.578795] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:32.425 [2024-07-25 10:02:17.578801] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:32.683 10:02:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:32.683 10:02:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # return 0 00:13:32.683 10:02:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:32.683 10:02:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:32.683 10:02:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:32.683 10:02:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:32.683 10:02:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:32.683 10:02:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.683 10:02:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:32.683 [2024-07-25 10:02:17.735680] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:32.683 10:02:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.683 10:02:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:32.683 10:02:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.683 10:02:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:32.683 10:02:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:32.683 10:02:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 
10.0.0.2 -s 4420 00:13:32.683 10:02:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable
00:13:32.683 [2024-07-25 10:02:17.764553] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:13:32.683 10:02:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:32.683 10:02:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512
00:13:32.683 NULL1
00:13:32.683 10:02:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:13:32.683 10:02:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=402908
00:13:32.683 10:02:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt
00:13:32.683 10:02:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt
00:13:32.683 10:02:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10
00:13:32.683-00:13:32.684 10:02:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27-28 -- # seq 1 20; for i in $(seq 1 20); cat -- twenty identical for/cat passes queue the stress RPC stanzas into rpc.txt (repetitive trace condensed)
00:13:32.684 EAL: No free 2048 kB hugepages reported on node 1
00:13:32.684 10:02:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 402908
00:13:32.684 10:02:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:13:32.684 10:02:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:32.684 10:02:17
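From here to the end of the run the trace is the supervision loop: connect_stress was backgrounded with a 10-second runtime (-t 10), and the harness keeps confirming it is alive with kill -0 402908 (signal 0 delivers nothing; it only tests that the PID still exists) while replaying RPCs against the target. Reconstructed as a sketch (the exact loop shape in connect_stress.sh is an assumption):

  # Stress the subsystem from the initiator side for 10 s ...
  "$spdk/test/nvme/connect_stress/connect_stress" -c 0x1 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
      -t 10 &
  PERF_PID=$!
  # ... and keep poking the target with the queued RPCs while it runs.
  while kill -0 "$PERF_PID" 2>/dev/null; do
      rpc_cmd < "$rpcs"   # rpcs points at rpc.txt, filled by the seq-1-20 loop above
  done
  wait "$PERF_PID"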
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:13:33.249-00:13:38.545 10:02:18-10:02:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- supervision loop: common/autotest_common.sh@589 -- # [[ 0 == 0 ]]; target/connect_stress.sh@34 -- # kill -0 402908; target/connect_stress.sh@35 -- # rpc_cmd; common/autotest_common.sh@561 -- # xtrace_disable; common/autotest_common.sh@10 -- # set +x -- eighteen near-identical passes, roughly two per second, condensed here
00:13:38.545 10:02:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.545 10:02:23
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:38.801 10:02:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:38.801 10:02:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 402908 00:13:38.801 10:02:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:38.801 10:02:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:38.801 10:02:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:39.365 10:02:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.365 10:02:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 402908 00:13:39.365 10:02:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:39.365 10:02:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.365 10:02:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:39.621 10:02:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.621 10:02:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 402908 00:13:39.621 10:02:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:39.621 10:02:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.621 10:02:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:39.878 10:02:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:39.878 10:02:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 402908 00:13:39.878 10:02:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:39.878 10:02:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:39.878 10:02:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:40.136 10:02:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.136 10:02:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 402908 00:13:40.136 10:02:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:40.136 10:02:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.136 10:02:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:40.393 10:02:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.393 10:02:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 402908 00:13:40.393 10:02:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:40.393 10:02:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.393 10:02:25 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:40.957 10:02:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:40.957 10:02:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 402908 00:13:40.957 10:02:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:40.957 10:02:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:40.957 10:02:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:41.218 10:02:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.218 10:02:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 402908 00:13:41.218 10:02:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:41.218 10:02:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.218 10:02:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:41.521 10:02:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.521 10:02:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 402908 00:13:41.521 10:02:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:41.521 10:02:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.521 10:02:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:41.778 10:02:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:41.778 10:02:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 402908 00:13:41.778 10:02:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:41.778 10:02:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:41.778 10:02:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:42.036 10:02:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:42.036 10:02:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 402908 00:13:42.036 10:02:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:42.036 10:02:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:42.036 10:02:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:42.599 10:02:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:42.599 10:02:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 402908 00:13:42.599 10:02:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:42.599 10:02:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:42.599 10:02:27 
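The iterations above are the heart of connect_stress.sh: line 34 uses kill -0 to check that the background stress binary (pid 402908 in this run) is still alive, and line 35 replays a batch of RPCs against the target for as long as it runs; once the binary exits, kill -0 fails with "No such process" (visible below) and the script removes rpc.txt. A minimal sketch of that pattern, assuming the batch built by the seq 1 20 loop earlier in the trace; the heredoc body and the STRESS_PID name are illustrative stand-ins, not the script's literal text:

for i in $(seq 1 20); do                        # connect_stress.sh@27
cat << EOF >> rpc.txt                           # connect_stress.sh@28: queue one RPC batch per pass
bdev_null_create NULL$i 1000 512
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL$i
EOF
done
while kill -0 "$STRESS_PID"; do                 # connect_stress.sh@34: stress app still running?
    rpc_cmd < rpc.txt                           # connect_stress.sh@35: replay the batch
done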
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:42.855 10:02:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:42.855 10:02:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 402908 00:13:42.855 10:02:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:42.855 10:02:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:42.855 10:02:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:42.855 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:43.111 10:02:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:43.111 10:02:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 402908 00:13:43.111 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (402908) - No such process 00:13:43.111 10:02:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 402908 00:13:43.111 10:02:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:43.111 10:02:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:13:43.111 10:02:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:13:43.111 10:02:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:43.111 10:02:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # sync 00:13:43.111 10:02:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:43.111 10:02:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@120 -- # set +e 00:13:43.111 10:02:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:43.111 10:02:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:43.111 rmmod nvme_tcp 00:13:43.111 rmmod nvme_fabrics 00:13:43.111 rmmod nvme_keyring 00:13:43.111 10:02:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:43.111 10:02:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set -e 00:13:43.111 10:02:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # return 0 00:13:43.111 10:02:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@489 -- # '[' -n 402769 ']' 00:13:43.111 10:02:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@490 -- # killprocess 402769 00:13:43.111 10:02:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@950 -- # '[' -z 402769 ']' 00:13:43.111 10:02:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # kill -0 402769 00:13:43.111 10:02:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@955 -- # uname 00:13:43.111 10:02:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:43.111 10:02:28 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 402769 00:13:43.111 10:02:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:13:43.111 10:02:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:13:43.111 10:02:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@968 -- # echo 'killing process with pid 402769' 00:13:43.111 killing process with pid 402769 00:13:43.111 10:02:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@969 -- # kill 402769 00:13:43.111 10:02:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@974 -- # wait 402769 00:13:43.369 10:02:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:43.369 10:02:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:43.369 10:02:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:43.369 10:02:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:43.369 10:02:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:43.369 10:02:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:43.369 10:02:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:43.369 10:02:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:45.899 10:02:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:45.899 00:13:45.899 real 0m15.922s 00:13:45.899 user 0m38.395s 00:13:45.899 sys 0m6.682s 00:13:45.899 10:02:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:45.899 10:02:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:45.899 ************************************ 00:13:45.899 END TEST nvmf_connect_stress 00:13:45.899 ************************************ 00:13:45.899 10:02:30 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:13:45.899 10:02:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:45.899 10:02:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:45.899 10:02:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:45.899 ************************************ 00:13:45.899 START TEST nvmf_fused_ordering 00:13:45.899 ************************************ 00:13:45.899 10:02:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:13:45.899 * Looking for test storage... 
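The real/user/sys triple and the starred banners above come from the run_test wrapper, which times each test script and brackets it with START/END markers before handing control to the next test. Roughly, as a sketch simplified from its use in autotest_common.sh (the banner text matches the trace; the function body here is illustrative):

run_test() {
    local test_name=$1
    shift
    echo "************************************"
    echo "START TEST $test_name"
    echo "************************************"
    time "$@"                                   # run the test script with its arguments
    echo "************************************"
    echo "END TEST $test_name"
    echo "************************************"
}

run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp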
00:13:45.899 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:45.899 10:02:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:45.899 10:02:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:13:45.899 10:02:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:45.899 10:02:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:45.899 10:02:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:45.899 10:02:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:45.899 10:02:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:45.899 10:02:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:45.899 10:02:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:45.899 10:02:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:45.899 10:02:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:45.899 10:02:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:45.899 10:02:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:13:45.899 10:02:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:13:45.899 10:02:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:45.899 10:02:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:45.899 10:02:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:45.899 10:02:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:45.899 10:02:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:45.899 10:02:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:45.899 10:02:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:45.899 10:02:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:45.899 10:02:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:45.899 10:02:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:45.900 10:02:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:45.900 10:02:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:13:45.900 10:02:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:45.900 10:02:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@47 -- # : 0 00:13:45.900 10:02:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:45.900 10:02:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:45.900 10:02:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:45.900 10:02:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:45.900 10:02:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:45.900 10:02:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 
-- # '[' -n '' ']' 00:13:45.900 10:02:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:45.900 10:02:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:45.900 10:02:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:13:45.900 10:02:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:45.900 10:02:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:45.900 10:02:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:45.900 10:02:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:45.900 10:02:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:45.900 10:02:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:45.900 10:02:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:45.900 10:02:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:45.900 10:02:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:45.900 10:02:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:45.900 10:02:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@285 -- # xtrace_disable 00:13:45.900 10:02:30 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:48.431 10:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:48.431 10:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # pci_devs=() 00:13:48.431 10:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:48.431 10:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:48.431 10:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:48.431 10:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:48.431 10:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:48.431 10:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@295 -- # net_devs=() 00:13:48.431 10:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:48.431 10:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@296 -- # e810=() 00:13:48.431 10:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@296 -- # local -ga e810 00:13:48.431 10:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # x722=() 00:13:48.431 10:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # local -ga x722 00:13:48.431 10:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # mlx=() 00:13:48.431 10:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # local -ga mlx 00:13:48.431 10:02:33 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:48.431 10:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:48.431 10:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:48.431 10:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:48.431 10:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:48.431 10:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:48.431 10:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:48.431 10:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:48.431 10:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:48.431 10:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:48.431 10:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:48.431 10:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:48.431 10:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:48.431 10:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:48.431 10:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:48.431 10:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:48.431 10:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:48.431 10:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:48.431 10:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:13:48.431 Found 0000:84:00.0 (0x8086 - 0x159b) 00:13:48.431 10:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:48.431 10:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:48.431 10:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:48.431 10:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:48.431 10:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:48.431 10:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:48.431 10:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:13:48.431 Found 0000:84:00.1 (0x8086 - 0x159b) 00:13:48.431 10:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 
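The gather_supported_nvmf_pci_devs trace above boils down to: keep per-family lists of PCI device IDs (Intel E810/X722, Mellanox ConnectX) and walk the machine's PCI bus, announcing each NIC that matches, hence "Found 0000:84:00.0 (0x8086 - 0x159b)" for the two E810 ports bound to the ice driver. The same check done directly against sysfs, as an illustrative stand-in for the script's prebuilt pci_bus_cache:

intel=0x8086                                    # vendor ID matched in the trace
e810=(0x1592 0x159b)                            # E810 device IDs from the arrays above
for dev in /sys/bus/pci/devices/*; do
    vendor=$(<"$dev/vendor") device=$(<"$dev/device")
    for id in "${e810[@]}"; do
        if [[ $vendor == "$intel" && $device == "$id" ]]; then
            echo "Found ${dev##*/} ($vendor - $device)"
        fi
    done
done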
00:13:48.431 10:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:48.431 10:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:48.431 10:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:48.431 10:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:48.431 10:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:48.431 10:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:48.431 10:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:48.431 10:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:48.431 10:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:48.431 10:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:48.431 10:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:48.431 10:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:48.431 10:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:48.431 10:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:48.431 10:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:13:48.431 Found net devices under 0000:84:00.0: cvl_0_0 00:13:48.431 10:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:48.431 10:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:48.431 10:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:48.431 10:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:48.431 10:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:48.431 10:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:48.431 10:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:48.431 10:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:48.431 10:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:13:48.431 Found net devices under 0000:84:00.1: cvl_0_1 00:13:48.431 10:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:48.431 10:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:48.431 10:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # is_hw=yes 00:13:48.431 10:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ yes == 
yes ]] 00:13:48.431 10:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:48.431 10:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:48.431 10:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:48.431 10:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:48.431 10:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:48.431 10:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:48.431 10:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:48.431 10:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:48.431 10:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:48.431 10:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:48.431 10:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:48.431 10:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:48.431 10:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:48.431 10:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:48.431 10:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:48.431 10:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:48.431 10:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:48.431 10:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:48.431 10:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:48.431 10:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:48.431 10:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:48.431 10:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:48.431 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:48.431 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.155 ms 00:13:48.431 00:13:48.431 --- 10.0.0.2 ping statistics --- 00:13:48.431 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:48.431 rtt min/avg/max/mdev = 0.155/0.155/0.155/0.000 ms 00:13:48.431 10:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:48.431 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:48.431 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.123 ms 00:13:48.431 00:13:48.431 --- 10.0.0.1 ping statistics --- 00:13:48.431 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:48.431 rtt min/avg/max/mdev = 0.123/0.123/0.123/0.000 ms 00:13:48.431 10:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:48.431 10:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # return 0 00:13:48.431 10:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:48.431 10:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:48.431 10:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:48.431 10:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:48.431 10:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:48.431 10:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:48.431 10:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:48.431 10:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:13:48.431 10:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:48.431 10:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:48.431 10:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:48.431 10:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@481 -- # nvmfpid=406081 00:13:48.431 10:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:48.431 10:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # waitforlisten 406081 00:13:48.431 10:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@831 -- # '[' -z 406081 ']' 00:13:48.431 10:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:48.431 10:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:48.431 10:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:48.431 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:48.431 10:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:48.431 10:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:48.431 [2024-07-25 10:02:33.368246] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
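Condensed, the network setup and launch just traced: the first E810 port (cvl_0_0) is moved into a private namespace and becomes the target at 10.0.0.2, cvl_0_1 stays in the default namespace as the initiator at 10.0.0.1, both directions are verified with ping, and nvmf_tgt is then started inside the namespace, with waitforlisten blocking until its RPC socket (/var/tmp/spdk.sock) answers. Every command and value below is from the trace; only the binary path is abbreviated:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                  # target port into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                        # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
modprobe nvme-tcp                                          # host-side NVMe/TCP initiator driver
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
nvmfpid=$!                                                 # 406081 in this run
waitforlisten "$nvmfpid"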
00:13:48.431 [2024-07-25 10:02:33.368337] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:48.431 EAL: No free 2048 kB hugepages reported on node 1 00:13:48.431 [2024-07-25 10:02:33.443048] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:48.431 [2024-07-25 10:02:33.564704] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:48.431 [2024-07-25 10:02:33.564763] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:48.431 [2024-07-25 10:02:33.564780] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:48.431 [2024-07-25 10:02:33.564794] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:48.431 [2024-07-25 10:02:33.564805] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:48.431 [2024-07-25 10:02:33.564844] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:48.689 10:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:48.689 10:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # return 0 00:13:48.689 10:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:48.689 10:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:48.689 10:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:48.689 10:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:48.689 10:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:48.689 10:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.689 10:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:48.689 [2024-07-25 10:02:33.719142] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:48.689 10:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.689 10:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:48.689 10:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.689 10:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:48.689 10:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.689 10:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:48.689 10:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.689 10:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
common/autotest_common.sh@10 -- # set +x 00:13:48.689 [2024-07-25 10:02:33.735363] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:48.689 10:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.689 10:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:13:48.689 10:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.689 10:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:48.689 NULL1 00:13:48.689 10:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.689 10:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:13:48.689 10:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.689 10:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:48.689 10:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.689 10:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:13:48.689 10:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:48.690 10:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:48.690 10:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:48.690 10:02:33 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:13:48.690 [2024-07-25 10:02:33.781011] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
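Stripped of the xtrace noise, fused_ordering.sh provisions the target with six RPC calls and then points the fused_ordering initiator at it. Every value below is lifted from the trace (rpc_cmd is the test suite's wrapper around scripts/rpc.py):

rpc_cmd nvmf_create_transport -t tcp -o -u 8192            # fused_ordering.sh@15: 8192-byte IO units
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc_cmd bdev_null_create NULL1 1000 512                    # 1000 MiB null bdev, 512-byte blocks
rpc_cmd bdev_wait_for_examine
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
test/nvme/fused_ordering/fused_ordering \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'

The Attached/Namespace lines below confirm the app connected to cnode1 and sees the 1 GB null namespace; each fused_ordering(N) line after that is the app's per-iteration progress counter.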
00:13:48.690 [2024-07-25 10:02:33.781059] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid406120 ] 00:13:48.690 EAL: No free 2048 kB hugepages reported on node 1 00:13:49.255 Attached to nqn.2016-06.io.spdk:cnode1 00:13:49.255 Namespace ID: 1 size: 1GB
00:13:49.255 fused_ordering(0) [ ... fused_ordering(1) through fused_ordering(1022): 1,024 identical per-iteration progress lines, timestamps 00:13:49.255 through 00:13:51.583, elided ... ] 00:13:51.583 fused_ordering(1023)
00:13:51.583 10:02:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:13:51.583 10:02:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:13:51.583 10:02:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:51.583 10:02:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # sync 00:13:51.583 10:02:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:51.583 10:02:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@120 -- # set +e 00:13:51.583 10:02:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:51.584 10:02:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:51.584 rmmod nvme_tcp 00:13:51.842 rmmod nvme_fabrics 00:13:51.842 rmmod nvme_keyring 00:13:51.842 10:02:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:51.842 10:02:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set -e 00:13:51.842 10:02:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # return 0 00:13:51.842 10:02:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@489 -- # '[' -n 406081 ']' 00:13:51.842 10:02:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@490 -- # killprocess 406081 00:13:51.842 10:02:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@950 -- # '[' -z 406081 ']' 00:13:51.842 10:02:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # kill -0 406081 00:13:51.842 10:02:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@955 -- # uname 00:13:51.842 10:02:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:51.842 10:02:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 406081 00:13:51.842 10:02:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:13:51.842 10:02:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:13:51.842 10:02:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@968 -- # echo 'killing process with pid 406081' killing process with pid 406081 00:13:51.842 10:02:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@969 -- # kill 406081 00:13:51.842 10:02:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@974 -- # wait 406081 00:13:52.100 10:02:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:52.100 10:02:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:52.100 10:02:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:52.100 10:02:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:52.100 10:02:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:52.100 10:02:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:52.100 10:02:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:52.100 10:02:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:54.005 10:02:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:54.263 00:13:54.263 real 0m8.578s 00:13:54.263 user 0m5.651s 00:13:54.263 sys 0m4.253s 00:13:54.263 10:02:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:54.263 10:02:39 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:54.263 ************************************ 00:13:54.263 END TEST nvmf_fused_ordering 00:13:54.263 ************************************ 00:13:54.263 10:02:39 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:13:54.264 10:02:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:54.264 10:02:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:54.264
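
Each suite in this log is driven by the same run_test wrapper, which prints the START/END banners and the real/user/sys timing shown above. A minimal sketch of what such a wrapper does (a hypothetical simplification; the real helper lives in common/autotest_common.sh and does more bookkeeping):

    # run_test <name> <command...>: print banners, time the suite, keep its exit code
    run_test() {
        local name=$1; shift
        echo "************ START TEST $name ************"
        time "$@"
        local rc=$?
        echo "************ END TEST $name ************"
        return $rc
    }
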
10:02:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:54.264 ************************************ 00:13:54.264 START TEST nvmf_ns_masking 00:13:54.264 ************************************ 00:13:54.264 10:02:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1125 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:13:54.264 * Looking for test storage... 00:13:54.264 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:54.264 10:02:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:54.264 10:02:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:13:54.264 10:02:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:54.264 10:02:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:54.264 10:02:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:54.264 10:02:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:54.264 10:02:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:54.264 10:02:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:54.264 10:02:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:54.264 10:02:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:54.264 10:02:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:54.264 10:02:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:54.264 10:02:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:13:54.264 10:02:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:13:54.264 10:02:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:54.264 10:02:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:54.264 10:02:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:54.264 10:02:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:54.264 10:02:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:54.264 10:02:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:54.264 10:02:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:54.264 10:02:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:54.264 10:02:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- #
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[... the same toolchain triplet repeated six more times, followed by the standard system path; full duplicated value elided ...]:/var/lib/snapd/snap/bin 00:13:54.264 10:02:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # PATH=[same value with /opt/go/1.21.1/bin prepended; elided] 00:13:54.264 10:02:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=[same value with /opt/protoc/21.7/bin prepended; elided] 00:13:54.264 10:02:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:13:54.264 10:02:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo [final PATH value; elided] 00:13:54.264 10:02:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@47 -- # : 0 00:13:54.264 10:02:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:54.264 10:02:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:54.264 10:02:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:54.264 10:02:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:54.264 10:02:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:54.264 10:02:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:54.264 10:02:39
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:54.264 10:02:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:54.264 10:02:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:54.264 10:02:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:13:54.264 10:02:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:13:54.264 10:02:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:13:54.264 10:02:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=fc6e024f-cc80-4c85-b019-a9062804ab97 00:13:54.264 10:02:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:13:54.264 10:02:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=a67ad99f-649d-4202-8179-99cbfa81c866 00:13:54.264 10:02:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:13:54.264 10:02:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:13:54.264 10:02:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:13:54.264 10:02:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:13:54.264 10:02:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=057f04d9-864f-428f-b7e1-2d8945063e8b 00:13:54.264 10:02:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:13:54.264 10:02:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:54.264 10:02:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:54.264 10:02:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:54.264 10:02:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:54.264 10:02:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:54.264 10:02:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:54.264 10:02:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:54.264 10:02:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:54.264 10:02:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:54.264 10:02:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:54.264 10:02:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@285 -- # xtrace_disable 00:13:54.264 10:02:39 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:56.799 10:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:56.799 10:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # pci_devs=() 00:13:56.799 10:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@291 -- # local -a pci_devs 00:13:56.799 10:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:56.799 10:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:56.799 10:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:56.799 10:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:56.799 10:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@295 -- # net_devs=() 00:13:56.799 10:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:56.799 10:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@296 -- # e810=() 00:13:56.799 10:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@296 -- # local -ga e810 00:13:56.799 10:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # x722=() 00:13:56.799 10:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # local -ga x722 00:13:56.799 10:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # mlx=() 00:13:56.799 10:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # local -ga mlx 00:13:56.799 10:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:56.799 10:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:56.799 10:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:56.799 10:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:56.799 10:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:56.799 10:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:56.799 10:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:56.799 10:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:56.799 10:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:56.799 10:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:56.799 10:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:56.799 10:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:56.799 10:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:56.799 10:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:56.799 10:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:56.799 10:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:56.799 10:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:56.799 10:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:56.799 10:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:13:56.799 Found 0000:84:00.0 (0x8086 - 0x159b) 00:13:56.800 10:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:56.800 10:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:56.800 10:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:56.800 10:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:56.800 10:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:56.800 10:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:56.800 10:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:13:56.800 Found 0000:84:00.1 (0x8086 - 0x159b) 00:13:56.800 10:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:56.800 10:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:56.800 10:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:56.800 10:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:56.800 10:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:56.800 10:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:56.800 10:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:56.800 10:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:56.800 10:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:56.800 10:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:56.800 10:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:56.800 10:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:56.800 10:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:56.800 10:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:56.800 10:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:56.800 10:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:13:56.800 Found net devices under 0000:84:00.0: cvl_0_0 00:13:56.800 10:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:56.800 10:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:56.800 10:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:56.800 10:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ 
tcp == tcp ]] 00:13:56.800 10:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:56.800 10:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:56.800 10:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:56.800 10:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:56.800 10:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:13:56.800 Found net devices under 0000:84:00.1: cvl_0_1 00:13:56.800 10:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:56.800 10:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:56.800 10:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # is_hw=yes 00:13:56.800 10:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:56.800 10:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:56.800 10:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:56.800 10:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:56.800 10:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:56.800 10:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:56.800 10:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:56.800 10:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:56.800 10:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:56.800 10:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:56.800 10:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:56.800 10:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:56.800 10:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:56.800 10:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:56.800 10:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:56.800 10:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:56.800 10:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:56.800 10:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:56.800 10:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:56.800 10:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:56.800 10:02:41 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:56.800 10:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:56.800 10:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:56.800 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:56.800 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.258 ms 00:13:56.800 00:13:56.800 --- 10.0.0.2 ping statistics --- 00:13:56.800 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:56.800 rtt min/avg/max/mdev = 0.258/0.258/0.258/0.000 ms 00:13:56.800 10:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:56.800 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:56.800 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.142 ms 00:13:56.800 00:13:56.800 --- 10.0.0.1 ping statistics --- 00:13:56.800 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:56.800 rtt min/avg/max/mdev = 0.142/0.142/0.142/0.000 ms 00:13:56.800 10:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:56.800 10:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # return 0 00:13:56.800 10:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:56.800 10:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:56.800 10:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:56.800 10:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:56.800 10:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:56.800 10:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:56.800 10:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:56.800 10:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:13:56.800 10:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:56.800 10:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:56.800 10:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:56.800 10:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@481 -- # nvmfpid=408481 00:13:56.800 10:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:13:56.800 10:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # waitforlisten 408481 00:13:56.800 10:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@831 -- # '[' -z 408481 ']' 00:13:56.800 10:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:56.800 10:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:56.800
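
The target (pid 408481) is now running inside the cvl_0_0_ns_spdk network namespace, and waitforlisten blocks until its JSON-RPC socket answers. Roughly, that wait amounts to a poll loop like the following (an assumed implementation, not the exact helper; rpc_addr and max_retries come from the trace above, and rpc_get_methods is the standard SPDK liveness RPC):

    # Poll until the freshly started nvmf_tgt answers RPCs on its UNIX socket.
    waitforlisten() {
        local pid=$1 rpc_addr=/var/tmp/spdk.sock max_retries=100
        while (( max_retries-- > 0 )); do
            kill -0 "$pid" 2> /dev/null || return 1   # give up if the target died
            if scripts/rpc.py -s "$rpc_addr" rpc_get_methods &> /dev/null; then
                return 0                              # socket is up, target is ready
            fi
            sleep 0.1
        done
        return 1
    }
    # e.g. waitforlisten 408481
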
10:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:56.800 10:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:56.800 10:02:41 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:57.059 [2024-07-25 10:02:42.001843] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:13:57.059 [2024-07-25 10:02:42.001951] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:57.059 EAL: No free 2048 kB hugepages reported on node 1 00:13:57.059 [2024-07-25 10:02:42.102309] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:57.317 [2024-07-25 10:02:42.227657] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:57.317 [2024-07-25 10:02:42.227716] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:57.317 [2024-07-25 10:02:42.227733] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:57.317 [2024-07-25 10:02:42.227747] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:57.317 [2024-07-25 10:02:42.227758] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:57.317 [2024-07-25 10:02:42.227791] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:57.317 10:02:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:57.317 10:02:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # return 0 00:13:57.317 10:02:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:57.317 10:02:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:57.317 10:02:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:57.317 10:02:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:57.317 10:02:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:57.575 [2024-07-25 10:02:42.654460] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:57.575 10:02:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:13:57.575 10:02:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:13:57.575 10:02:42 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:13:58.179 Malloc1 00:13:58.179 10:02:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:13:58.436 Malloc2 00:13:58.436
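
Stripped of the xtrace prefixes, the target bring-up happening here and just below is a short RPC sequence: enable the TCP transport, create two 64 MiB malloc block devices, then publish one through a subsystem with a TCP listener. The equivalent standalone commands would be roughly (rpc.py path shortened; all values taken from this trace):

    rpc.py nvmf_create_transport -t tcp -o -u 8192     # TCP transport, 8 KiB I/O unit size
    rpc.py bdev_malloc_create 64 512 -b Malloc1        # 64 MiB RAM disk, 512-byte blocks
    rpc.py bdev_malloc_create 64 512 -b Malloc2
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
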
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:58.693 10:02:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:13:58.951 10:02:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:59.516 [2024-07-25 10:02:44.450354] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:59.516 10:02:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:13:59.516 10:02:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 057f04d9-864f-428f-b7e1-2d8945063e8b -a 10.0.0.2 -s 4420 -i 4 00:13:59.516 10:02:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:13:59.516 10:02:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:13:59.516 10:02:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:59.516 10:02:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:13:59.516 10:02:44 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:14:02.044 10:02:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:14:02.044 10:02:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:14:02.044 10:02:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:14:02.044 10:02:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:14:02.044 10:02:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:14:02.044 10:02:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:14:02.044 10:02:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:14:02.044 10:02:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:02.044 10:02:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:14:02.044 10:02:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:14:02.044 10:02:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:14:02.044 10:02:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:02.044 10:02:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:02.044 [ 0]:0x1 00:14:02.044 10:02:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns 
/dev/nvme0 -n 0x1 -o json 00:14:02.044 10:02:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:02.044 10:02:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=1d48c7d4d4674dd6a592dd4a14a2804c 00:14:02.044 10:02:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 1d48c7d4d4674dd6a592dd4a14a2804c != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:02.044 10:02:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:14:02.044 10:02:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:14:02.044 10:02:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:02.044 10:02:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:02.044 [ 0]:0x1 00:14:02.044 10:02:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:02.044 10:02:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:02.044 10:02:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=1d48c7d4d4674dd6a592dd4a14a2804c 00:14:02.044 10:02:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 1d48c7d4d4674dd6a592dd4a14a2804c != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:02.044 10:02:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:14:02.044 10:02:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:02.044 10:02:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:02.044 [ 1]:0x2 00:14:02.302 10:02:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:02.302 10:02:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:02.302 10:02:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=54b6f824a059475eb8bf6d8bf6b6d14a 00:14:02.302 10:02:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 54b6f824a059475eb8bf6d8bf6b6d14a != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:02.302 10:02:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:14:02.302 10:02:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:02.302 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:02.302 10:02:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:02.868 10:02:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:14:03.433 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:14:03.433 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 057f04d9-864f-428f-b7e1-2d8945063e8b -a 10.0.0.2 -s 4420 -i 4 00:14:03.433 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:14:03.433 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:14:03.433 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:14:03.433 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 1 ]] 00:14:03.433 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=1 00:14:03.433 10:02:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:14:05.961 10:02:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:14:05.961 10:02:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:14:05.961 10:02:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:14:05.961 10:02:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:14:05.961 10:02:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:14:05.961 10:02:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:14:05.961 10:02:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:14:05.961 10:02:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:05.961 10:02:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:14:05.961 10:02:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:14:05.961 10:02:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:14:05.961 10:02:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:14:05.961 10:02:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:14:05.961 10:02:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:14:05.961 10:02:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:05.961 10:02:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:14:05.961 10:02:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:05.961 10:02:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:14:05.961 10:02:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:05.961 10:02:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:05.961 10:02:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns 
/dev/nvme0 -n 0x1 -o json 00:14:05.961 10:02:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:05.961 10:02:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:05.961 10:02:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:05.961 10:02:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:14:05.961 10:02:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:05.961 10:02:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:05.961 10:02:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:05.961 10:02:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:14:05.961 10:02:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:05.961 10:02:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:05.961 [ 0]:0x2 00:14:05.961 10:02:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:05.961 10:02:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:05.961 10:02:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=54b6f824a059475eb8bf6d8bf6b6d14a 00:14:05.961 10:02:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 54b6f824a059475eb8bf6d8bf6b6d14a != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:05.961 10:02:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:06.219 10:02:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:14:06.219 10:02:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:06.219 10:02:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:06.219 [ 0]:0x1 00:14:06.219 10:02:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:06.219 10:02:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:06.219 10:02:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=1d48c7d4d4674dd6a592dd4a14a2804c 00:14:06.219 10:02:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 1d48c7d4d4674dd6a592dd4a14a2804c != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:06.219 10:02:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:14:06.219 10:02:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:06.219 10:02:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:06.219 [ 1]:0x2 00:14:06.219 10:02:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 
-o json 00:14:06.219 10:02:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:06.219 10:02:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=54b6f824a059475eb8bf6d8bf6b6d14a 00:14:06.219 10:02:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 54b6f824a059475eb8bf6d8bf6b6d14a != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:06.219 10:02:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:06.784 10:02:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:14:06.784 10:02:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:14:06.784 10:02:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:14:06.784 10:02:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:14:06.784 10:02:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:06.785 10:02:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:14:06.785 10:02:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:06.785 10:02:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:14:06.785 10:02:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:06.785 10:02:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:06.785 10:02:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:06.785 10:02:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:06.785 10:02:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:06.785 10:02:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:06.785 10:02:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:14:06.785 10:02:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:06.785 10:02:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:06.785 10:02:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:06.785 10:02:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:14:06.785 10:02:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:06.785 10:02:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:06.785 [ 0]:0x2 00:14:06.785 10:02:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:06.785 10:02:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@44 -- # jq -r .nguid 00:14:06.785 10:02:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=54b6f824a059475eb8bf6d8bf6b6d14a 00:14:06.785 10:02:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 54b6f824a059475eb8bf6d8bf6b6d14a != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:06.785 10:02:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:14:06.785 10:02:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:06.785 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:06.785 10:02:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:07.351 10:02:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:14:07.351 10:02:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 057f04d9-864f-428f-b7e1-2d8945063e8b -a 10.0.0.2 -s 4420 -i 4 00:14:07.351 10:02:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:14:07.351 10:02:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:14:07.351 10:02:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:14:07.351 10:02:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:14:07.351 10:02:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:14:07.351 10:02:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:14:09.880 10:02:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:14:09.880 10:02:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:14:09.880 10:02:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:14:09.880 10:02:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:14:09.880 10:02:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:14:09.880 10:02:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:14:09.880 10:02:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:14:09.881 10:02:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:09.881 10:02:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:14:09.881 10:02:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:14:09.881 10:02:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:14:09.881 10:02:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns 
/dev/nvme0 00:14:09.881 10:02:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:09.881 [ 0]:0x1 00:14:09.881 10:02:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:09.881 10:02:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:09.881 10:02:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=1d48c7d4d4674dd6a592dd4a14a2804c 00:14:09.881 10:02:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 1d48c7d4d4674dd6a592dd4a14a2804c != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:09.881 10:02:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:14:09.881 10:02:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:09.881 10:02:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:09.881 [ 1]:0x2 00:14:09.881 10:02:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:09.881 10:02:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:09.881 10:02:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=54b6f824a059475eb8bf6d8bf6b6d14a 00:14:09.881 10:02:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 54b6f824a059475eb8bf6d8bf6b6d14a != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:09.881 10:02:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:09.881 10:02:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:14:09.881 10:02:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:14:09.881 10:02:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:14:09.881 10:02:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:14:09.881 10:02:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:09.881 10:02:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:14:09.881 10:02:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:09.881 10:02:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:14:09.881 10:02:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:09.881 10:02:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:09.881 10:02:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:09.881 10:02:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:09.881 10:02:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:09.881 10:02:54 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:09.881 10:02:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:14:09.881 10:02:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:09.881 10:02:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:09.881 10:02:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:09.881 10:02:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:14:09.881 10:02:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:09.881 10:02:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:09.881 [ 0]:0x2 00:14:09.881 10:02:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:09.881 10:02:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:09.881 10:02:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=54b6f824a059475eb8bf6d8bf6b6d14a 00:14:09.881 10:02:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 54b6f824a059475eb8bf6d8bf6b6d14a != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:09.881 10:02:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:14:09.881 10:02:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:14:09.881 10:02:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:14:09.881 10:02:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:09.881 10:02:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:09.881 10:02:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:09.881 10:02:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:09.881 10:02:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:09.881 10:02:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:09.881 10:02:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:09.881 10:02:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:14:09.881 10:02:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:14:10.447 [2024-07-25 10:02:55.507724] nvmf_rpc.c:1798:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:14:10.447 request: 00:14:10.447 { 00:14:10.447 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:10.447 "nsid": 2, 00:14:10.447 "host": "nqn.2016-06.io.spdk:host1", 00:14:10.447 "method": "nvmf_ns_remove_host", 00:14:10.447 "req_id": 1 00:14:10.447 } 00:14:10.447 Got JSON-RPC error response 00:14:10.447 response: 00:14:10.447 { 00:14:10.447 "code": -32602, 00:14:10.447 "message": "Invalid parameters" 00:14:10.447 } 00:14:10.447 10:02:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:14:10.447 10:02:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:10.447 10:02:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:10.447 10:02:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:10.447 10:02:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:14:10.447 10:02:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:14:10.447 10:02:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:14:10.448 10:02:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:14:10.448 10:02:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:10.448 10:02:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:14:10.448 10:02:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:10.448 10:02:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:14:10.448 10:02:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:10.448 10:02:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:10.448 10:02:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:10.448 10:02:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:10.448 10:02:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:10.448 10:02:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:10.448 10:02:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:14:10.448 10:02:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:10.448 10:02:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:10.448 10:02:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:10.448 10:02:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:14:10.448 10:02:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:10.448 10:02:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:10.448 [ 0]:0x2 00:14:10.448 10:02:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:10.448 10:02:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:10.706 10:02:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=54b6f824a059475eb8bf6d8bf6b6d14a 00:14:10.706 10:02:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 54b6f824a059475eb8bf6d8bf6b6d14a != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:10.706 10:02:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:14:10.706 10:02:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:10.706 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:10.706 10:02:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=410200 00:14:10.706 10:02:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:14:10.706 10:02:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:14:10.706 10:02:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 410200 /var/tmp/host.sock 00:14:10.706 10:02:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@831 -- # '[' -z 410200 ']' 00:14:10.706 10:02:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/host.sock 00:14:10.706 10:02:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:10.706 10:02:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:14:10.706 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:14:10.706 10:02:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:10.706 10:02:55 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:10.706 [2024-07-25 10:02:55.763209] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
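The "[ 0]:0x1" lines and nguid comparisons above come from the ns_is_visible helper in target/ns_masking.sh. A minimal sketch of the equivalent host-side probe, assuming /dev/nvme0 is the controller that the earlier nvme connect created (an all-zero NGUID here means the namespace is masked for this host):

    # is nsid 1 in the controller's active namespace list?
    nvme list-ns /dev/nvme0 | grep 0x1
    # read the namespace's NGUID; a masked namespace reports all zeroes
    nguid=$(nvme id-ns /dev/nvme0 -n 0x1 -o json | jq -r .nguid)
    [[ "$nguid" != "00000000000000000000000000000000" ]] && echo "nsid 1 is visible"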
00:14:10.706 [2024-07-25 10:02:55.763310] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid410200 ] 00:14:10.706 EAL: No free 2048 kB hugepages reported on node 1 00:14:10.706 [2024-07-25 10:02:55.834195] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:10.965 [2024-07-25 10:02:55.955973] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:11.224 10:02:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:11.224 10:02:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # return 0 00:14:11.224 10:02:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:11.482 10:02:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:12.047 10:02:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid fc6e024f-cc80-4c85-b019-a9062804ab97 00:14:12.047 10:02:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:14:12.047 10:02:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g FC6E024FCC804C85B019A9062804AB97 -i 00:14:12.613 10:02:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid a67ad99f-649d-4202-8179-99cbfa81c866 00:14:12.613 10:02:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:14:12.613 10:02:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g A67AD99F649D4202817999CBFA81C866 -i 00:14:12.871 10:02:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:13.129 10:02:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:14:13.387 10:02:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:14:13.387 10:02:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:14:13.959 nvme0n1 00:14:13.959 10:02:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:14:13.959 10:02:59 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:14:14.584 nvme1n2 00:14:14.584 10:02:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:14:14.584 10:02:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:14:14.584 10:02:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:14:14.584 10:02:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:14:14.584 10:02:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:14:14.842 10:02:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:14:14.842 10:02:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:14:14.842 10:02:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:14:14.842 10:02:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:14:15.408 10:03:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ fc6e024f-cc80-4c85-b019-a9062804ab97 == \f\c\6\e\0\2\4\f\-\c\c\8\0\-\4\c\8\5\-\b\0\1\9\-\a\9\0\6\2\8\0\4\a\b\9\7 ]] 00:14:15.408 10:03:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:14:15.408 10:03:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:14:15.408 10:03:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:14:15.666 10:03:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ a67ad99f-649d-4202-8179-99cbfa81c866 == \a\6\7\a\d\9\9\f\-\6\4\9\d\-\4\2\0\2\-\8\1\7\9\-\9\9\c\b\f\a\8\1\c\8\6\6 ]] 00:14:15.666 10:03:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # killprocess 410200 00:14:15.666 10:03:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@950 -- # '[' -z 410200 ']' 00:14:15.666 10:03:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # kill -0 410200 00:14:15.666 10:03:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # uname 00:14:15.666 10:03:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:15.666 10:03:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 410200 00:14:15.666 10:03:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:14:15.666 10:03:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:14:15.666 10:03:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@968 -- # echo 'killing 
process with pid 410200' 00:14:15.666 killing process with pid 410200 00:14:15.666 10:03:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@969 -- # kill 410200 00:14:15.666 10:03:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@974 -- # wait 410200 00:14:16.232 10:03:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:16.490 10:03:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # trap - SIGINT SIGTERM EXIT 00:14:16.490 10:03:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # nvmftestfini 00:14:16.490 10:03:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:16.490 10:03:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # sync 00:14:16.490 10:03:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:16.490 10:03:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@120 -- # set +e 00:14:16.490 10:03:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:16.490 10:03:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:16.490 rmmod nvme_tcp 00:14:16.490 rmmod nvme_fabrics 00:14:16.490 rmmod nvme_keyring 00:14:16.490 10:03:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:16.490 10:03:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set -e 00:14:16.490 10:03:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # return 0 00:14:16.490 10:03:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@489 -- # '[' -n 408481 ']' 00:14:16.490 10:03:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@490 -- # killprocess 408481 00:14:16.490 10:03:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@950 -- # '[' -z 408481 ']' 00:14:16.490 10:03:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # kill -0 408481 00:14:16.490 10:03:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # uname 00:14:16.490 10:03:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:16.490 10:03:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 408481 00:14:16.490 10:03:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:16.490 10:03:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:16.490 10:03:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@968 -- # echo 'killing process with pid 408481' 00:14:16.490 killing process with pid 408481 00:14:16.490 10:03:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@969 -- # kill 408481 00:14:16.490 10:03:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@974 -- # wait 408481 00:14:16.749 10:03:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:16.749 10:03:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:16.749 10:03:01 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:16.749 10:03:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:16.749 10:03:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:16.749 10:03:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:16.749 10:03:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:16.749 10:03:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:19.283 10:03:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:19.283 00:14:19.283 real 0m24.712s 00:14:19.283 user 0m34.861s 00:14:19.283 sys 0m5.049s 00:14:19.283 10:03:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:19.283 10:03:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:19.283 ************************************ 00:14:19.283 END TEST nvmf_ns_masking 00:14:19.283 ************************************ 00:14:19.283 10:03:03 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:14:19.283 10:03:03 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:14:19.283 10:03:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:19.283 10:03:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:19.283 10:03:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:19.283 ************************************ 00:14:19.283 START TEST nvmf_nvme_cli 00:14:19.283 ************************************ 00:14:19.283 10:03:03 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:14:19.283 * Looking for test storage... 
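For reference, the target-side masking sequence that nvmf_ns_masking just exercised distills to the RPC calls below; rpc.py abbreviates the full /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py path used throughout, and every call appears in the log above:

    # namespace 1 starts masked: visible to no host until explicitly allowed
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible
    # unmask nsid 1 for host1, then mask it again
    rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
    rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
    # rejected with -32602 (the JSON-RPC error captured above): nsid 2 was
    # added without --no-auto-visible, so it carries no per-host allow list
    rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1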
00:14:19.283 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:19.283 10:03:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:19.283 10:03:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:14:19.283 10:03:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:19.283 10:03:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:19.283 10:03:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:19.283 10:03:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:19.283 10:03:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:19.283 10:03:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:19.284 10:03:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:19.284 10:03:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:19.284 10:03:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:19.284 10:03:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:19.284 10:03:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:14:19.284 10:03:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:14:19.284 10:03:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:19.284 10:03:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:19.284 10:03:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:19.284 10:03:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:19.284 10:03:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:19.284 10:03:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:19.284 10:03:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:19.284 10:03:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:19.284 10:03:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:19.284 10:03:04 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:19.284 10:03:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:19.284 10:03:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:14:19.284 10:03:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:19.284 10:03:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@47 -- # : 0 00:14:19.284 10:03:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:19.284 10:03:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:19.284 10:03:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:19.284 10:03:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:19.284 10:03:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:19.284 10:03:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:19.284 10:03:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:19.284 10:03:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:19.284 10:03:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:19.284 10:03:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:19.284 10:03:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:14:19.284 10:03:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # 
nvmftestinit 00:14:19.284 10:03:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:19.284 10:03:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:19.284 10:03:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:19.284 10:03:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:19.284 10:03:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:19.284 10:03:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:19.284 10:03:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:19.284 10:03:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:19.284 10:03:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:19.284 10:03:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:19.284 10:03:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@285 -- # xtrace_disable 00:14:19.284 10:03:04 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:21.185 10:03:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:21.185 10:03:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # pci_devs=() 00:14:21.185 10:03:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:21.185 10:03:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:21.185 10:03:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:21.185 10:03:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:21.185 10:03:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:21.185 10:03:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@295 -- # net_devs=() 00:14:21.185 10:03:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:21.185 10:03:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@296 -- # e810=() 00:14:21.185 10:03:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@296 -- # local -ga e810 00:14:21.185 10:03:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # x722=() 00:14:21.185 10:03:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # local -ga x722 00:14:21.185 10:03:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # mlx=() 00:14:21.185 10:03:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # local -ga mlx 00:14:21.185 10:03:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:21.185 10:03:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:21.185 10:03:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:21.185 10:03:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:21.185 10:03:06 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:21.185 10:03:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:21.185 10:03:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:21.185 10:03:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:21.185 10:03:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:21.185 10:03:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:21.185 10:03:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:21.185 10:03:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:21.185 10:03:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:21.185 10:03:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:21.186 10:03:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:21.186 10:03:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:21.186 10:03:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:21.186 10:03:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:21.186 10:03:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:14:21.186 Found 0000:84:00.0 (0x8086 - 0x159b) 00:14:21.186 10:03:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:21.186 10:03:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:21.186 10:03:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:21.186 10:03:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:21.186 10:03:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:21.186 10:03:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:21.186 10:03:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:14:21.186 Found 0000:84:00.1 (0x8086 - 0x159b) 00:14:21.186 10:03:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:21.186 10:03:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:21.186 10:03:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:21.186 10:03:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:21.186 10:03:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:21.186 10:03:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:21.186 10:03:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:21.186 10:03:06 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:21.186 10:03:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:21.186 10:03:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:21.186 10:03:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:21.186 10:03:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:21.186 10:03:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:21.186 10:03:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:21.186 10:03:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:21.186 10:03:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:14:21.186 Found net devices under 0000:84:00.0: cvl_0_0 00:14:21.186 10:03:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:21.186 10:03:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:21.186 10:03:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:21.186 10:03:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:21.186 10:03:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:21.186 10:03:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:21.186 10:03:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:21.186 10:03:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:21.186 10:03:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:14:21.186 Found net devices under 0000:84:00.1: cvl_0_1 00:14:21.186 10:03:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:21.186 10:03:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:21.186 10:03:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # is_hw=yes 00:14:21.186 10:03:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:21.186 10:03:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:21.186 10:03:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:21.186 10:03:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:21.186 10:03:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:21.186 10:03:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:21.186 10:03:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:21.186 10:03:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:21.186 10:03:06 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:21.186 10:03:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:21.186 10:03:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:21.186 10:03:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:21.186 10:03:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:21.186 10:03:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:21.186 10:03:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:21.186 10:03:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:21.443 10:03:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:21.443 10:03:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:21.443 10:03:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:21.443 10:03:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:21.443 10:03:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:21.443 10:03:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:21.443 10:03:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:21.443 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:21.443 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.239 ms 00:14:21.443 00:14:21.443 --- 10.0.0.2 ping statistics --- 00:14:21.443 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:21.443 rtt min/avg/max/mdev = 0.239/0.239/0.239/0.000 ms 00:14:21.443 10:03:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:21.443 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:21.443 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.179 ms 00:14:21.443 00:14:21.443 --- 10.0.0.1 ping statistics --- 00:14:21.443 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:21.443 rtt min/avg/max/mdev = 0.179/0.179/0.179/0.000 ms 00:14:21.443 10:03:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:21.443 10:03:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # return 0 00:14:21.443 10:03:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:21.443 10:03:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:21.443 10:03:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:21.443 10:03:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:21.443 10:03:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:21.443 10:03:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:21.443 10:03:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:21.443 10:03:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:14:21.443 10:03:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:21.443 10:03:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:21.443 10:03:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:21.443 10:03:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@481 -- # nvmfpid=413064 00:14:21.443 10:03:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:21.443 10:03:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # waitforlisten 413064 00:14:21.443 10:03:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@831 -- # '[' -z 413064 ']' 00:14:21.443 10:03:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:21.443 10:03:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:21.443 10:03:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:21.443 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:21.443 10:03:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:21.443 10:03:06 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:21.443 [2024-07-25 10:03:06.537292] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
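Stripped of the xtrace noise, the network fixture that nvmftestinit built above out of the two ice ports reduces to the following sequence (a condensed sketch using the interface names and addresses logged in this run, not the verbatim script):

  # target side lives in its own namespace; initiator side stays in the root namespace
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator IP, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target IP, inside namespace
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # open the NVMe/TCP port
  ping -c 1 10.0.0.2                                                   # root ns -> target ns
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # target ns -> root ns

Every later target-side command, including the nvmf_tgt launch below, is prefixed with "ip netns exec cvl_0_0_ns_spdk", so the SPDK target listens on 10.0.0.2 inside the namespace while the kernel initiator connects from 10.0.0.1 outside it.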
00:14:21.443 [2024-07-25 10:03:06.537397] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:21.443 EAL: No free 2048 kB hugepages reported on node 1 00:14:21.443 [2024-07-25 10:03:06.606612] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:21.700 [2024-07-25 10:03:06.731614] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:21.700 [2024-07-25 10:03:06.731669] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:21.700 [2024-07-25 10:03:06.731701] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:21.700 [2024-07-25 10:03:06.731733] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:21.700 [2024-07-25 10:03:06.731751] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:21.700 [2024-07-25 10:03:06.731830] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:21.700 [2024-07-25 10:03:06.731887] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:21.700 [2024-07-25 10:03:06.731944] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:21.700 [2024-07-25 10:03:06.731951] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:22.630 10:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:22.630 10:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # return 0 00:14:22.630 10:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:22.630 10:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:22.630 10:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:22.630 10:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:22.630 10:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:22.630 10:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:22.630 10:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:22.630 [2024-07-25 10:03:07.568513] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:22.630 10:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:22.630 10:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:22.630 10:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:22.630 10:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:22.630 Malloc0 00:14:22.630 10:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:22.630 10:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:14:22.630 10:03:07 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:22.630 10:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:22.630 Malloc1 00:14:22.630 10:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:22.630 10:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:14:22.630 10:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:22.631 10:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:22.631 10:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:22.631 10:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:22.631 10:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:22.631 10:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:22.631 10:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:22.631 10:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:22.631 10:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:22.631 10:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:22.631 10:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:22.631 10:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:22.631 10:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:22.631 10:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:22.631 [2024-07-25 10:03:07.655591] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:22.631 10:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:22.631 10:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:22.631 10:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:22.631 10:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:22.631 10:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:22.631 10:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -a 10.0.0.2 -s 4420 00:14:22.888 00:14:22.888 Discovery Log Number of Records 2, Generation counter 2 00:14:22.888 =====Discovery Log Entry 0====== 00:14:22.888 trtype: tcp 00:14:22.888 adrfam: ipv4 00:14:22.888 subtype: current discovery subsystem 00:14:22.888 treq: not required 
00:14:22.888 portid: 0 00:14:22.888 trsvcid: 4420 00:14:22.888 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:14:22.888 traddr: 10.0.0.2 00:14:22.888 eflags: explicit discovery connections, duplicate discovery information 00:14:22.888 sectype: none 00:14:22.888 =====Discovery Log Entry 1====== 00:14:22.888 trtype: tcp 00:14:22.888 adrfam: ipv4 00:14:22.888 subtype: nvme subsystem 00:14:22.888 treq: not required 00:14:22.888 portid: 0 00:14:22.888 trsvcid: 4420 00:14:22.888 subnqn: nqn.2016-06.io.spdk:cnode1 00:14:22.888 traddr: 10.0.0.2 00:14:22.888 eflags: none 00:14:22.888 sectype: none 00:14:22.888 10:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:14:22.888 10:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:14:22.888 10:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:14:22.888 10:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:22.888 10:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:14:22.888 10:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:14:22.888 10:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:22.888 10:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:14:22.888 10:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:22.888 10:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:14:22.888 10:03:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:23.453 10:03:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:14:23.453 10:03:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1198 -- # local i=0 00:14:23.453 10:03:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:14:23.453 10:03:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:14:23.453 10:03:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:14:23.453 10:03:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # sleep 2 00:14:25.978 10:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:14:25.978 10:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:14:25.978 10:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:14:25.978 10:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:14:25.978 10:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:14:25.978 10:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # return 0 00:14:25.978 10:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
target/nvme_cli.sh@35 -- # get_nvme_devs 00:14:25.978 10:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:14:25.978 10:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:25.978 10:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:14:25.978 10:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:14:25.978 10:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:25.978 10:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:14:25.978 10:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:25.978 10:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:14:25.978 10:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:14:25.978 10:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:25.978 10:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:14:25.978 10:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:14:25.978 10:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:25.978 10:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:14:25.978 /dev/nvme0n1 ]] 00:14:25.978 10:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:14:25.978 10:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:14:25.978 10:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:14:25.978 10:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:25.978 10:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:14:25.978 10:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:14:25.978 10:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:25.978 10:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:14:25.978 10:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:25.978 10:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:14:25.978 10:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:14:25.978 10:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:25.978 10:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:14:25.978 10:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:14:25.978 10:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:25.978 10:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:14:25.978 10:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:25.978 NQN:nqn.2016-06.io.spdk:cnode1 
disconnected 1 controller(s) 00:14:25.978 10:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:25.978 10:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1219 -- # local i=0 00:14:25.978 10:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:14:25.978 10:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:25.978 10:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:14:25.978 10:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:25.978 10:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # return 0 00:14:25.978 10:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:14:25.978 10:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:25.978 10:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:25.978 10:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:25.978 10:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:25.978 10:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:14:25.978 10:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:14:25.978 10:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:25.979 10:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # sync 00:14:25.979 10:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:25.979 10:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@120 -- # set +e 00:14:25.979 10:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:25.979 10:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:25.979 rmmod nvme_tcp 00:14:25.979 rmmod nvme_fabrics 00:14:25.979 rmmod nvme_keyring 00:14:25.979 10:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:25.979 10:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set -e 00:14:25.979 10:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # return 0 00:14:25.979 10:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@489 -- # '[' -n 413064 ']' 00:14:25.979 10:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@490 -- # killprocess 413064 00:14:25.979 10:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@950 -- # '[' -z 413064 ']' 00:14:25.979 10:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # kill -0 413064 00:14:25.979 10:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@955 -- # uname 00:14:25.979 10:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:14:25.979 10:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 413064 00:14:25.979 10:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:14:25.979 10:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:14:25.979 10:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@968 -- # echo 'killing process with pid 413064' 00:14:25.979 killing process with pid 413064 00:14:25.979 10:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@969 -- # kill 413064 00:14:25.979 10:03:10 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@974 -- # wait 413064 00:14:25.979 10:03:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:25.979 10:03:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:25.979 10:03:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:25.979 10:03:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:25.979 10:03:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:25.979 10:03:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:25.979 10:03:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:25.979 10:03:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:28.511 10:03:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:28.511 00:14:28.511 real 0m9.202s 00:14:28.511 user 0m18.129s 00:14:28.511 sys 0m2.529s 00:14:28.511 10:03:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1126 -- # xtrace_disable 00:14:28.511 10:03:13 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:28.511 ************************************ 00:14:28.511 END TEST nvmf_nvme_cli 00:14:28.511 ************************************ 00:14:28.511 10:03:13 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]] 00:14:28.511 10:03:13 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:14:28.511 10:03:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:14:28.511 10:03:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:14:28.511 10:03:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:28.511 ************************************ 00:14:28.511 START TEST nvmf_vfio_user 00:14:28.511 ************************************ 00:14:28.511 10:03:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:14:28.511 * Looking for test storage... 
00:14:28.511 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:28.511 10:03:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:28.511 10:03:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:14:28.511 10:03:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:28.511 10:03:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:28.511 10:03:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:28.511 10:03:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:28.511 10:03:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:28.511 10:03:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:28.511 10:03:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:28.511 10:03:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:28.511 10:03:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:28.511 10:03:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:28.511 10:03:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:14:28.511 10:03:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:14:28.511 10:03:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:28.511 10:03:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:28.511 10:03:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:28.511 10:03:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:28.511 10:03:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:28.511 10:03:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:28.511 10:03:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:28.511 10:03:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:28.511 10:03:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:14:28.511 10:03:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:28.511 10:03:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:28.511 10:03:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:14:28.511 10:03:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:28.511 10:03:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@47 -- # : 0 00:14:28.511 10:03:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:28.511 10:03:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:28.511 10:03:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:28.511 10:03:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:28.511 10:03:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:28.511 10:03:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:28.511 10:03:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:28.511 10:03:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:28.511 10:03:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:14:28.511 10:03:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:14:28.511 10:03:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:14:28.511 10:03:13 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:28.511 10:03:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:14:28.511 10:03:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:14:28.511 10:03:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:14:28.511 10:03:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:14:28.511 10:03:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:14:28.511 10:03:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:14:28.512 10:03:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=414499 00:14:28.512 10:03:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:14:28.512 10:03:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 414499' 00:14:28.512 Process pid: 414499 00:14:28.512 10:03:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:14:28.512 10:03:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 414499 00:14:28.512 10:03:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@831 -- # '[' -z 414499 ']' 00:14:28.512 10:03:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:28.512 10:03:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@836 -- # local max_retries=100 00:14:28.512 10:03:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:28.512 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:28.512 10:03:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # xtrace_disable 00:14:28.512 10:03:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:14:28.512 [2024-07-25 10:03:13.408162] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:14:28.512 [2024-07-25 10:03:13.408268] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:28.512 EAL: No free 2048 kB hugepages reported on node 1 00:14:28.512 [2024-07-25 10:03:13.483102] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:28.512 [2024-07-25 10:03:13.607758] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:28.512 [2024-07-25 10:03:13.607819] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:14:28.512 [2024-07-25 10:03:13.607845] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:28.512 [2024-07-25 10:03:13.607868] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:28.512 [2024-07-25 10:03:13.607885] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:28.512 [2024-07-25 10:03:13.607986] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:28.512 [2024-07-25 10:03:13.608042] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:28.512 [2024-07-25 10:03:13.608098] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:28.512 [2024-07-25 10:03:13.608107] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:28.769 10:03:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:14:28.769 10:03:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # return 0 00:14:28.769 10:03:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:14:29.701 10:03:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:14:30.266 10:03:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:14:30.266 10:03:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:14:30.266 10:03:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:30.266 10:03:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:14:30.266 10:03:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:14:30.524 Malloc1 00:14:30.524 10:03:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:14:31.143 10:03:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:14:31.402 10:03:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:14:31.660 10:03:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:31.660 10:03:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:14:31.660 10:03:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:14:32.225 Malloc2 00:14:32.225 10:03:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 
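The setup loop above runs once per device (NUM_DEVICES=2); the trace for vfio-user2/2 continues below with the identical pattern. Condensed, each iteration i issues this RPC sequence (a sketch of the calls as logged, with rpc.py standing in for the full scripts/rpc.py path):

  rpc.py nvmf_create_transport -t VFIOUSER                      # once, before the loop
  mkdir -p /var/run/vfio-user/domain/vfio-user$i/$i             # directory backing the listener
  rpc.py bdev_malloc_create 64 512 -b Malloc$i                  # 64 MB malloc bdev, 512-byte blocks
  rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode$i -a -s SPDK$i
  rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode$i Malloc$i
  rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode$i -t VFIOUSER \
      -a /var/run/vfio-user/domain/vfio-user$i/$i -s 0

Unlike the TCP listeners earlier in the log (-a 10.0.0.2 -s 4420), a VFIOUSER listener's -a argument is a filesystem path: the directory where the target creates the vfio-user control socket (the cntrl path that appears below) that spdk_nvme_identify and the other tools attach to as an emulated PCIe NVMe controller.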
00:14:32.483 10:03:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:14:32.740 10:03:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:14:32.998 10:03:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:14:32.998 10:03:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:14:32.998 10:03:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:32.998 10:03:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:14:32.998 10:03:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:14:32.998 10:03:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:14:32.998 [2024-07-25 10:03:18.157451] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:14:32.998 [2024-07-25 10:03:18.157509] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid415059 ] 00:14:33.257 EAL: No free 2048 kB hugepages reported on node 1 00:14:33.257 [2024-07-25 10:03:18.194723] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:14:33.257 [2024-07-25 10:03:18.203809] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:33.258 [2024-07-25 10:03:18.203838] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f670b8df000 00:14:33.258 [2024-07-25 10:03:18.204797] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:33.258 [2024-07-25 10:03:18.205790] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:33.258 [2024-07-25 10:03:18.206794] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:33.258 [2024-07-25 10:03:18.207816] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:33.258 [2024-07-25 10:03:18.208805] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:33.258 [2024-07-25 10:03:18.209814] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:33.258 [2024-07-25 10:03:18.210818] vfio_user_pci.c: 
304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:33.258 [2024-07-25 10:03:18.211822] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:33.258 [2024-07-25 10:03:18.212829] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:33.258 [2024-07-25 10:03:18.212848] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f670b8d4000 00:14:33.258 [2024-07-25 10:03:18.213965] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:14:33.258 [2024-07-25 10:03:18.229659] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:14:33.258 [2024-07-25 10:03:18.229704] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:14:33.258 [2024-07-25 10:03:18.231930] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:14:33.258 [2024-07-25 10:03:18.231983] nvme_pcie_common.c: 133:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:14:33.258 [2024-07-25 10:03:18.232074] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:14:33.258 [2024-07-25 10:03:18.232100] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:14:33.258 [2024-07-25 10:03:18.232110] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:14:33.258 [2024-07-25 10:03:18.232927] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:14:33.258 [2024-07-25 10:03:18.232952] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:14:33.258 [2024-07-25 10:03:18.232965] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:14:33.258 [2024-07-25 10:03:18.233930] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:14:33.258 [2024-07-25 10:03:18.233949] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:14:33.258 [2024-07-25 10:03:18.233962] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:14:33.258 [2024-07-25 10:03:18.234937] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:14:33.258 [2024-07-25 10:03:18.234956] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:14:33.258 [2024-07-25 10:03:18.235941] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr 
/var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:14:33.258 [2024-07-25 10:03:18.235960] nvme_ctrlr.c:3873:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:14:33.258 [2024-07-25 10:03:18.235969] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:14:33.258 [2024-07-25 10:03:18.235980] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:14:33.258 [2024-07-25 10:03:18.236089] nvme_ctrlr.c:4066:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:14:33.258 [2024-07-25 10:03:18.236097] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:14:33.258 [2024-07-25 10:03:18.236105] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:14:33.258 [2024-07-25 10:03:18.236949] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:14:33.258 [2024-07-25 10:03:18.237951] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:14:33.258 [2024-07-25 10:03:18.238956] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:14:33.258 [2024-07-25 10:03:18.239953] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:33.258 [2024-07-25 10:03:18.240046] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:14:33.258 [2024-07-25 10:03:18.240974] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:14:33.258 [2024-07-25 10:03:18.240996] nvme_ctrlr.c:3908:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:14:33.258 [2024-07-25 10:03:18.241005] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:14:33.258 [2024-07-25 10:03:18.241029] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:14:33.258 [2024-07-25 10:03:18.241043] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:14:33.258 [2024-07-25 10:03:18.241067] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:33.258 [2024-07-25 10:03:18.241076] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:33.258 [2024-07-25 10:03:18.241083] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:33.258 [2024-07-25 10:03:18.241100] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 
cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:33.258 [2024-07-25 10:03:18.241152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:14:33.258 [2024-07-25 10:03:18.241166] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:14:33.258 [2024-07-25 10:03:18.241178] nvme_ctrlr.c:2061:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:14:33.258 [2024-07-25 10:03:18.241186] nvme_ctrlr.c:2064:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:14:33.258 [2024-07-25 10:03:18.241193] nvme_ctrlr.c:2075:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:14:33.258 [2024-07-25 10:03:18.241201] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:14:33.258 [2024-07-25 10:03:18.241208] nvme_ctrlr.c:2103:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:14:33.258 [2024-07-25 10:03:18.241215] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:14:33.258 [2024-07-25 10:03:18.241227] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:14:33.258 [2024-07-25 10:03:18.241245] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:14:33.258 [2024-07-25 10:03:18.241260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:14:33.258 [2024-07-25 10:03:18.241280] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:33.258 [2024-07-25 10:03:18.241292] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:33.258 [2024-07-25 10:03:18.241304] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:33.258 [2024-07-25 10:03:18.241315] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:33.258 [2024-07-25 10:03:18.241323] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:14:33.258 [2024-07-25 10:03:18.241339] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:14:33.258 [2024-07-25 10:03:18.241352] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:14:33.258 [2024-07-25 10:03:18.241364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:14:33.258 [2024-07-25 10:03:18.241373] nvme_ctrlr.c:3014:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:14:33.258 
[2024-07-25 10:03:18.241381] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:14:33.258 [2024-07-25 10:03:18.241395] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:14:33.258 [2024-07-25 10:03:18.241405] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:14:33.258 [2024-07-25 10:03:18.241445] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:33.258 [2024-07-25 10:03:18.241461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:14:33.258 [2024-07-25 10:03:18.241545] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:14:33.258 [2024-07-25 10:03:18.241562] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:14:33.259 [2024-07-25 10:03:18.241580] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:14:33.259 [2024-07-25 10:03:18.241589] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:14:33.259 [2024-07-25 10:03:18.241595] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:33.259 [2024-07-25 10:03:18.241605] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:14:33.259 [2024-07-25 10:03:18.241624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:14:33.259 [2024-07-25 10:03:18.241640] nvme_ctrlr.c:4697:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:14:33.259 [2024-07-25 10:03:18.241656] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:14:33.259 [2024-07-25 10:03:18.241671] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:14:33.259 [2024-07-25 10:03:18.241683] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:33.259 [2024-07-25 10:03:18.241692] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:33.259 [2024-07-25 10:03:18.241698] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:33.259 [2024-07-25 10:03:18.241707] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:33.259 [2024-07-25 10:03:18.241735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:14:33.259 [2024-07-25 10:03:18.241756] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 
30000 ms) 00:14:33.259 [2024-07-25 10:03:18.241771] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:14:33.259 [2024-07-25 10:03:18.241784] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:33.259 [2024-07-25 10:03:18.241807] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:33.259 [2024-07-25 10:03:18.241813] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:33.259 [2024-07-25 10:03:18.241822] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:33.259 [2024-07-25 10:03:18.241836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:14:33.259 [2024-07-25 10:03:18.241850] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:14:33.259 [2024-07-25 10:03:18.241877] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 00:14:33.259 [2024-07-25 10:03:18.241889] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:14:33.259 [2024-07-25 10:03:18.241901] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host behavior support feature (timeout 30000 ms) 00:14:33.259 [2024-07-25 10:03:18.241910] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:14:33.259 [2024-07-25 10:03:18.241918] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:14:33.259 [2024-07-25 10:03:18.241929] nvme_ctrlr.c:3114:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:14:33.259 [2024-07-25 10:03:18.241937] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:14:33.259 [2024-07-25 10:03:18.241945] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:14:33.259 [2024-07-25 10:03:18.241985] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:14:33.259 [2024-07-25 10:03:18.242003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:14:33.259 [2024-07-25 10:03:18.242021] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:14:33.259 [2024-07-25 10:03:18.242033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:14:33.259 [2024-07-25 10:03:18.242049] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:14:33.259 [2024-07-25 
10:03:18.242061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:14:33.259 [2024-07-25 10:03:18.242076] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:33.259 [2024-07-25 10:03:18.242088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:14:33.259 [2024-07-25 10:03:18.242109] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:14:33.259 [2024-07-25 10:03:18.242119] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:14:33.259 [2024-07-25 10:03:18.242125] nvme_pcie_common.c:1239:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:14:33.259 [2024-07-25 10:03:18.242131] nvme_pcie_common.c:1255:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:14:33.259 [2024-07-25 10:03:18.242137] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:14:33.259 [2024-07-25 10:03:18.242146] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:14:33.259 [2024-07-25 10:03:18.242157] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:14:33.259 [2024-07-25 10:03:18.242165] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:14:33.259 [2024-07-25 10:03:18.242171] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:33.259 [2024-07-25 10:03:18.242180] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:14:33.259 [2024-07-25 10:03:18.242191] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:14:33.259 [2024-07-25 10:03:18.242199] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:33.259 [2024-07-25 10:03:18.242205] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:33.259 [2024-07-25 10:03:18.242213] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:33.259 [2024-07-25 10:03:18.242225] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:14:33.259 [2024-07-25 10:03:18.242233] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:14:33.259 [2024-07-25 10:03:18.242239] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:33.259 [2024-07-25 10:03:18.242251] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:14:33.259 [2024-07-25 10:03:18.242263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:14:33.259 [2024-07-25 10:03:18.242282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:14:33.259 [2024-07-25 
10:03:18.242302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:14:33.259 [2024-07-25 10:03:18.242314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:14:33.259 ===================================================== 00:14:33.259 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:33.259 ===================================================== 00:14:33.259 Controller Capabilities/Features 00:14:33.259 ================================ 00:14:33.259 Vendor ID: 4e58 00:14:33.259 Subsystem Vendor ID: 4e58 00:14:33.259 Serial Number: SPDK1 00:14:33.259 Model Number: SPDK bdev Controller 00:14:33.259 Firmware Version: 24.09 00:14:33.259 Recommended Arb Burst: 6 00:14:33.259 IEEE OUI Identifier: 8d 6b 50 00:14:33.259 Multi-path I/O 00:14:33.259 May have multiple subsystem ports: Yes 00:14:33.259 May have multiple controllers: Yes 00:14:33.259 Associated with SR-IOV VF: No 00:14:33.259 Max Data Transfer Size: 131072 00:14:33.259 Max Number of Namespaces: 32 00:14:33.259 Max Number of I/O Queues: 127 00:14:33.259 NVMe Specification Version (VS): 1.3 00:14:33.259 NVMe Specification Version (Identify): 1.3 00:14:33.259 Maximum Queue Entries: 256 00:14:33.259 Contiguous Queues Required: Yes 00:14:33.259 Arbitration Mechanisms Supported 00:14:33.259 Weighted Round Robin: Not Supported 00:14:33.259 Vendor Specific: Not Supported 00:14:33.259 Reset Timeout: 15000 ms 00:14:33.259 Doorbell Stride: 4 bytes 00:14:33.259 NVM Subsystem Reset: Not Supported 00:14:33.259 Command Sets Supported 00:14:33.259 NVM Command Set: Supported 00:14:33.259 Boot Partition: Not Supported 00:14:33.259 Memory Page Size Minimum: 4096 bytes 00:14:33.259 Memory Page Size Maximum: 4096 bytes 00:14:33.259 Persistent Memory Region: Not Supported 00:14:33.260 Optional Asynchronous Events Supported 00:14:33.260 Namespace Attribute Notices: Supported 00:14:33.260 Firmware Activation Notices: Not Supported 00:14:33.260 ANA Change Notices: Not Supported 00:14:33.260 PLE Aggregate Log Change Notices: Not Supported 00:14:33.260 LBA Status Info Alert Notices: Not Supported 00:14:33.260 EGE Aggregate Log Change Notices: Not Supported 00:14:33.260 Normal NVM Subsystem Shutdown event: Not Supported 00:14:33.260 Zone Descriptor Change Notices: Not Supported 00:14:33.260 Discovery Log Change Notices: Not Supported 00:14:33.260 Controller Attributes 00:14:33.260 128-bit Host Identifier: Supported 00:14:33.260 Non-Operational Permissive Mode: Not Supported 00:14:33.260 NVM Sets: Not Supported 00:14:33.260 Read Recovery Levels: Not Supported 00:14:33.260 Endurance Groups: Not Supported 00:14:33.260 Predictable Latency Mode: Not Supported 00:14:33.260 Traffic Based Keep ALive: Not Supported 00:14:33.260 Namespace Granularity: Not Supported 00:14:33.260 SQ Associations: Not Supported 00:14:33.260 UUID List: Not Supported 00:14:33.260 Multi-Domain Subsystem: Not Supported 00:14:33.260 Fixed Capacity Management: Not Supported 00:14:33.260 Variable Capacity Management: Not Supported 00:14:33.260 Delete Endurance Group: Not Supported 00:14:33.260 Delete NVM Set: Not Supported 00:14:33.260 Extended LBA Formats Supported: Not Supported 00:14:33.260 Flexible Data Placement Supported: Not Supported 00:14:33.260 00:14:33.260 Controller Memory Buffer Support 00:14:33.260 ================================ 00:14:33.260 Supported: No 00:14:33.260 00:14:33.260 Persistent 
Memory Region Support 00:14:33.260 ================================ 00:14:33.260 Supported: No 00:14:33.260 00:14:33.260 Admin Command Set Attributes 00:14:33.260 ============================ 00:14:33.260 Security Send/Receive: Not Supported 00:14:33.260 Format NVM: Not Supported 00:14:33.260 Firmware Activate/Download: Not Supported 00:14:33.260 Namespace Management: Not Supported 00:14:33.260 Device Self-Test: Not Supported 00:14:33.260 Directives: Not Supported 00:14:33.260 NVMe-MI: Not Supported 00:14:33.260 Virtualization Management: Not Supported 00:14:33.260 Doorbell Buffer Config: Not Supported 00:14:33.260 Get LBA Status Capability: Not Supported 00:14:33.260 Command & Feature Lockdown Capability: Not Supported 00:14:33.260 Abort Command Limit: 4 00:14:33.260 Async Event Request Limit: 4 00:14:33.260 Number of Firmware Slots: N/A 00:14:33.260 Firmware Slot 1 Read-Only: N/A 00:14:33.260 Firmware Activation Without Reset: N/A 00:14:33.260 Multiple Update Detection Support: N/A 00:14:33.260 Firmware Update Granularity: No Information Provided 00:14:33.260 Per-Namespace SMART Log: No 00:14:33.260 Asymmetric Namespace Access Log Page: Not Supported 00:14:33.260 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:14:33.260 Command Effects Log Page: Supported 00:14:33.260 Get Log Page Extended Data: Supported 00:14:33.260 Telemetry Log Pages: Not Supported 00:14:33.260 Persistent Event Log Pages: Not Supported 00:14:33.260 Supported Log Pages Log Page: May Support 00:14:33.260 Commands Supported & Effects Log Page: Not Supported 00:14:33.260 Feature Identifiers & Effects Log Page:May Support 00:14:33.260 NVMe-MI Commands & Effects Log Page: May Support 00:14:33.260 Data Area 4 for Telemetry Log: Not Supported 00:14:33.260 Error Log Page Entries Supported: 128 00:14:33.260 Keep Alive: Supported 00:14:33.260 Keep Alive Granularity: 10000 ms 00:14:33.260 00:14:33.260 NVM Command Set Attributes 00:14:33.260 ========================== 00:14:33.260 Submission Queue Entry Size 00:14:33.260 Max: 64 00:14:33.260 Min: 64 00:14:33.260 Completion Queue Entry Size 00:14:33.260 Max: 16 00:14:33.260 Min: 16 00:14:33.260 Number of Namespaces: 32 00:14:33.260 Compare Command: Supported 00:14:33.260 Write Uncorrectable Command: Not Supported 00:14:33.260 Dataset Management Command: Supported 00:14:33.260 Write Zeroes Command: Supported 00:14:33.260 Set Features Save Field: Not Supported 00:14:33.260 Reservations: Not Supported 00:14:33.260 Timestamp: Not Supported 00:14:33.260 Copy: Supported 00:14:33.260 Volatile Write Cache: Present 00:14:33.260 Atomic Write Unit (Normal): 1 00:14:33.260 Atomic Write Unit (PFail): 1 00:14:33.260 Atomic Compare & Write Unit: 1 00:14:33.260 Fused Compare & Write: Supported 00:14:33.260 Scatter-Gather List 00:14:33.260 SGL Command Set: Supported (Dword aligned) 00:14:33.260 SGL Keyed: Not Supported 00:14:33.260 SGL Bit Bucket Descriptor: Not Supported 00:14:33.260 SGL Metadata Pointer: Not Supported 00:14:33.260 Oversized SGL: Not Supported 00:14:33.260 SGL Metadata Address: Not Supported 00:14:33.260 SGL Offset: Not Supported 00:14:33.260 Transport SGL Data Block: Not Supported 00:14:33.260 Replay Protected Memory Block: Not Supported 00:14:33.260 00:14:33.260 Firmware Slot Information 00:14:33.260 ========================= 00:14:33.260 Active slot: 1 00:14:33.260 Slot 1 Firmware Revision: 24.09 00:14:33.260 00:14:33.260 00:14:33.260 Commands Supported and Effects 00:14:33.260 ============================== 00:14:33.260 Admin Commands 00:14:33.260 -------------- 00:14:33.260 Get 
Log Page (02h): Supported 00:14:33.260 Identify (06h): Supported 00:14:33.260 Abort (08h): Supported 00:14:33.260 Set Features (09h): Supported 00:14:33.260 Get Features (0Ah): Supported 00:14:33.260 Asynchronous Event Request (0Ch): Supported 00:14:33.260 Keep Alive (18h): Supported 00:14:33.260 I/O Commands 00:14:33.260 ------------ 00:14:33.260 Flush (00h): Supported LBA-Change 00:14:33.260 Write (01h): Supported LBA-Change 00:14:33.260 Read (02h): Supported 00:14:33.260 Compare (05h): Supported 00:14:33.260 Write Zeroes (08h): Supported LBA-Change 00:14:33.260 Dataset Management (09h): Supported LBA-Change 00:14:33.260 Copy (19h): Supported LBA-Change 00:14:33.260 00:14:33.260 Error Log 00:14:33.260 ========= 00:14:33.260 00:14:33.260 Arbitration 00:14:33.260 =========== 00:14:33.260 Arbitration Burst: 1 00:14:33.260 00:14:33.260 Power Management 00:14:33.260 ================ 00:14:33.260 Number of Power States: 1 00:14:33.260 Current Power State: Power State #0 00:14:33.260 Power State #0: 00:14:33.260 Max Power: 0.00 W 00:14:33.260 Non-Operational State: Operational 00:14:33.260 Entry Latency: Not Reported 00:14:33.260 Exit Latency: Not Reported 00:14:33.260 Relative Read Throughput: 0 00:14:33.260 Relative Read Latency: 0 00:14:33.260 Relative Write Throughput: 0 00:14:33.260 Relative Write Latency: 0 00:14:33.260 Idle Power: Not Reported 00:14:33.260 Active Power: Not Reported 00:14:33.260 Non-Operational Permissive Mode: Not Supported 00:14:33.260 00:14:33.260 Health Information 00:14:33.260 ================== 00:14:33.260 Critical Warnings: 00:14:33.260 Available Spare Space: OK 00:14:33.260 Temperature: OK 00:14:33.260 Device Reliability: OK 00:14:33.260 Read Only: No 00:14:33.260 Volatile Memory Backup: OK 00:14:33.260 Current Temperature: 0 Kelvin (-273 Celsius) 00:14:33.260 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:14:33.260 Available Spare: 0% 00:14:33.260 Available Spare Threshold: 0% 00:14:33.261 Life Percentage Used: 0% 00:14:33.261 Data Units Read: 0 00:14:33.261 Data Units Written: 0 00:14:33.261 Host Read Commands: 0 00:14:33.261 Host Write Commands: 0 00:14:33.261 Controller Busy Time: 0 minutes 00:14:33.261 Power Cycles: 0 00:14:33.261 Power On Hours: 0 hours 00:14:33.261 Unsafe Shutdowns: 0 00:14:33.261 Unrecoverable Media Errors: 0 00:14:33.261 Lifetime Error Log Entries: 0 00:14:33.261 Warning Temperature Time: 0 minutes 00:14:33.261 Critical Temperature Time: 0 minutes 00:14:33.261 00:14:33.261 Number of Queues 00:14:33.261 ================ 00:14:33.261 Number of I/O Submission Queues: 127 00:14:33.261 Number of I/O Completion Queues: 127 00:14:33.261 00:14:33.261 Active Namespaces 00:14:33.261 ================= 00:14:33.261 Namespace ID:1 00:14:33.261 Error Recovery Timeout: Unlimited 00:14:33.261 Command Set Identifier: NVM (00h) 00:14:33.261 Deallocate: Supported 00:14:33.261 Deallocated/Unwritten Error: Not Supported 00:14:33.261 Deallocated Read Value: Unknown 00:14:33.261 Deallocate in Write Zeroes: Not Supported 00:14:33.261 Deallocated Guard Field: 0xFFFF 00:14:33.261 Flush: Supported 00:14:33.261 Reservation: Supported 00:14:33.261 Namespace Sharing Capabilities: Multiple Controllers 00:14:33.261 Size (in LBAs): 131072 (0GiB) 00:14:33.261 Capacity (in LBAs): 131072 (0GiB) 00:14:33.261 Utilization (in LBAs): 131072 (0GiB) 00:14:33.261 NGUID: 752FA51337B04413BA97648DC6F7037A 00:14:33.261 UUID: 752fa513-37b0-4413-ba97-648dc6f7037a 00:14:33.261 Thin Provisioning: Not Supported 00:14:33.261 Per-NS Atomic Units: Yes 00:14:33.261 Atomic Boundary Size (Normal): 0 00:14:33.261 Atomic Boundary Size (PFail): 0 00:14:33.261 Atomic Boundary Offset: 0 00:14:33.261 Maximum Single Source Range Length: 65535 00:14:33.261 Maximum Copy Length: 65535 00:14:33.261 Maximum Source Range Count: 1 00:14:33.261 NGUID/EUI64 Never Reused: No 00:14:33.261 Namespace Write Protected: No 00:14:33.261 Number of LBA Formats: 1 00:14:33.261 Current LBA Format: LBA Format #00 00:14:33.261 LBA Format #00: Data Size: 512 Metadata Size: 0 00:14:33.261 00:14:33.261
00:14:33.260 [2024-07-25 10:03:18.242459] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:14:33.260 [2024-07-25 10:03:18.242477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:14:33.260 [2024-07-25 10:03:18.242518] nvme_ctrlr.c:4361:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:14:33.260 [2024-07-25 10:03:18.242536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:33.260 [2024-07-25 10:03:18.242547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:33.260 [2024-07-25 10:03:18.242557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:33.260 [2024-07-25 10:03:18.242567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:33.260 [2024-07-25 10:03:18.245442] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:14:33.260 [2024-07-25 10:03:18.245465] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:14:33.260 [2024-07-25 10:03:18.246000] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:33.261 [2024-07-25 10:03:18.246079] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:14:33.261 [2024-07-25 10:03:18.246098] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:14:33.261 [2024-07-25 10:03:18.247014] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:14:33.261 [2024-07-25 10:03:18.247039] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds 00:14:33.261 [2024-07-25 10:03:18.247093] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:14:33.261 [2024-07-25 10:03:18.249050] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:14:33.261
10:03:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:14:33.261 EAL: No free 2048 kB hugepages reported
on node 1 00:14:33.519 [2024-07-25 10:03:18.499310] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:38.782 Initializing NVMe Controllers 00:14:38.782 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:38.782 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:14:38.782 Initialization complete. Launching workers. 00:14:38.782 ======================================================== 00:14:38.782 Latency(us) 00:14:38.782 Device Information : IOPS MiB/s Average min max 00:14:38.782 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 33806.00 132.05 3786.16 1161.20 7636.56 00:14:38.782 ======================================================== 00:14:38.782 Total : 33806.00 132.05 3786.16 1161.20 7636.56 00:14:38.782 00:14:38.782 [2024-07-25 10:03:23.522252] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:38.782 10:03:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:14:38.782 EAL: No free 2048 kB hugepages reported on node 1 00:14:38.782 [2024-07-25 10:03:23.818551] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:44.044 Initializing NVMe Controllers 00:14:44.044 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:44.044 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:14:44.044 Initialization complete. Launching workers. 
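
The write-side numbers for the run just launched follow below. Note that the spdk_nvme_perf invocations at @84 and @85 differ only in the -w workload argument; -q 128 (queue depth), -o 4096 (I/O size in bytes), -t 5 (run time in seconds) and -c 0x2 (core mask) are held constant, while -s 256 and -g configure DPDK memory setup. A hedged sketch of sweeping further workloads against the same vfio-user endpoint, assuming the target from this run is still listening:

PERF=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf
TRID='trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1'
for wl in read write randread randwrite; do   # -w selects the I/O pattern
    echo "=== workload: $wl ==="
    "$PERF" -r "$TRID" -s 256 -g -q 128 -o 4096 -w "$wl" -t 5 -c 0x2
done
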
00:14:44.044 ======================================================== 00:14:44.044 Latency(us) 00:14:44.044 Device Information : IOPS MiB/s Average min max 00:14:44.044 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16076.80 62.80 7968.52 4986.44 9004.45 00:14:44.044 ======================================================== 00:14:44.044 Total : 16076.80 62.80 7968.52 4986.44 9004.45 00:14:44.044 00:14:44.044 [2024-07-25 10:03:28.855380] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:44.044 10:03:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:14:44.044 EAL: No free 2048 kB hugepages reported on node 1 00:14:44.044 [2024-07-25 10:03:29.065405] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:49.305 [2024-07-25 10:03:34.144889] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:49.305 Initializing NVMe Controllers 00:14:49.305 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:49.305 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:49.305 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:14:49.305 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:14:49.305 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:14:49.305 Initialization complete. Launching workers. 00:14:49.305 Starting thread on core 2 00:14:49.305 Starting thread on core 3 00:14:49.305 Starting thread on core 1 00:14:49.305 10:03:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:14:49.306 EAL: No free 2048 kB hugepages reported on node 1 00:14:49.306 [2024-07-25 10:03:34.455603] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:52.588 [2024-07-25 10:03:37.539980] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:52.588 Initializing NVMe Controllers 00:14:52.588 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:14:52.588 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:14:52.588 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:14:52.588 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:14:52.588 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:14:52.588 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:14:52.588 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:14:52.588 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:14:52.588 Initialization complete. Launching workers. 
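
Before the per-core arbitration results below, a note on the core masks used in this stretch of the test: the reconnect example was pinned with -c 0xE and the arbitration example with -c 0xf. These are plain bitmasks over CPU cores (bit i set means a worker on core i), which is why the log shows reconnect threads on cores 1, 2 and 3 (0xE = 1110b) and arbitration threads on cores 0 through 3 (0xf = 1111b). A small sketch to decode such a mask:

mask=0xE                          # -c argument; bit i set => worker on core i
for i in $(seq 0 31); do
    (( (mask >> i) & 1 )) && echo "worker thread on core $i"
done                              # prints cores 1, 2, 3 for 0xE
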
00:14:52.588 Starting thread on core 1 with urgent priority queue 00:14:52.588 Starting thread on core 2 with urgent priority queue 00:14:52.588 Starting thread on core 3 with urgent priority queue 00:14:52.588 Starting thread on core 0 with urgent priority queue 00:14:52.588 SPDK bdev Controller (SPDK1 ) core 0: 2008.00 IO/s 49.80 secs/100000 ios 00:14:52.588 SPDK bdev Controller (SPDK1 ) core 1: 2134.33 IO/s 46.85 secs/100000 ios 00:14:52.588 SPDK bdev Controller (SPDK1 ) core 2: 1921.00 IO/s 52.06 secs/100000 ios 00:14:52.588 SPDK bdev Controller (SPDK1 ) core 3: 2122.67 IO/s 47.11 secs/100000 ios 00:14:52.588 ======================================================== 00:14:52.588 00:14:52.588 10:03:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:14:52.588 EAL: No free 2048 kB hugepages reported on node 1 00:14:52.845 [2024-07-25 10:03:37.830335] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:52.845 Initializing NVMe Controllers 00:14:52.845 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:14:52.845 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:14:52.845 Namespace ID: 1 size: 0GB 00:14:52.845 Initialization complete. 00:14:52.845 INFO: using host memory buffer for IO 00:14:52.845 Hello world! 00:14:52.845 [2024-07-25 10:03:37.862874] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:52.845 10:03:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:14:52.845 EAL: No free 2048 kB hugepages reported on node 1 00:14:53.102 [2024-07-25 10:03:38.150907] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:54.036 Initializing NVMe Controllers 00:14:54.036 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:14:54.036 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:14:54.036 Initialization complete. Launching workers. 
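
The overhead tool launched above prints, after its summary line, submit- and complete-side latency histograms with one bucket per line in the form '<low> - <high>: <cumulative %> ( <count> )', range in microseconds; those tables follow. As a hedged convenience, and not part of the test itself, a capture of that output (overhead.log is a hypothetical file name) could be reduced with a short awk pass:

# Sum the per-bucket sample counts and show the slowest bucket seen.
awk '/^ *[0-9.]+ - [0-9.]+:/ {
         gsub(/[()]/, "")          # strip parentheses around the count
         total += $NF              # last field is the bucket sample count
         last = $0                 # buckets are printed in ascending order
     }
     END { print "samples:", total; print "tail bucket:", last }' overhead.log
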
00:14:54.036 submit (in ns) avg, min, max = 8053.4, 3522.2, 5000284.4 00:14:54.036 complete (in ns) avg, min, max = 25128.1, 2072.2, 4017238.9 00:14:54.036 00:14:54.036 Submit histogram 00:14:54.036 ================ 00:14:54.036 Range in us Cumulative Count 00:14:54.036 3.508 - 3.532: 0.0992% ( 13) 00:14:54.036 3.532 - 3.556: 0.6260% ( 69) 00:14:54.036 3.556 - 3.579: 2.0307% ( 184) 00:14:54.036 3.579 - 3.603: 5.5653% ( 463) 00:14:54.036 3.603 - 3.627: 11.2375% ( 743) 00:14:54.036 3.627 - 3.650: 20.0702% ( 1157) 00:14:54.036 3.650 - 3.674: 29.0709% ( 1179) 00:14:54.036 3.674 - 3.698: 38.5373% ( 1240) 00:14:54.036 3.698 - 3.721: 46.1715% ( 1000) 00:14:54.036 3.721 - 3.745: 52.0727% ( 773) 00:14:54.036 3.745 - 3.769: 56.2257% ( 544) 00:14:54.036 3.769 - 3.793: 60.1725% ( 517) 00:14:54.036 3.793 - 3.816: 63.4705% ( 432) 00:14:54.036 3.816 - 3.840: 67.1502% ( 482) 00:14:54.036 3.840 - 3.864: 70.9520% ( 498) 00:14:54.036 3.864 - 3.887: 74.6851% ( 489) 00:14:54.036 3.887 - 3.911: 78.9221% ( 555) 00:14:54.036 3.911 - 3.935: 82.7086% ( 496) 00:14:54.036 3.935 - 3.959: 85.4035% ( 353) 00:14:54.036 3.959 - 3.982: 87.2738% ( 245) 00:14:54.036 3.982 - 4.006: 88.7778% ( 197) 00:14:54.036 4.006 - 4.030: 90.4420% ( 218) 00:14:54.036 4.030 - 4.053: 91.6864% ( 163) 00:14:54.036 4.053 - 4.077: 92.7094% ( 134) 00:14:54.036 4.077 - 4.101: 93.6407% ( 122) 00:14:54.036 4.101 - 4.124: 94.2820% ( 84) 00:14:54.036 4.124 - 4.148: 94.8698% ( 77) 00:14:54.036 4.148 - 4.172: 95.4042% ( 70) 00:14:54.036 4.172 - 4.196: 95.7020% ( 39) 00:14:54.036 4.196 - 4.219: 96.0150% ( 41) 00:14:54.036 4.219 - 4.243: 96.1829% ( 22) 00:14:54.036 4.243 - 4.267: 96.3509% ( 22) 00:14:54.036 4.267 - 4.290: 96.4272% ( 10) 00:14:54.036 4.290 - 4.314: 96.5723% ( 19) 00:14:54.036 4.314 - 4.338: 96.6791% ( 14) 00:14:54.036 4.338 - 4.361: 96.7860% ( 14) 00:14:54.036 4.361 - 4.385: 96.8547% ( 9) 00:14:54.036 4.385 - 4.409: 96.9234% ( 9) 00:14:54.036 4.409 - 4.433: 96.9692% ( 6) 00:14:54.037 4.433 - 4.456: 96.9769% ( 1) 00:14:54.037 4.456 - 4.480: 97.0227% ( 6) 00:14:54.037 4.480 - 4.504: 97.0532% ( 4) 00:14:54.037 4.527 - 4.551: 97.0761% ( 3) 00:14:54.037 4.575 - 4.599: 97.0837% ( 1) 00:14:54.037 4.599 - 4.622: 97.0990% ( 2) 00:14:54.037 4.622 - 4.646: 97.1219% ( 3) 00:14:54.037 4.670 - 4.693: 97.1525% ( 4) 00:14:54.037 4.693 - 4.717: 97.1601% ( 1) 00:14:54.037 4.717 - 4.741: 97.1677% ( 1) 00:14:54.037 4.741 - 4.764: 97.1906% ( 3) 00:14:54.037 4.764 - 4.788: 97.1983% ( 1) 00:14:54.037 4.788 - 4.812: 97.2212% ( 3) 00:14:54.037 4.812 - 4.836: 97.2441% ( 3) 00:14:54.037 4.836 - 4.859: 97.2899% ( 6) 00:14:54.037 4.859 - 4.883: 97.3357% ( 6) 00:14:54.037 4.883 - 4.907: 97.3662% ( 4) 00:14:54.037 4.907 - 4.930: 97.3967% ( 4) 00:14:54.037 4.930 - 4.954: 97.4197% ( 3) 00:14:54.037 4.954 - 4.978: 97.4655% ( 6) 00:14:54.037 4.978 - 5.001: 97.4960% ( 4) 00:14:54.037 5.001 - 5.025: 97.5571% ( 8) 00:14:54.037 5.025 - 5.049: 97.6105% ( 7) 00:14:54.037 5.049 - 5.073: 97.6334% ( 3) 00:14:54.037 5.073 - 5.096: 97.6639% ( 4) 00:14:54.037 5.096 - 5.120: 97.6868% ( 3) 00:14:54.037 5.144 - 5.167: 97.7021% ( 2) 00:14:54.037 5.167 - 5.191: 97.7097% ( 1) 00:14:54.037 5.191 - 5.215: 97.7174% ( 1) 00:14:54.037 5.215 - 5.239: 97.7327% ( 2) 00:14:54.037 5.239 - 5.262: 97.7479% ( 2) 00:14:54.037 5.262 - 5.286: 97.7785% ( 4) 00:14:54.037 5.286 - 5.310: 97.8090% ( 4) 00:14:54.037 5.310 - 5.333: 97.8166% ( 1) 00:14:54.037 5.357 - 5.381: 97.8243% ( 1) 00:14:54.037 5.381 - 5.404: 97.8472% ( 3) 00:14:54.037 5.404 - 5.428: 97.8548% ( 1) 00:14:54.037 5.428 - 5.452: 97.8624% ( 1) 
00:14:54.037 5.499 - 5.523: 97.8777% ( 2) 00:14:54.037 5.523 - 5.547: 97.8930% ( 2) 00:14:54.037 5.547 - 5.570: 97.9006% ( 1) 00:14:54.037 5.570 - 5.594: 97.9082% ( 1) 00:14:54.037 5.594 - 5.618: 97.9159% ( 1) 00:14:54.037 5.665 - 5.689: 97.9235% ( 1) 00:14:54.037 5.689 - 5.713: 97.9311% ( 1) 00:14:54.037 5.831 - 5.855: 97.9388% ( 1) 00:14:54.037 5.902 - 5.926: 97.9464% ( 1) 00:14:54.037 6.068 - 6.116: 97.9617% ( 2) 00:14:54.037 6.116 - 6.163: 97.9693% ( 1) 00:14:54.037 6.210 - 6.258: 97.9769% ( 1) 00:14:54.037 6.258 - 6.305: 97.9846% ( 1) 00:14:54.037 6.495 - 6.542: 97.9922% ( 1) 00:14:54.037 6.542 - 6.590: 97.9998% ( 1) 00:14:54.037 6.637 - 6.684: 98.0151% ( 2) 00:14:54.037 6.684 - 6.732: 98.0227% ( 1) 00:14:54.037 6.732 - 6.779: 98.0380% ( 2) 00:14:54.037 6.779 - 6.827: 98.0457% ( 1) 00:14:54.037 7.111 - 7.159: 98.0533% ( 1) 00:14:54.037 7.159 - 7.206: 98.0609% ( 1) 00:14:54.037 7.253 - 7.301: 98.0686% ( 1) 00:14:54.037 7.301 - 7.348: 98.0762% ( 1) 00:14:54.037 7.348 - 7.396: 98.0838% ( 1) 00:14:54.037 7.396 - 7.443: 98.0991% ( 2) 00:14:54.037 7.443 - 7.490: 98.1067% ( 1) 00:14:54.037 7.538 - 7.585: 98.1144% ( 1) 00:14:54.037 7.727 - 7.775: 98.1220% ( 1) 00:14:54.037 7.775 - 7.822: 98.1296% ( 1) 00:14:54.037 7.822 - 7.870: 98.1373% ( 1) 00:14:54.037 7.917 - 7.964: 98.1449% ( 1) 00:14:54.037 8.059 - 8.107: 98.1525% ( 1) 00:14:54.037 8.107 - 8.154: 98.1678% ( 2) 00:14:54.037 8.154 - 8.201: 98.1754% ( 1) 00:14:54.037 8.296 - 8.344: 98.1831% ( 1) 00:14:54.037 8.391 - 8.439: 98.1983% ( 2) 00:14:54.037 8.439 - 8.486: 98.2212% ( 3) 00:14:54.037 8.486 - 8.533: 98.2365% ( 2) 00:14:54.037 8.581 - 8.628: 98.2518% ( 2) 00:14:54.037 8.628 - 8.676: 98.2594% ( 1) 00:14:54.037 8.676 - 8.723: 98.2670% ( 1) 00:14:54.037 8.723 - 8.770: 98.2747% ( 1) 00:14:54.037 8.818 - 8.865: 98.2976% ( 3) 00:14:54.037 8.865 - 8.913: 98.3052% ( 1) 00:14:54.037 9.102 - 9.150: 98.3128% ( 1) 00:14:54.037 9.387 - 9.434: 98.3205% ( 1) 00:14:54.037 9.624 - 9.671: 98.3281% ( 1) 00:14:54.037 9.671 - 9.719: 98.3358% ( 1) 00:14:54.037 9.908 - 9.956: 98.3434% ( 1) 00:14:54.037 10.003 - 10.050: 98.3663% ( 3) 00:14:54.037 10.335 - 10.382: 98.3739% ( 1) 00:14:54.037 10.382 - 10.430: 98.3892% ( 2) 00:14:54.037 10.430 - 10.477: 98.4045% ( 2) 00:14:54.037 10.477 - 10.524: 98.4197% ( 2) 00:14:54.037 10.714 - 10.761: 98.4274% ( 1) 00:14:54.037 10.809 - 10.856: 98.4426% ( 2) 00:14:54.037 10.856 - 10.904: 98.4579% ( 2) 00:14:54.037 10.904 - 10.951: 98.4655% ( 1) 00:14:54.037 10.951 - 10.999: 98.4732% ( 1) 00:14:54.037 11.046 - 11.093: 98.4808% ( 1) 00:14:54.037 11.236 - 11.283: 98.4961% ( 2) 00:14:54.037 11.378 - 11.425: 98.5037% ( 1) 00:14:54.037 11.520 - 11.567: 98.5113% ( 1) 00:14:54.037 11.757 - 11.804: 98.5190% ( 1) 00:14:54.037 11.804 - 11.852: 98.5266% ( 1) 00:14:54.037 11.852 - 11.899: 98.5419% ( 2) 00:14:54.037 12.231 - 12.326: 98.5571% ( 2) 00:14:54.037 12.326 - 12.421: 98.5648% ( 1) 00:14:54.037 12.516 - 12.610: 98.5724% ( 1) 00:14:54.037 12.705 - 12.800: 98.6029% ( 4) 00:14:54.037 12.895 - 12.990: 98.6258% ( 3) 00:14:54.037 12.990 - 13.084: 98.6411% ( 2) 00:14:54.037 13.084 - 13.179: 98.6564% ( 2) 00:14:54.037 13.179 - 13.274: 98.6717% ( 2) 00:14:54.037 13.274 - 13.369: 98.6793% ( 1) 00:14:54.037 13.369 - 13.464: 98.6869% ( 1) 00:14:54.037 13.464 - 13.559: 98.7022% ( 2) 00:14:54.037 13.653 - 13.748: 98.7098% ( 1) 00:14:54.037 13.938 - 14.033: 98.7175% ( 1) 00:14:54.037 14.033 - 14.127: 98.7327% ( 2) 00:14:54.037 14.127 - 14.222: 98.7404% ( 1) 00:14:54.037 14.317 - 14.412: 98.7480% ( 1) 00:14:54.037 14.412 - 14.507: 98.7556% ( 1) 
00:14:54.037 14.507 - 14.601: 98.7709% ( 2) 00:14:54.037 14.601 - 14.696: 98.7785% ( 1) 00:14:54.037 14.696 - 14.791: 98.7862% ( 1) 00:14:54.037 14.791 - 14.886: 98.8091% ( 3) 00:14:54.037 14.981 - 15.076: 98.8167% ( 1) 00:14:54.037 15.076 - 15.170: 98.8320% ( 2) 00:14:54.037 15.170 - 15.265: 98.8396% ( 1) 00:14:54.037 15.644 - 15.739: 98.8472% ( 1) 00:14:54.037 16.877 - 16.972: 98.8549% ( 1) 00:14:54.037 17.067 - 17.161: 98.8625% ( 1) 00:14:54.037 17.256 - 17.351: 98.8701% ( 1) 00:14:54.037 17.351 - 17.446: 98.8778% ( 1) 00:14:54.037 17.446 - 17.541: 98.9083% ( 4) 00:14:54.037 17.541 - 17.636: 98.9236% ( 2) 00:14:54.037 17.636 - 17.730: 98.9312% ( 1) 00:14:54.037 17.730 - 17.825: 98.9465% ( 2) 00:14:54.037 17.825 - 17.920: 98.9999% ( 7) 00:14:54.037 17.920 - 18.015: 99.0763% ( 10) 00:14:54.037 18.015 - 18.110: 99.1221% ( 6) 00:14:54.037 18.110 - 18.204: 99.2366% ( 15) 00:14:54.037 18.204 - 18.299: 99.2977% ( 8) 00:14:54.037 18.299 - 18.394: 99.3587% ( 8) 00:14:54.037 18.394 - 18.489: 99.4274% ( 9) 00:14:54.037 18.489 - 18.584: 99.5190% ( 12) 00:14:54.037 18.584 - 18.679: 99.5801% ( 8) 00:14:54.037 18.679 - 18.773: 99.6412% ( 8) 00:14:54.037 18.773 - 18.868: 99.7099% ( 9) 00:14:54.037 18.868 - 18.963: 99.7481% ( 5) 00:14:54.037 18.963 - 19.058: 99.7710% ( 3) 00:14:54.037 19.058 - 19.153: 99.7862% ( 2) 00:14:54.037 19.153 - 19.247: 99.8015% ( 2) 00:14:54.037 19.247 - 19.342: 99.8091% ( 1) 00:14:54.037 19.342 - 19.437: 99.8320% ( 3) 00:14:54.037 19.437 - 19.532: 99.8397% ( 1) 00:14:54.037 19.532 - 19.627: 99.8473% ( 1) 00:14:54.037 19.627 - 19.721: 99.8550% ( 1) 00:14:54.037 19.721 - 19.816: 99.8626% ( 1) 00:14:54.037 21.807 - 21.902: 99.8702% ( 1) 00:14:54.037 22.092 - 22.187: 99.8779% ( 1) 00:14:54.037 22.850 - 22.945: 99.8855% ( 1) 00:14:54.037 24.652 - 24.841: 99.8931% ( 1) 00:14:54.037 28.444 - 28.634: 99.9008% ( 1) 00:14:54.037 3980.705 - 4004.978: 99.9542% ( 7) 00:14:54.037 4004.978 - 4029.250: 99.9924% ( 5) 00:14:54.037 5000.154 - 5024.427: 100.0000% ( 1) 00:14:54.037 00:14:54.037 Complete histogram 00:14:54.037 ================== 00:14:54.037 Range in us Cumulative Count 00:14:54.037 2.062 - 2.074: 0.0076% ( 1) 00:14:54.037 2.074 - 2.086: 7.8785% ( 1031) 00:14:54.037 2.086 - 2.098: 32.1170% ( 3175) 00:14:54.037 2.098 - 2.110: 35.7737% ( 479) 00:14:54.037 2.110 - 2.121: 48.0113% ( 1603) 00:14:54.037 2.121 - 2.133: 59.5542% ( 1512) 00:14:54.037 2.133 - 2.145: 61.5085% ( 256) 00:14:54.037 2.145 - 2.157: 67.1578% ( 740) 00:14:54.037 2.157 - 2.169: 72.8987% ( 752) 00:14:54.038 2.169 - 2.181: 73.9675% ( 140) 00:14:54.038 2.181 - 2.193: 78.5403% ( 599) 00:14:54.038 2.193 - 2.204: 81.7849% ( 425) 00:14:54.038 2.204 - 2.216: 82.4872% ( 92) 00:14:54.038 2.216 - 2.228: 84.9836% ( 327) 00:14:54.038 2.228 - 2.240: 88.9152% ( 515) 00:14:54.038 2.240 - 2.252: 90.5336% ( 212) 00:14:54.038 2.252 - 2.264: 92.2360% ( 223) 00:14:54.038 2.264 - 2.276: 93.7171% ( 194) 00:14:54.038 2.276 - 2.287: 94.0453% ( 43) 00:14:54.038 2.287 - 2.299: 94.3507% ( 40) 00:14:54.038 2.299 - 2.311: 94.8240% ( 62) 00:14:54.038 2.311 - 2.323: 95.4577% ( 83) 00:14:54.038 2.323 - 2.335: 95.5569% ( 13) 00:14:54.038 2.335 - 2.347: 95.5874% ( 4) 00:14:54.038 2.347 - 2.359: 95.6256% ( 5) 00:14:54.038 2.359 - 2.370: 95.7478% ( 16) 00:14:54.038 2.370 - 2.382: 95.9463% ( 26) 00:14:54.038 2.382 - 2.394: 96.5646% ( 81) 00:14:54.038 2.394 - 2.406: 97.0303% ( 61) 00:14:54.038 2.406 - 2.418: 97.2899% ( 34) 00:14:54.038 2.418 - 2.430: 97.5265% ( 31) 00:14:54.038 2.430 - 2.441: 97.7097% ( 24) 00:14:54.038 2.441 - 2.453: 97.8777% ( 22) 
00:14:54.038 2.453 - 2.465: 97.9998% ( 16) 00:14:54.038 2.465 - 2.477: 98.1678% ( 22) 00:14:54.038 2.477 - 2.489: 98.2441% ( 10) 00:14:54.038 2.489 - 2.501: 98.2823% ( 5) 00:14:54.038 2.501 - 2.513: 98.3205% ( 5) 00:14:54.038 2.513 - 2.524: 98.3663% ( 6) 00:14:54.038 2.524 - 2.536: 98.3739% ( 1) 00:14:54.038 2.536 - 2.548: 98.4197% ( 6) 00:14:54.038 2.548 - 2.560: 98.4350% ( 2) 00:14:54.038 2.560 - 2.572: 98.4503% ( 2) 00:14:54.038 2.572 - 2.584: 98.4579% ( 1) 00:14:54.038 2.584 - 2.596: 98.4655% ( 1) 00:14:54.038 2.619 - 2.631: 98.4732% ( 1) 00:14:54.038 2.679 - 2.690: 98.4808% ( 1) [2024-07-25 10:03:39.176127] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:54.328 00:14:54.328 2.738 - 2.750: 98.4884% ( 1) 00:14:54.328 2.761 - 2.773: 98.4961% ( 1) 00:14:54.328 2.797 - 2.809: 98.5037% ( 1) 00:14:54.328 3.200 - 3.224: 98.5113% ( 1) 00:14:54.328 3.295 - 3.319: 98.5342% ( 3) 00:14:54.328 3.319 - 3.342: 98.5419% ( 1) 00:14:54.328 3.342 - 3.366: 98.5571% ( 2) 00:14:54.328 3.366 - 3.390: 98.5724% ( 2) 00:14:54.328 3.390 - 3.413: 98.5953% ( 3) 00:14:54.328 3.413 - 3.437: 98.6029% ( 1) 00:14:54.328 3.461 - 3.484: 98.6106% ( 1) 00:14:54.328 3.484 - 3.508: 98.6182% ( 1) 00:14:54.328 3.532 - 3.556: 98.6258% ( 1) 00:14:54.328 3.556 - 3.579: 98.6335% ( 1) 00:14:54.328 3.579 - 3.603: 98.6411% ( 1) 00:14:54.328 3.627 - 3.650: 98.6488% ( 1) 00:14:54.328 3.674 - 3.698: 98.6564% ( 1) 00:14:54.328 3.935 - 3.959: 98.6640% ( 1) 00:14:54.328 4.053 - 4.077: 98.6717% ( 1) 00:14:54.328 4.456 - 4.480: 98.6793% ( 1) 00:14:54.328 5.120 - 5.144: 98.6869% ( 1) 00:14:54.328 5.784 - 5.807: 98.6946% ( 1) 00:14:54.328 6.068 - 6.116: 98.7022% ( 1) 00:14:54.328 6.210 - 6.258: 98.7175% ( 2) 00:14:54.328 6.258 - 6.305: 98.7251% ( 1) 00:14:54.328 6.305 - 6.353: 98.7327% ( 1) 00:14:54.328 6.353 - 6.400: 98.7404% ( 1) 00:14:54.328 6.400 - 6.447: 98.7556% ( 2) 00:14:54.328 6.637 - 6.684: 98.7633% ( 1) 00:14:54.328 6.969 - 7.016: 98.7709% ( 1) 00:14:54.328 7.206 - 7.253: 98.7785% ( 1) 00:14:54.328 7.348 - 7.396: 98.7862% ( 1) 00:14:54.328 7.396 - 7.443: 98.7938% ( 1) 00:14:54.328 7.633 - 7.680: 98.8014% ( 1) 00:14:54.328 7.964 - 8.012: 98.8091% ( 1) 00:14:54.328 8.391 - 8.439: 98.8167% ( 1) 00:14:54.328 8.676 - 8.723: 98.8243% ( 1) 00:14:54.328 9.197 - 9.244: 98.8320% ( 1) 00:14:54.328 15.455 - 15.550: 98.8396% ( 1) 00:14:54.328 15.550 - 15.644: 98.8472% ( 1) 00:14:54.328 15.739 - 15.834: 98.8701% ( 3) 00:14:54.328 15.929 - 16.024: 98.8930% ( 3) 00:14:54.328 16.024 - 16.119: 98.9389% ( 6) 00:14:54.328 16.119 - 16.213: 98.9618% ( 3) 00:14:54.328 16.213 - 16.308: 98.9770% ( 2) 00:14:54.328 16.308 - 16.403: 99.0228% ( 6) 00:14:54.328 16.403 - 16.498: 99.0763% ( 7) 00:14:54.328 16.498 - 16.593: 99.0915% ( 2) 00:14:54.328 16.593 - 16.687: 99.1602% ( 9) 00:14:54.328 16.687 - 16.782: 99.1831% ( 3) 00:14:54.328 16.782 - 16.877: 99.2519% ( 9) 00:14:54.328 16.877 - 16.972: 99.2595% ( 1) 00:14:54.328 16.972 - 17.067: 99.2671% ( 1) 00:14:54.328 17.067 - 17.161: 99.2824% ( 2) 00:14:54.328 17.256 - 17.351: 99.3129% ( 4) 00:14:54.328 17.351 - 17.446: 99.3435% ( 4) 00:14:54.328 17.541 - 17.636: 99.3511% ( 1) 00:14:54.328 17.636 - 17.730: 99.3587% ( 1) 00:14:54.328 18.015 - 18.110: 99.3740% ( 2) 00:14:54.328 18.110 - 18.204: 99.3893% ( 2) 00:14:54.328 18.204 - 18.299: 99.3969% ( 1) 00:14:54.328 18.489 - 18.584: 99.4045% ( 1) 00:14:54.328 18.773 - 18.868: 99.4122% ( 1) 00:14:54.328 18.963 - 19.058: 99.4198% ( 1) 00:14:54.328 24.652 - 24.841: 99.4274% ( 1) 00:14:54.328 3592.344 - 3616.616:
99.4351% ( 1) 00:14:54.328 3810.797 - 3835.070: 99.4427% ( 1) 00:14:54.328 3980.705 - 4004.978: 99.7404% ( 39) 00:14:54.328 4004.978 - 4029.250: 100.0000% ( 34) 00:14:54.328 00:14:54.328 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:14:54.328 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:14:54.328 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:14:54.328 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:14:54.328 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:14:54.588 [ 00:14:54.588 { 00:14:54.588 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:14:54.588 "subtype": "Discovery", 00:14:54.588 "listen_addresses": [], 00:14:54.588 "allow_any_host": true, 00:14:54.588 "hosts": [] 00:14:54.588 }, 00:14:54.588 { 00:14:54.588 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:14:54.588 "subtype": "NVMe", 00:14:54.588 "listen_addresses": [ 00:14:54.588 { 00:14:54.588 "trtype": "VFIOUSER", 00:14:54.588 "adrfam": "IPv4", 00:14:54.588 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:14:54.588 "trsvcid": "0" 00:14:54.588 } 00:14:54.588 ], 00:14:54.588 "allow_any_host": true, 00:14:54.588 "hosts": [], 00:14:54.588 "serial_number": "SPDK1", 00:14:54.588 "model_number": "SPDK bdev Controller", 00:14:54.588 "max_namespaces": 32, 00:14:54.588 "min_cntlid": 1, 00:14:54.588 "max_cntlid": 65519, 00:14:54.588 "namespaces": [ 00:14:54.588 { 00:14:54.588 "nsid": 1, 00:14:54.588 "bdev_name": "Malloc1", 00:14:54.588 "name": "Malloc1", 00:14:54.588 "nguid": "752FA51337B04413BA97648DC6F7037A", 00:14:54.588 "uuid": "752fa513-37b0-4413-ba97-648dc6f7037a" 00:14:54.588 } 00:14:54.588 ] 00:14:54.588 }, 00:14:54.588 { 00:14:54.588 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:14:54.588 "subtype": "NVMe", 00:14:54.588 "listen_addresses": [ 00:14:54.588 { 00:14:54.588 "trtype": "VFIOUSER", 00:14:54.588 "adrfam": "IPv4", 00:14:54.588 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:14:54.588 "trsvcid": "0" 00:14:54.588 } 00:14:54.588 ], 00:14:54.588 "allow_any_host": true, 00:14:54.588 "hosts": [], 00:14:54.588 "serial_number": "SPDK2", 00:14:54.588 "model_number": "SPDK bdev Controller", 00:14:54.588 "max_namespaces": 32, 00:14:54.588 "min_cntlid": 1, 00:14:54.588 "max_cntlid": 65519, 00:14:54.588 "namespaces": [ 00:14:54.588 { 00:14:54.588 "nsid": 1, 00:14:54.588 "bdev_name": "Malloc2", 00:14:54.588 "name": "Malloc2", 00:14:54.588 "nguid": "981F597049EF4194A20563170EF3C631", 00:14:54.588 "uuid": "981f5970-49ef-4194-a205-63170ef3c631" 00:14:54.588 } 00:14:54.588 ] 00:14:54.588 } 00:14:54.588 ] 00:14:54.588 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:14:54.588 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=417524 00:14:54.588 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 
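
The aer helper just launched was given -t /tmp/aer_touch_file; it creates that file once its asynchronous-event callback is registered, and the script then blocks in waitforfile before touching the subsystem, so the namespace change cannot race ahead of the listener. A minimal sketch of that synchronization pattern (my own loop, not the actual autotest_common.sh implementation):

waitforfile() {                       # block until the helper creates the file
    local file=$1 i=0
    while [ ! -e "$file" ]; do
        sleep 0.1
        (( ++i > 200 )) && { echo "timeout waiting for $file" >&2; return 1; }
    done
}
waitforfile /tmp/aer_touch_file       # same path passed to the aer tool via -t
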
00:14:54.588 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:14:54.588 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:14:54.588 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:14:54.588 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:14:54.588 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:14:54.588 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:14:54.588 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:14:54.588 EAL: No free 2048 kB hugepages reported on node 1 00:14:54.588 [2024-07-25 10:03:39.691910] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:54.846 Malloc3 00:14:54.846 10:03:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:14:55.411 [2024-07-25 10:03:40.286165] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:55.411 10:03:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:14:55.411 Asynchronous Event Request test 00:14:55.411 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:14:55.411 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:14:55.411 Registering asynchronous event callbacks... 00:14:55.411 Starting namespace attribute notice tests for all controllers... 00:14:55.411 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:14:55.411 aer_cb - Changed Namespace 00:14:55.411 Cleaning up... 
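
The namespace hot-add that triggered the "Changed Namespace" callback above is driven entirely over the target's RPC socket: create a malloc bdev, attach it to the subsystem as a second namespace (which raises the namespace-attribute AEN), then list the subsystems to confirm. Condensed from the @40/@41/@42 steps of this script:

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
NQN=nqn.2019-07.io.spdk:cnode1
"$RPC" bdev_malloc_create 64 512 --name Malloc3   # 64 MB bdev with 512-byte blocks
"$RPC" nvmf_subsystem_add_ns "$NQN" Malloc3 -n 2  # expose as NSID 2; fires the AEN
"$RPC" nvmf_get_subsystems                        # listing now shows Malloc3 as nsid 2
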
00:14:55.411 [ 00:14:55.411 { 00:14:55.411 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:14:55.411 "subtype": "Discovery", 00:14:55.411 "listen_addresses": [], 00:14:55.411 "allow_any_host": true, 00:14:55.411 "hosts": [] 00:14:55.411 }, 00:14:55.411 { 00:14:55.411 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:14:55.411 "subtype": "NVMe", 00:14:55.411 "listen_addresses": [ 00:14:55.411 { 00:14:55.411 "trtype": "VFIOUSER", 00:14:55.411 "adrfam": "IPv4", 00:14:55.411 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:14:55.411 "trsvcid": "0" 00:14:55.411 } 00:14:55.411 ], 00:14:55.411 "allow_any_host": true, 00:14:55.411 "hosts": [], 00:14:55.411 "serial_number": "SPDK1", 00:14:55.411 "model_number": "SPDK bdev Controller", 00:14:55.411 "max_namespaces": 32, 00:14:55.411 "min_cntlid": 1, 00:14:55.411 "max_cntlid": 65519, 00:14:55.411 "namespaces": [ 00:14:55.411 { 00:14:55.411 "nsid": 1, 00:14:55.411 "bdev_name": "Malloc1", 00:14:55.411 "name": "Malloc1", 00:14:55.411 "nguid": "752FA51337B04413BA97648DC6F7037A", 00:14:55.411 "uuid": "752fa513-37b0-4413-ba97-648dc6f7037a" 00:14:55.411 }, 00:14:55.411 { 00:14:55.411 "nsid": 2, 00:14:55.411 "bdev_name": "Malloc3", 00:14:55.411 "name": "Malloc3", 00:14:55.411 "nguid": "E640B5F5082349A6BAB4B37C3DEABC2B", 00:14:55.411 "uuid": "e640b5f5-0823-49a6-bab4-b37c3deabc2b" 00:14:55.411 } 00:14:55.411 ] 00:14:55.411 }, 00:14:55.411 { 00:14:55.411 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:14:55.411 "subtype": "NVMe", 00:14:55.411 "listen_addresses": [ 00:14:55.411 { 00:14:55.411 "trtype": "VFIOUSER", 00:14:55.411 "adrfam": "IPv4", 00:14:55.411 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:14:55.411 "trsvcid": "0" 00:14:55.411 } 00:14:55.411 ], 00:14:55.411 "allow_any_host": true, 00:14:55.411 "hosts": [], 00:14:55.411 "serial_number": "SPDK2", 00:14:55.411 "model_number": "SPDK bdev Controller", 00:14:55.411 "max_namespaces": 32, 00:14:55.411 "min_cntlid": 1, 00:14:55.411 "max_cntlid": 65519, 00:14:55.411 "namespaces": [ 00:14:55.411 { 00:14:55.411 "nsid": 1, 00:14:55.411 "bdev_name": "Malloc2", 00:14:55.411 "name": "Malloc2", 00:14:55.411 "nguid": "981F597049EF4194A20563170EF3C631", 00:14:55.411 "uuid": "981f5970-49ef-4194-a205-63170ef3c631" 00:14:55.411 } 00:14:55.411 ] 00:14:55.411 } 00:14:55.411 ] 00:14:55.671 10:03:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 417524 00:14:55.671 10:03:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:55.671 10:03:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:14:55.671 10:03:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:14:55.671 10:03:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:14:55.671 [2024-07-25 10:03:40.598003] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
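
Every tool in this sequence receives the same style of -r transport-ID string, and its fields map one-to-one onto the listen_addresses entries in the JSON above: trtype VFIOUSER, traddr pointing at the per-controller socket directory, plus the subsystem NQN. If one wanted to derive the string for cnode2 from the RPC output rather than hard-coding it, something like the following would work (the python3 one-liner is my assumption of a convenient parser, not part of the test scripts):

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
TRID=$("$RPC" nvmf_get_subsystems | python3 -c '
import json, sys
subs = json.load(sys.stdin)                       # rpc.py prints a JSON array
s = next(x for x in subs if x["nqn"] == "nqn.2019-07.io.spdk:cnode2")
la = s["listen_addresses"][0]
print("trtype:%s traddr:%s subnqn:%s" % (la["trtype"], la["traddr"], s["nqn"]))')
echo "$TRID"   # trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2
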
00:14:55.671 [2024-07-25 10:03:40.598039] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid417597 ] 00:14:55.671 EAL: No free 2048 kB hugepages reported on node 1 00:14:55.671 [2024-07-25 10:03:40.632676] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:14:55.671 [2024-07-25 10:03:40.640761] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:55.671 [2024-07-25 10:03:40.640792] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f7ecb61b000 00:14:55.671 [2024-07-25 10:03:40.641762] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:55.671 [2024-07-25 10:03:40.642765] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:55.671 [2024-07-25 10:03:40.643789] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:55.671 [2024-07-25 10:03:40.644812] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:55.671 [2024-07-25 10:03:40.647444] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:55.671 [2024-07-25 10:03:40.647834] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:55.671 [2024-07-25 10:03:40.648830] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:55.671 [2024-07-25 10:03:40.649838] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:55.671 [2024-07-25 10:03:40.650843] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:55.671 [2024-07-25 10:03:40.650870] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f7ecb610000 00:14:55.671 [2024-07-25 10:03:40.651985] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:14:55.671 [2024-07-25 10:03:40.670662] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:14:55.671 [2024-07-25 10:03:40.670696] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:14:55.671 [2024-07-25 10:03:40.672799] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:14:55.671 [2024-07-25 10:03:40.672854] nvme_pcie_common.c: 133:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:14:55.671 [2024-07-25 10:03:40.672941] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to 
wait for connect adminq (no timeout) 00:14:55.671 [2024-07-25 10:03:40.672962] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:14:55.671 [2024-07-25 10:03:40.672973] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:14:55.671 [2024-07-25 10:03:40.673811] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:14:55.671 [2024-07-25 10:03:40.673837] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:14:55.671 [2024-07-25 10:03:40.673851] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:14:55.671 [2024-07-25 10:03:40.674810] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:14:55.671 [2024-07-25 10:03:40.674830] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:14:55.671 [2024-07-25 10:03:40.674844] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:14:55.671 [2024-07-25 10:03:40.675813] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:14:55.671 [2024-07-25 10:03:40.675833] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:14:55.671 [2024-07-25 10:03:40.676819] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:14:55.671 [2024-07-25 10:03:40.676844] nvme_ctrlr.c:3873:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:14:55.671 [2024-07-25 10:03:40.676854] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:14:55.671 [2024-07-25 10:03:40.676865] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:14:55.671 [2024-07-25 10:03:40.676974] nvme_ctrlr.c:4066:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:14:55.672 [2024-07-25 10:03:40.676982] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:14:55.672 [2024-07-25 10:03:40.676990] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:14:55.672 [2024-07-25 10:03:40.677827] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:14:55.672 [2024-07-25 10:03:40.678833] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:14:55.672 [2024-07-25 10:03:40.679840] nvme_vfio_user.c: 
49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:14:55.672 [2024-07-25 10:03:40.680841] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:55.672 [2024-07-25 10:03:40.680914] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:14:55.672 [2024-07-25 10:03:40.681877] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:14:55.672 [2024-07-25 10:03:40.681898] nvme_ctrlr.c:3908:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:14:55.672 [2024-07-25 10:03:40.681907] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:14:55.672 [2024-07-25 10:03:40.681931] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:14:55.672 [2024-07-25 10:03:40.681944] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:14:55.672 [2024-07-25 10:03:40.681964] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:55.672 [2024-07-25 10:03:40.681974] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:55.672 [2024-07-25 10:03:40.681980] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:55.672 [2024-07-25 10:03:40.681996] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:55.672 [2024-07-25 10:03:40.688461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:14:55.672 [2024-07-25 10:03:40.688484] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:14:55.672 [2024-07-25 10:03:40.688493] nvme_ctrlr.c:2061:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:14:55.672 [2024-07-25 10:03:40.688500] nvme_ctrlr.c:2064:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:14:55.672 [2024-07-25 10:03:40.688508] nvme_ctrlr.c:2075:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:14:55.672 [2024-07-25 10:03:40.688516] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:14:55.672 [2024-07-25 10:03:40.688527] nvme_ctrlr.c:2103:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:14:55.672 [2024-07-25 10:03:40.688536] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:14:55.672 [2024-07-25 10:03:40.688548] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:14:55.672 [2024-07-25 10:03:40.688567] 
nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:14:55.672 [2024-07-25 10:03:40.696454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:14:55.672 [2024-07-25 10:03:40.696485] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:55.672 [2024-07-25 10:03:40.696500] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:55.672 [2024-07-25 10:03:40.696512] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:55.672 [2024-07-25 10:03:40.696524] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:55.672 [2024-07-25 10:03:40.696533] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:14:55.672 [2024-07-25 10:03:40.696550] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:14:55.672 [2024-07-25 10:03:40.696564] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:14:55.672 [2024-07-25 10:03:40.704447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:14:55.672 [2024-07-25 10:03:40.704465] nvme_ctrlr.c:3014:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:14:55.672 [2024-07-25 10:03:40.704475] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:14:55.672 [2024-07-25 10:03:40.704491] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:14:55.672 [2024-07-25 10:03:40.704501] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:14:55.672 [2024-07-25 10:03:40.704515] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:55.672 [2024-07-25 10:03:40.712438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:14:55.672 [2024-07-25 10:03:40.712512] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:14:55.672 [2024-07-25 10:03:40.712530] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:14:55.672 [2024-07-25 10:03:40.712544] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:14:55.672 [2024-07-25 10:03:40.712552] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:14:55.672 [2024-07-25 
10:03:40.712558] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:55.672 [2024-07-25 10:03:40.712568] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:14:55.672 [2024-07-25 10:03:40.720442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:14:55.672 [2024-07-25 10:03:40.720466] nvme_ctrlr.c:4697:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:14:55.672 [2024-07-25 10:03:40.720486] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:14:55.672 [2024-07-25 10:03:40.720500] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:14:55.672 [2024-07-25 10:03:40.720514] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:55.672 [2024-07-25 10:03:40.720522] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:55.672 [2024-07-25 10:03:40.720528] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:55.672 [2024-07-25 10:03:40.720538] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:55.672 [2024-07-25 10:03:40.728443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:14:55.672 [2024-07-25 10:03:40.728470] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:14:55.672 [2024-07-25 10:03:40.728487] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:14:55.672 [2024-07-25 10:03:40.728500] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:55.672 [2024-07-25 10:03:40.728509] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:55.672 [2024-07-25 10:03:40.728515] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:55.672 [2024-07-25 10:03:40.728525] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:55.672 [2024-07-25 10:03:40.736441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:14:55.672 [2024-07-25 10:03:40.736462] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:14:55.672 [2024-07-25 10:03:40.736475] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:14:55.672 [2024-07-25 10:03:40.736490] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:14:55.672 [2024-07-25 
10:03:40.736503] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host behavior support feature (timeout 30000 ms) 00:14:55.672 [2024-07-25 10:03:40.736512] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:14:55.672 [2024-07-25 10:03:40.736521] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:14:55.672 [2024-07-25 10:03:40.736529] nvme_ctrlr.c:3114:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:14:55.672 [2024-07-25 10:03:40.736537] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:14:55.672 [2024-07-25 10:03:40.736545] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:14:55.672 [2024-07-25 10:03:40.736572] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:14:55.672 [2024-07-25 10:03:40.744439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:14:55.672 [2024-07-25 10:03:40.744465] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:14:55.672 [2024-07-25 10:03:40.752438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:14:55.672 [2024-07-25 10:03:40.752463] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:14:55.672 [2024-07-25 10:03:40.760456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:14:55.672 [2024-07-25 10:03:40.760480] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:55.673 [2024-07-25 10:03:40.768442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:14:55.673 [2024-07-25 10:03:40.768473] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:14:55.673 [2024-07-25 10:03:40.768484] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:14:55.673 [2024-07-25 10:03:40.768491] nvme_pcie_common.c:1239:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:14:55.673 [2024-07-25 10:03:40.768497] nvme_pcie_common.c:1255:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:14:55.673 [2024-07-25 10:03:40.768503] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:14:55.673 [2024-07-25 10:03:40.768512] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:14:55.673 [2024-07-25 10:03:40.768524] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:14:55.673 [2024-07-25 10:03:40.768532] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: 
*DEBUG*: prp1 = 0x2000002fc000 00:14:55.673 [2024-07-25 10:03:40.768538] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:55.673 [2024-07-25 10:03:40.768547] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:14:55.673 [2024-07-25 10:03:40.768558] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:14:55.673 [2024-07-25 10:03:40.768566] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:55.673 [2024-07-25 10:03:40.768571] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:55.673 [2024-07-25 10:03:40.768580] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:55.673 [2024-07-25 10:03:40.768592] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:14:55.673 [2024-07-25 10:03:40.768600] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:14:55.673 [2024-07-25 10:03:40.768605] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:55.673 [2024-07-25 10:03:40.768614] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:14:55.673 [2024-07-25 10:03:40.776456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:14:55.673 [2024-07-25 10:03:40.776485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:14:55.673 [2024-07-25 10:03:40.776504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:14:55.673 [2024-07-25 10:03:40.776520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:14:55.673 ===================================================== 00:14:55.673 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:14:55.673 ===================================================== 00:14:55.673 Controller Capabilities/Features 00:14:55.673 ================================ 00:14:55.673 Vendor ID: 4e58 00:14:55.673 Subsystem Vendor ID: 4e58 00:14:55.673 Serial Number: SPDK2 00:14:55.673 Model Number: SPDK bdev Controller 00:14:55.673 Firmware Version: 24.09 00:14:55.673 Recommended Arb Burst: 6 00:14:55.673 IEEE OUI Identifier: 8d 6b 50 00:14:55.673 Multi-path I/O 00:14:55.673 May have multiple subsystem ports: Yes 00:14:55.673 May have multiple controllers: Yes 00:14:55.673 Associated with SR-IOV VF: No 00:14:55.673 Max Data Transfer Size: 131072 00:14:55.673 Max Number of Namespaces: 32 00:14:55.673 Max Number of I/O Queues: 127 00:14:55.673 NVMe Specification Version (VS): 1.3 00:14:55.673 NVMe Specification Version (Identify): 1.3 00:14:55.673 Maximum Queue Entries: 256 00:14:55.673 Contiguous Queues Required: Yes 00:14:55.673 Arbitration Mechanisms Supported 00:14:55.673 Weighted Round Robin: Not Supported 00:14:55.673 Vendor Specific: Not Supported 00:14:55.673 Reset Timeout: 15000 ms 00:14:55.673 Doorbell Stride: 4 
bytes 00:14:55.673 NVM Subsystem Reset: Not Supported 00:14:55.673 Command Sets Supported 00:14:55.673 NVM Command Set: Supported 00:14:55.673 Boot Partition: Not Supported 00:14:55.673 Memory Page Size Minimum: 4096 bytes 00:14:55.673 Memory Page Size Maximum: 4096 bytes 00:14:55.673 Persistent Memory Region: Not Supported 00:14:55.673 Optional Asynchronous Events Supported 00:14:55.673 Namespace Attribute Notices: Supported 00:14:55.673 Firmware Activation Notices: Not Supported 00:14:55.673 ANA Change Notices: Not Supported 00:14:55.673 PLE Aggregate Log Change Notices: Not Supported 00:14:55.673 LBA Status Info Alert Notices: Not Supported 00:14:55.673 EGE Aggregate Log Change Notices: Not Supported 00:14:55.673 Normal NVM Subsystem Shutdown event: Not Supported 00:14:55.673 Zone Descriptor Change Notices: Not Supported 00:14:55.673 Discovery Log Change Notices: Not Supported 00:14:55.673 Controller Attributes 00:14:55.673 128-bit Host Identifier: Supported 00:14:55.673 Non-Operational Permissive Mode: Not Supported 00:14:55.673 NVM Sets: Not Supported 00:14:55.673 Read Recovery Levels: Not Supported 00:14:55.673 Endurance Groups: Not Supported 00:14:55.673 Predictable Latency Mode: Not Supported 00:14:55.673 Traffic Based Keep ALive: Not Supported 00:14:55.673 Namespace Granularity: Not Supported 00:14:55.673 SQ Associations: Not Supported 00:14:55.673 UUID List: Not Supported 00:14:55.673 Multi-Domain Subsystem: Not Supported 00:14:55.673 Fixed Capacity Management: Not Supported 00:14:55.673 Variable Capacity Management: Not Supported 00:14:55.673 Delete Endurance Group: Not Supported 00:14:55.673 Delete NVM Set: Not Supported 00:14:55.673 Extended LBA Formats Supported: Not Supported 00:14:55.673 Flexible Data Placement Supported: Not Supported 00:14:55.673 00:14:55.673 Controller Memory Buffer Support 00:14:55.673 ================================ 00:14:55.673 Supported: No 00:14:55.673 00:14:55.673 Persistent Memory Region Support 00:14:55.673 ================================ 00:14:55.673 Supported: No 00:14:55.673 00:14:55.673 Admin Command Set Attributes 00:14:55.673 ============================ 00:14:55.673 Security Send/Receive: Not Supported 00:14:55.673 Format NVM: Not Supported 00:14:55.673 Firmware Activate/Download: Not Supported 00:14:55.673 Namespace Management: Not Supported 00:14:55.673 Device Self-Test: Not Supported 00:14:55.673 Directives: Not Supported 00:14:55.673 NVMe-MI: Not Supported 00:14:55.673 Virtualization Management: Not Supported 00:14:55.673 Doorbell Buffer Config: Not Supported 00:14:55.673 Get LBA Status Capability: Not Supported 00:14:55.673 Command & Feature Lockdown Capability: Not Supported 00:14:55.673 Abort Command Limit: 4 00:14:55.673 Async Event Request Limit: 4 00:14:55.673 Number of Firmware Slots: N/A 00:14:55.673 Firmware Slot 1 Read-Only: N/A 00:14:55.673 Firmware Activation Without Reset: N/A 00:14:55.673 Multiple Update Detection Support: N/A 00:14:55.673 Firmware Update Granularity: No Information Provided 00:14:55.673 Per-Namespace SMART Log: No 00:14:55.673 Asymmetric Namespace Access Log Page: Not Supported 00:14:55.673 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:14:55.673 Command Effects Log Page: Supported 00:14:55.673 Get Log Page Extended Data: Supported 00:14:55.673 Telemetry Log Pages: Not Supported 00:14:55.673 Persistent Event Log Pages: Not Supported 00:14:55.673 Supported Log Pages Log Page: May Support 00:14:55.673 Commands Supported & Effects Log Page: Not Supported 00:14:55.673 Feature Identifiers & Effects Log 
Page:May Support 00:14:55.673 NVMe-MI Commands & Effects Log Page: May Support 00:14:55.673 Data Area 4 for Telemetry Log: Not Supported 00:14:55.673 Error Log Page Entries Supported: 128 00:14:55.673 Keep Alive: Supported 00:14:55.673 Keep Alive Granularity: 10000 ms 00:14:55.673 00:14:55.673 NVM Command Set Attributes 00:14:55.673 ========================== 00:14:55.673 Submission Queue Entry Size 00:14:55.673 Max: 64 00:14:55.673 Min: 64 00:14:55.673 Completion Queue Entry Size 00:14:55.673 Max: 16 00:14:55.673 Min: 16 00:14:55.673 Number of Namespaces: 32 00:14:55.673 Compare Command: Supported 00:14:55.673 Write Uncorrectable Command: Not Supported 00:14:55.673 Dataset Management Command: Supported 00:14:55.673 Write Zeroes Command: Supported 00:14:55.673 Set Features Save Field: Not Supported 00:14:55.673 Reservations: Not Supported 00:14:55.673 Timestamp: Not Supported 00:14:55.673 Copy: Supported 00:14:55.673 Volatile Write Cache: Present 00:14:55.673 Atomic Write Unit (Normal): 1 00:14:55.673 Atomic Write Unit (PFail): 1 00:14:55.673 Atomic Compare & Write Unit: 1 00:14:55.673 Fused Compare & Write: Supported 00:14:55.673 Scatter-Gather List 00:14:55.673 SGL Command Set: Supported (Dword aligned) 00:14:55.673 SGL Keyed: Not Supported 00:14:55.673 SGL Bit Bucket Descriptor: Not Supported 00:14:55.673 SGL Metadata Pointer: Not Supported 00:14:55.673 Oversized SGL: Not Supported 00:14:55.673 SGL Metadata Address: Not Supported 00:14:55.673 SGL Offset: Not Supported 00:14:55.673 Transport SGL Data Block: Not Supported 00:14:55.673 Replay Protected Memory Block: Not Supported 00:14:55.673 00:14:55.673 Firmware Slot Information 00:14:55.674 ========================= 00:14:55.674 Active slot: 1 00:14:55.674 Slot 1 Firmware Revision: 24.09 00:14:55.674 00:14:55.674 00:14:55.674 Commands Supported and Effects 00:14:55.674 ============================== 00:14:55.674 Admin Commands 00:14:55.674 -------------- 00:14:55.674 Get Log Page (02h): Supported 00:14:55.674 Identify (06h): Supported 00:14:55.674 Abort (08h): Supported 00:14:55.674 Set Features (09h): Supported 00:14:55.674 Get Features (0Ah): Supported 00:14:55.674 Asynchronous Event Request (0Ch): Supported 00:14:55.674 Keep Alive (18h): Supported 00:14:55.674 I/O Commands 00:14:55.674 ------------ 00:14:55.674 Flush (00h): Supported LBA-Change 00:14:55.674 Write (01h): Supported LBA-Change 00:14:55.674 Read (02h): Supported 00:14:55.674 Compare (05h): Supported 00:14:55.674 Write Zeroes (08h): Supported LBA-Change 00:14:55.674 Dataset Management (09h): Supported LBA-Change 00:14:55.674 Copy (19h): Supported LBA-Change 00:14:55.674 00:14:55.674 Error Log 00:14:55.674 ========= 00:14:55.674 00:14:55.674 Arbitration 00:14:55.674 =========== 00:14:55.674 Arbitration Burst: 1 00:14:55.674 00:14:55.674 Power Management 00:14:55.674 ================ 00:14:55.674 Number of Power States: 1 00:14:55.674 Current Power State: Power State #0 00:14:55.674 Power State #0: 00:14:55.674 Max Power: 0.00 W 00:14:55.674 Non-Operational State: Operational 00:14:55.674 Entry Latency: Not Reported 00:14:55.674 Exit Latency: Not Reported 00:14:55.674 Relative Read Throughput: 0 00:14:55.674 Relative Read Latency: 0 00:14:55.674 Relative Write Throughput: 0 00:14:55.674 Relative Write Latency: 0 00:14:55.674 Idle Power: Not Reported 00:14:55.674 Active Power: Not Reported 00:14:55.674 Non-Operational Permissive Mode: Not Supported 00:14:55.674 00:14:55.674 Health Information 00:14:55.674 ================== 00:14:55.674 Critical Warnings: 00:14:55.674 
Available Spare Space: OK 00:14:55.674 Temperature: OK 00:14:55.674 Device Reliability: OK 00:14:55.674 Read Only: No 00:14:55.674 Volatile Memory Backup: OK 00:14:55.674 Current Temperature: 0 Kelvin (-273 Celsius) 00:14:55.674 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:14:55.674 Available Spare: 0% 00:14:55.674 Available Spare Threshold: 0% 00:14:55.674 Life Percentage Used: 0% 00:14:55.674 Data Units Read: 0 00:14:55.674 Data Units Written: 0 00:14:55.674 Host Read Commands: 0 00:14:55.674 Host Write Commands: 0 00:14:55.674 Controller Busy Time: 0 minutes 00:14:55.674 Power Cycles: 0 00:14:55.674 Power On Hours: 0 hours 00:14:55.674 Unsafe Shutdowns: 0 00:14:55.674 Unrecoverable Media Errors: 0 00:14:55.674 Lifetime Error Log Entries: 0 00:14:55.674 Warning Temperature Time: 0 minutes 00:14:55.674 Critical Temperature Time: 0 minutes 00:14:55.674 [2024-07-25 10:03:40.776636] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:14:55.674 [2024-07-25 10:03:40.784456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:14:55.674 [2024-07-25 10:03:40.784503] nvme_ctrlr.c:4361:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:14:55.674 [2024-07-25 10:03:40.784521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:55.674 [2024-07-25 10:03:40.784532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:55.674 [2024-07-25 10:03:40.784542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:55.674 [2024-07-25 10:03:40.784552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:55.674 [2024-07-25 10:03:40.784638] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:14:55.674 [2024-07-25 10:03:40.784660] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:14:55.674 [2024-07-25 10:03:40.785637] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:55.674 [2024-07-25 10:03:40.785718] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:14:55.674 [2024-07-25 10:03:40.785760] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:14:55.674 [2024-07-25 10:03:40.786647] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:14:55.674 [2024-07-25 10:03:40.786672] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:14:55.674 [2024-07-25 10:03:40.786740] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:14:55.674 [2024-07-25 10:03:40.787925] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:14:55.674 
00:14:55.674 Number of Queues 00:14:55.674 ================ 00:14:55.674 Number of I/O Submission Queues: 127 00:14:55.674 Number of I/O Completion Queues: 127 00:14:55.674 00:14:55.674 Active Namespaces 00:14:55.674 ================= 00:14:55.674 Namespace ID:1 00:14:55.674 Error Recovery Timeout: Unlimited 00:14:55.674 Command Set Identifier: NVM (00h) 00:14:55.674 Deallocate: Supported 00:14:55.674 Deallocated/Unwritten Error: Not Supported 00:14:55.674 Deallocated Read Value: Unknown 00:14:55.674 Deallocate in Write Zeroes: Not Supported 00:14:55.674 Deallocated Guard Field: 0xFFFF 00:14:55.674 Flush: Supported 00:14:55.674 Reservation: Supported 00:14:55.674 Namespace Sharing Capabilities: Multiple Controllers 00:14:55.674 Size (in LBAs): 131072 (0GiB) 00:14:55.674 Capacity (in LBAs): 131072 (0GiB) 00:14:55.674 Utilization (in LBAs): 131072 (0GiB) 00:14:55.674 NGUID: 981F597049EF4194A20563170EF3C631 00:14:55.674 UUID: 981f5970-49ef-4194-a205-63170ef3c631 00:14:55.674 Thin Provisioning: Not Supported 00:14:55.674 Per-NS Atomic Units: Yes 00:14:55.674 Atomic Boundary Size (Normal): 0 00:14:55.674 Atomic Boundary Size (PFail): 0 00:14:55.674 Atomic Boundary Offset: 0 00:14:55.674 Maximum Single Source Range Length: 65535 00:14:55.674 Maximum Copy Length: 65535 00:14:55.674 Maximum Source Range Count: 1 00:14:55.674 NGUID/EUI64 Never Reused: No 00:14:55.674 Namespace Write Protected: No 00:14:55.674 Number of LBA Formats: 1 00:14:55.674 Current LBA Format: LBA Format #00 00:14:55.674 LBA Format #00: Data Size: 512 Metadata Size: 0 00:14:55.674 00:14:55.674 10:03:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:14:55.931 EAL: No free 2048 kB hugepages reported on node 1 00:14:55.931 [2024-07-25 10:03:41.019075] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:01.190 Initializing NVMe Controllers 00:15:01.190 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:01.190 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:15:01.190 Initialization complete. Launching workers. 
00:15:01.190 ======================================================== 00:15:01.190 Latency(us) 00:15:01.190 Device Information : IOPS MiB/s Average min max 00:15:01.190 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 34241.97 133.76 3737.42 1162.82 8294.42 00:15:01.190 ======================================================== 00:15:01.190 Total : 34241.97 133.76 3737.42 1162.82 8294.42 00:15:01.190 00:15:01.190 [2024-07-25 10:03:46.120799] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:01.190 10:03:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:15:01.190 EAL: No free 2048 kB hugepages reported on node 1 00:15:01.447 [2024-07-25 10:03:46.390577] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:06.704 Initializing NVMe Controllers 00:15:06.704 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:06.704 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:15:06.704 Initialization complete. Launching workers. 00:15:06.704 ======================================================== 00:15:06.704 Latency(us) 00:15:06.704 Device Information : IOPS MiB/s Average min max 00:15:06.704 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 31563.29 123.29 4054.51 1198.66 7720.36 00:15:06.704 ======================================================== 00:15:06.704 Total : 31563.29 123.29 4054.51 1198.66 7720.36 00:15:06.704 00:15:06.704 [2024-07-25 10:03:51.415978] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:06.704 10:03:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:15:06.704 EAL: No free 2048 kB hugepages reported on node 1 00:15:06.704 [2024-07-25 10:03:51.633269] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:11.964 [2024-07-25 10:03:56.767579] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:11.964 Initializing NVMe Controllers 00:15:11.964 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:11.964 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:11.964 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:15:11.964 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:15:11.964 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:15:11.964 Initialization complete. Launching workers. 
00:15:11.964 Starting thread on core 2 00:15:11.964 Starting thread on core 3 00:15:11.964 Starting thread on core 1 00:15:11.964 10:03:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:15:11.964 EAL: No free 2048 kB hugepages reported on node 1 00:15:11.964 [2024-07-25 10:03:57.095928] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:16.148 [2024-07-25 10:04:00.894714] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:16.148 Initializing NVMe Controllers 00:15:16.148 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:16.148 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:16.148 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:15:16.148 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:15:16.148 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:15:16.148 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:15:16.148 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:15:16.148 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:15:16.148 Initialization complete. Launching workers. 00:15:16.148 Starting thread on core 1 with urgent priority queue 00:15:16.148 Starting thread on core 2 with urgent priority queue 00:15:16.148 Starting thread on core 3 with urgent priority queue 00:15:16.148 Starting thread on core 0 with urgent priority queue 00:15:16.148 SPDK bdev Controller (SPDK2 ) core 0: 1499.67 IO/s 66.68 secs/100000 ios 00:15:16.148 SPDK bdev Controller (SPDK2 ) core 1: 1375.33 IO/s 72.71 secs/100000 ios 00:15:16.148 SPDK bdev Controller (SPDK2 ) core 2: 1589.33 IO/s 62.92 secs/100000 ios 00:15:16.148 SPDK bdev Controller (SPDK2 ) core 3: 1202.00 IO/s 83.19 secs/100000 ios 00:15:16.148 ======================================================== 00:15:16.148 00:15:16.148 10:04:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:15:16.148 EAL: No free 2048 kB hugepages reported on node 1 00:15:16.148 [2024-07-25 10:04:01.203946] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:16.148 Initializing NVMe Controllers 00:15:16.148 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:16.148 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:16.148 Namespace ID: 1 size: 0GB 00:15:16.148 Initialization complete. 00:15:16.148 INFO: using host memory buffer for IO 00:15:16.148 Hello world! 
00:15:16.148 [2024-07-25 10:04:01.214013] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:16.148 10:04:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:15:16.148 EAL: No free 2048 kB hugepages reported on node 1 00:15:16.406 [2024-07-25 10:04:01.512319] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:17.778 Initializing NVMe Controllers 00:15:17.778 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:17.778 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:17.778 Initialization complete. Launching workers. 00:15:17.778 submit (in ns) avg, min, max = 6581.3, 3495.6, 4016588.9 00:15:17.778 complete (in ns) avg, min, max = 28885.9, 2072.2, 4017403.3 00:15:17.778 00:15:17.778 Submit histogram 00:15:17.778 ================ 00:15:17.778 Range in us Cumulative Count 00:15:17.778 3.484 - 3.508: 0.0075% ( 1) 00:15:17.778 3.508 - 3.532: 0.0973% ( 12) 00:15:17.778 3.532 - 3.556: 1.1522% ( 141) 00:15:17.778 3.556 - 3.579: 3.3892% ( 299) 00:15:17.778 3.579 - 3.603: 8.5067% ( 684) 00:15:17.778 3.603 - 3.627: 15.5469% ( 941) 00:15:17.778 3.627 - 3.650: 26.9864% ( 1529) 00:15:17.778 3.650 - 3.674: 36.8397% ( 1317) 00:15:17.778 3.674 - 3.698: 45.5035% ( 1158) 00:15:17.778 3.698 - 3.721: 52.1547% ( 889) 00:15:17.778 3.721 - 3.745: 57.6313% ( 732) 00:15:17.778 3.745 - 3.769: 62.3223% ( 627) 00:15:17.778 3.769 - 3.793: 66.7589% ( 593) 00:15:17.778 3.793 - 3.816: 70.2454% ( 466) 00:15:17.778 3.816 - 3.840: 73.1109% ( 383) 00:15:17.778 3.840 - 3.864: 76.1858% ( 411) 00:15:17.778 3.864 - 3.887: 79.8219% ( 486) 00:15:17.778 3.887 - 3.911: 83.2485% ( 458) 00:15:17.778 3.911 - 3.935: 85.9494% ( 361) 00:15:17.778 3.935 - 3.959: 87.8124% ( 249) 00:15:17.778 3.959 - 3.982: 89.6828% ( 250) 00:15:17.778 3.982 - 4.006: 91.3512% ( 223) 00:15:17.778 4.006 - 4.030: 92.7353% ( 185) 00:15:17.778 4.030 - 4.053: 93.8950% ( 155) 00:15:17.778 4.053 - 4.077: 94.7179% ( 110) 00:15:17.778 4.077 - 4.101: 95.4586% ( 99) 00:15:17.778 4.101 - 4.124: 95.9674% ( 68) 00:15:17.778 4.124 - 4.148: 96.3714% ( 54) 00:15:17.778 4.148 - 4.172: 96.7081% ( 45) 00:15:17.778 4.172 - 4.196: 96.8427% ( 18) 00:15:17.778 4.196 - 4.219: 96.9924% ( 20) 00:15:17.778 4.219 - 4.243: 97.0747% ( 11) 00:15:17.778 4.243 - 4.267: 97.1495% ( 10) 00:15:17.778 4.267 - 4.290: 97.2093% ( 8) 00:15:17.778 4.290 - 4.314: 97.2916% ( 11) 00:15:17.778 4.314 - 4.338: 97.3814% ( 12) 00:15:17.778 4.338 - 4.361: 97.4712% ( 12) 00:15:17.778 4.361 - 4.385: 97.5236% ( 7) 00:15:17.778 4.385 - 4.409: 97.5759% ( 7) 00:15:17.778 4.409 - 4.433: 97.6059% ( 4) 00:15:17.778 4.456 - 4.480: 97.6358% ( 4) 00:15:17.778 4.551 - 4.575: 97.6582% ( 3) 00:15:17.778 4.599 - 4.622: 97.6732% ( 2) 00:15:17.778 4.717 - 4.741: 97.6807% ( 1) 00:15:17.778 4.741 - 4.764: 97.6882% ( 1) 00:15:17.778 4.764 - 4.788: 97.6956% ( 1) 00:15:17.778 4.788 - 4.812: 97.7181% ( 3) 00:15:17.778 4.812 - 4.836: 97.7480% ( 4) 00:15:17.778 4.836 - 4.859: 97.7929% ( 6) 00:15:17.778 4.859 - 4.883: 97.8378% ( 6) 00:15:17.778 4.883 - 4.907: 97.9126% ( 10) 00:15:17.778 4.907 - 4.930: 97.9949% ( 11) 00:15:17.778 4.930 - 4.954: 98.0922% ( 13) 00:15:17.778 4.954 - 4.978: 98.1445% ( 7) 00:15:17.778 4.978 - 5.001: 98.2343% ( 12) 00:15:17.778 5.001 - 5.025: 98.2643% ( 4) 
00:15:17.778 5.025 - 5.049: 98.3017% ( 5) 00:15:17.778 5.049 - 5.073: 98.3466% ( 6) 00:15:17.778 5.073 - 5.096: 98.3765% ( 4) 00:15:17.778 5.096 - 5.120: 98.3914% ( 2) 00:15:17.778 5.120 - 5.144: 98.4288% ( 5) 00:15:17.778 5.144 - 5.167: 98.4438% ( 2) 00:15:17.778 5.167 - 5.191: 98.4513% ( 1) 00:15:17.778 5.191 - 5.215: 98.4962% ( 6) 00:15:17.778 5.215 - 5.239: 98.5037% ( 1) 00:15:17.778 5.239 - 5.262: 98.5261% ( 3) 00:15:17.778 5.262 - 5.286: 98.5336% ( 1) 00:15:17.778 5.310 - 5.333: 98.5411% ( 1) 00:15:17.778 5.333 - 5.357: 98.5486% ( 1) 00:15:17.778 5.381 - 5.404: 98.5560% ( 1) 00:15:17.778 5.428 - 5.452: 98.5635% ( 1) 00:15:17.778 5.452 - 5.476: 98.5710% ( 1) 00:15:17.778 5.760 - 5.784: 98.5785% ( 1) 00:15:17.778 6.021 - 6.044: 98.5860% ( 1) 00:15:17.778 6.068 - 6.116: 98.5934% ( 1) 00:15:17.778 6.353 - 6.400: 98.6009% ( 1) 00:15:17.778 6.542 - 6.590: 98.6084% ( 1) 00:15:17.778 6.590 - 6.637: 98.6159% ( 1) 00:15:17.778 6.684 - 6.732: 98.6234% ( 1) 00:15:17.778 6.732 - 6.779: 98.6309% ( 1) 00:15:17.778 6.779 - 6.827: 98.6383% ( 1) 00:15:17.778 7.301 - 7.348: 98.6608% ( 3) 00:15:17.778 7.348 - 7.396: 98.6683% ( 1) 00:15:17.778 7.538 - 7.585: 98.6757% ( 1) 00:15:17.778 7.870 - 7.917: 98.6832% ( 1) 00:15:17.778 7.917 - 7.964: 98.6907% ( 1) 00:15:17.778 8.059 - 8.107: 98.6982% ( 1) 00:15:17.778 8.107 - 8.154: 98.7057% ( 1) 00:15:17.778 8.154 - 8.201: 98.7132% ( 1) 00:15:17.778 8.201 - 8.249: 98.7206% ( 1) 00:15:17.778 8.296 - 8.344: 98.7281% ( 1) 00:15:17.778 8.344 - 8.391: 98.7356% ( 1) 00:15:17.778 8.486 - 8.533: 98.7431% ( 1) 00:15:17.778 8.581 - 8.628: 98.7506% ( 1) 00:15:17.778 8.628 - 8.676: 98.7580% ( 1) 00:15:17.778 8.676 - 8.723: 98.7655% ( 1) 00:15:17.778 8.723 - 8.770: 98.7805% ( 2) 00:15:17.778 8.770 - 8.818: 98.7880% ( 1) 00:15:17.778 8.865 - 8.913: 98.7955% ( 1) 00:15:17.778 8.913 - 8.960: 98.8029% ( 1) 00:15:17.778 9.007 - 9.055: 98.8179% ( 2) 00:15:17.778 9.055 - 9.102: 98.8254% ( 1) 00:15:17.778 9.150 - 9.197: 98.8329% ( 1) 00:15:17.778 9.197 - 9.244: 98.8553% ( 3) 00:15:17.778 9.387 - 9.434: 98.8777% ( 3) 00:15:17.778 9.434 - 9.481: 98.8852% ( 1) 00:15:17.778 9.624 - 9.671: 98.8927% ( 1) 00:15:17.778 9.671 - 9.719: 98.9002% ( 1) 00:15:17.778 10.003 - 10.050: 98.9152% ( 2) 00:15:17.778 10.145 - 10.193: 98.9226% ( 1) 00:15:17.778 10.240 - 10.287: 98.9301% ( 1) 00:15:17.778 10.287 - 10.335: 98.9376% ( 1) 00:15:17.778 10.524 - 10.572: 98.9526% ( 2) 00:15:17.778 10.667 - 10.714: 98.9600% ( 1) 00:15:17.778 10.856 - 10.904: 98.9675% ( 1) 00:15:17.778 11.046 - 11.093: 98.9750% ( 1) 00:15:17.778 11.567 - 11.615: 98.9900% ( 2) 00:15:17.778 11.662 - 11.710: 98.9975% ( 1) 00:15:17.778 12.041 - 12.089: 99.0124% ( 2) 00:15:17.778 12.136 - 12.231: 99.0199% ( 1) 00:15:17.778 12.326 - 12.421: 99.0349% ( 2) 00:15:17.778 12.610 - 12.705: 99.0423% ( 1) 00:15:17.778 12.705 - 12.800: 99.0498% ( 1) 00:15:17.778 12.990 - 13.084: 99.0573% ( 1) 00:15:17.778 13.274 - 13.369: 99.0723% ( 2) 00:15:17.778 13.653 - 13.748: 99.0798% ( 1) 00:15:17.778 13.748 - 13.843: 99.0872% ( 1) 00:15:17.778 13.843 - 13.938: 99.0947% ( 1) 00:15:17.778 13.938 - 14.033: 99.1022% ( 1) 00:15:17.778 14.127 - 14.222: 99.1097% ( 1) 00:15:17.778 14.412 - 14.507: 99.1172% ( 1) 00:15:17.778 14.696 - 14.791: 99.1246% ( 1) 00:15:17.778 17.161 - 17.256: 99.1321% ( 1) 00:15:17.778 17.256 - 17.351: 99.1396% ( 1) 00:15:17.778 17.351 - 17.446: 99.1621% ( 3) 00:15:17.778 17.446 - 17.541: 99.1770% ( 2) 00:15:17.778 17.541 - 17.636: 99.1920% ( 2) 00:15:17.778 17.636 - 17.730: 99.2369% ( 6) 00:15:17.778 17.730 - 17.825: 99.2892% ( 7) 
00:15:17.778 17.825 - 17.920: 99.3566% ( 9) 00:15:17.778 17.920 - 18.015: 99.4314% ( 10) 00:15:17.778 18.015 - 18.110: 99.4763% ( 6) 00:15:17.778 18.110 - 18.204: 99.5212% ( 6) 00:15:17.778 18.204 - 18.299: 99.5810% ( 8) 00:15:17.778 18.299 - 18.394: 99.6484% ( 9) 00:15:17.778 18.394 - 18.489: 99.7307% ( 11) 00:15:17.778 18.489 - 18.584: 99.7980% ( 9) 00:15:17.778 18.679 - 18.773: 99.8204% ( 3) 00:15:17.779 18.773 - 18.868: 99.8429% ( 3) 00:15:17.779 18.963 - 19.058: 99.8504% ( 1) 00:15:17.779 19.058 - 19.153: 99.8653% ( 2) 00:15:17.779 19.247 - 19.342: 99.8803% ( 2) 00:15:17.779 19.342 - 19.437: 99.8878% ( 1) 00:15:17.779 19.437 - 19.532: 99.8953% ( 1) 00:15:17.779 20.859 - 20.954: 99.9027% ( 1) 00:15:17.779 22.376 - 22.471: 99.9102% ( 1) 00:15:17.779 22.566 - 22.661: 99.9177% ( 1) 00:15:17.779 23.609 - 23.704: 99.9252% ( 1) 00:15:17.779 27.307 - 27.496: 99.9327% ( 1) 00:15:17.779 3616.616 - 3640.889: 99.9401% ( 1) 00:15:17.779 3980.705 - 4004.978: 99.9925% ( 7) 00:15:17.779 4004.978 - 4029.250: 100.0000% ( 1) 00:15:17.779 00:15:17.779 Complete histogram 00:15:17.779 ================== 00:15:17.779 Range in us Cumulative Count 00:15:17.779 2.062 - 2.074: 0.0150% ( 2) 00:15:17.779 2.074 - 2.086: 9.7561% ( 1302) 00:15:17.779 2.086 - 2.098: 35.7325% ( 3472) 00:15:17.779 2.098 - 2.110: 39.8549% ( 551) 00:15:17.779 2.110 - 2.121: 50.6210% ( 1439) 00:15:17.779 2.121 - 2.133: 60.2125% ( 1282) 00:15:17.779 2.133 - 2.145: 62.5019% ( 306) 00:15:17.779 2.145 - 2.157: 70.3501% ( 1049) 00:15:17.779 2.157 - 2.169: 77.8543% ( 1003) 00:15:17.779 2.169 - 2.181: 79.3805% ( 204) 00:15:17.779 2.181 - 2.193: 85.1638% ( 773) 00:15:17.779 2.193 - 2.204: 88.7102% ( 474) 00:15:17.779 2.204 - 2.216: 89.5855% ( 117) 00:15:17.779 2.216 - 2.228: 90.6928% ( 148) 00:15:17.779 2.228 - 2.240: 92.3463% ( 221) 00:15:17.779 2.240 - 2.252: 94.1493% ( 241) 00:15:17.779 2.252 - 2.264: 94.9648% ( 109) 00:15:17.779 2.264 - 2.276: 95.2865% ( 43) 00:15:17.779 2.276 - 2.287: 95.4586% ( 23) 00:15:17.779 2.287 - 2.299: 95.5858% ( 17) 00:15:17.779 2.299 - 2.311: 95.7055% ( 16) 00:15:17.779 2.311 - 2.323: 95.9898% ( 38) 00:15:17.779 2.323 - 2.335: 96.0796% ( 12) 00:15:17.779 2.335 - 2.347: 96.0871% ( 1) 00:15:17.779 2.347 - 2.359: 96.1245% ( 5) 00:15:17.779 2.359 - 2.370: 96.2517% ( 17) 00:15:17.779 2.370 - 2.382: 96.5360% ( 38) 00:15:17.779 2.382 - 2.394: 96.9250% ( 52) 00:15:17.779 2.394 - 2.406: 97.2168% ( 39) 00:15:17.779 2.406 - 2.418: 97.4562% ( 32) 00:15:17.779 2.418 - 2.430: 97.6882% ( 31) 00:15:17.779 2.430 - 2.441: 97.8453% ( 21) 00:15:17.779 2.441 - 2.453: 97.9650% ( 16) 00:15:17.779 2.453 - 2.465: 98.0323% ( 9) 00:15:17.779 2.465 - 2.477: 98.1146% ( 11) 00:15:17.779 2.477 - 2.489: 98.1820% ( 9) 00:15:17.779 2.489 - 2.501: 98.2343% ( 7) 00:15:17.779 2.501 - 2.513: 98.2867% ( 7) 00:15:17.779 2.513 - 2.524: 98.2942% ( 1) 00:15:17.779 2.524 - 2.536: 98.3017% ( 1) 00:15:17.779 2.536 - 2.548: 98.3166% ( 2) 00:15:17.779 2.548 - 2.560: 98.3241% ( 1) 00:15:17.779 2.560 - 2.572: 98.3316% ( 1) 00:15:17.779 2.596 - 2.607: 98.3466% ( 2) 00:15:17.779 2.607 - 2.619: 98.3615% ( 2) 00:15:17.779 2.619 - 2.631: 98.3690% ( 1) 00:15:17.779 2.631 - 2.643: 98.3840% ( 2) 00:15:17.779 2.702 - 2.714: 98.3914% ( 1) 00:15:17.779 2.726 - 2.738: 98.4064% ( 2) 00:15:17.779 2.951 - 2.963: 98.4139% ( 1) 00:15:17.779 3.508 - 3.532: 98.4214% ( 1) 00:15:17.779 3.556 - 3.579: 98.4438% ( 3) 00:15:17.779 3.579 - 3.603: 98.4588% ( 2) 00:15:17.779 3.603 - 3.627: 98.4812% ( 3) 00:15:17.779 3.650 - 3.674: 98.4887% ( 1) 00:15:17.779 3.674 - 3.698: 98.5111% ( 3) 
00:15:17.779 3.698 - 3.721: 98.5186% ( 1) 00:15:17.779 3.745 - 3.769: 98.5261% ( 1) 00:15:17.779 3.793 - 3.816: 98.5336% ( 1) 00:15:17.779 3.864 - 3.887: 98.5486% ( 2) 00:15:17.779 3.911 - 3.935: 98.5560% ( 1) 00:15:17.779 3.935 - 3.959: 98.5635% ( 1) 00:15:17.779 3.959 - 3.982: 98.5710% ( 1) 00:15:17.779 4.101 - 4.124: 98.5785% ( 1) 00:15:17.779 4.124 - 4.148: 98.5860% ( 1) 00:15:17.779 4.267 - 4.290: 98.5934% ( 1) 00:15:17.779 4.290 - 4.314: 98.6009% ( 1) 00:15:17.779 5.713 - 5.736: 98.6084% ( 1) 00:15:17.779 5.736 - 5.760: 98.6159% ( 1) 00:15:17.779 5.831 - 5.855: 98.6234% ( 1) 00:15:17.779 5.926 - 5.950: 98.6309% ( 1) 00:15:17.779 6.305 - 6.353: 98.6383% ( 1) 00:15:17.779 6.400 - 6.447: 98.6458% ( 1) 00:15:17.779 6.447 - 6.495: 98.6533% ( 1) 00:15:17.779 6.684 - 6.732: 98.6608% ( 1) 00:15:17.779 6.779 - 6.827: 98.6683% ( 1) 00:15:17.779 6.827 - 6.874: 98.6757% ( 1) 00:15:17.779 6.921 - 6.969: 98.6907% ( 2) 00:15:17.779 7.064 - 7.111: 98.6982% ( 1) 00:15:17.779 7.111 - 7.159: 98.7057% ( 1) 00:15:17.779 7.348 - 7.396: 98.7206% ( 2) 00:15:17.779 7.396 - 7.443: 98.7281% ( 1) 00:15:17.779 7.775 - 7.822: 98.7356% ( 1) 00:15:17.779 7.917 - 7.964: 98.7431% ( 1) 00:15:17.779 8.107 - 8.154: 98.7506% ( 1) 00:15:17.779 8.154 - 8.201: 98.7580% ( 1) 00:15:17.779 8.201 - 8.249: 98.7655% ( 1) 00:15:17.779 8.249 - 8.296: 98.7730% ( 1) 00:15:17.779 8.628 - 8.676: 98.7805% ( 1) 00:15:17.779 10.050 - 10.098: 9[2024-07-25 10:04:02.612301] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:17.779 8.7880% ( 1) 00:15:17.779 15.360 - 15.455: 98.7955% ( 1) 00:15:17.779 15.644 - 15.739: 98.8029% ( 1) 00:15:17.779 15.739 - 15.834: 98.8104% ( 1) 00:15:17.779 15.834 - 15.929: 98.8254% ( 2) 00:15:17.779 15.929 - 16.024: 98.8927% ( 9) 00:15:17.779 16.024 - 16.119: 98.9152% ( 3) 00:15:17.779 16.119 - 16.213: 98.9600% ( 6) 00:15:17.779 16.213 - 16.308: 98.9975% ( 5) 00:15:17.779 16.308 - 16.403: 99.0498% ( 7) 00:15:17.779 16.403 - 16.498: 99.0798% ( 4) 00:15:17.779 16.498 - 16.593: 99.0947% ( 2) 00:15:17.779 16.593 - 16.687: 99.1321% ( 5) 00:15:17.779 16.687 - 16.782: 99.1845% ( 7) 00:15:17.779 16.782 - 16.877: 99.2294% ( 6) 00:15:17.779 16.877 - 16.972: 99.2593% ( 4) 00:15:17.779 16.972 - 17.067: 99.2743% ( 2) 00:15:17.779 17.067 - 17.161: 99.2967% ( 3) 00:15:17.779 17.161 - 17.256: 99.3042% ( 1) 00:15:17.779 17.256 - 17.351: 99.3192% ( 2) 00:15:17.779 17.351 - 17.446: 99.3266% ( 1) 00:15:17.779 17.920 - 18.015: 99.3341% ( 1) 00:15:17.779 3980.705 - 4004.978: 99.7830% ( 60) 00:15:17.779 4004.978 - 4029.250: 100.0000% ( 29) 00:15:17.779 00:15:17.779 10:04:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:15:17.779 10:04:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:15:17.779 10:04:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:15:17.779 10:04:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:15:17.779 10:04:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:18.100 [ 00:15:18.100 { 00:15:18.100 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:18.100 "subtype": "Discovery", 00:15:18.100 "listen_addresses": 
[], 00:15:18.100 "allow_any_host": true, 00:15:18.100 "hosts": [] 00:15:18.100 }, 00:15:18.100 { 00:15:18.100 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:18.100 "subtype": "NVMe", 00:15:18.100 "listen_addresses": [ 00:15:18.100 { 00:15:18.100 "trtype": "VFIOUSER", 00:15:18.101 "adrfam": "IPv4", 00:15:18.101 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:18.101 "trsvcid": "0" 00:15:18.101 } 00:15:18.101 ], 00:15:18.101 "allow_any_host": true, 00:15:18.101 "hosts": [], 00:15:18.101 "serial_number": "SPDK1", 00:15:18.101 "model_number": "SPDK bdev Controller", 00:15:18.101 "max_namespaces": 32, 00:15:18.101 "min_cntlid": 1, 00:15:18.101 "max_cntlid": 65519, 00:15:18.101 "namespaces": [ 00:15:18.101 { 00:15:18.101 "nsid": 1, 00:15:18.101 "bdev_name": "Malloc1", 00:15:18.101 "name": "Malloc1", 00:15:18.101 "nguid": "752FA51337B04413BA97648DC6F7037A", 00:15:18.101 "uuid": "752fa513-37b0-4413-ba97-648dc6f7037a" 00:15:18.101 }, 00:15:18.101 { 00:15:18.101 "nsid": 2, 00:15:18.101 "bdev_name": "Malloc3", 00:15:18.101 "name": "Malloc3", 00:15:18.101 "nguid": "E640B5F5082349A6BAB4B37C3DEABC2B", 00:15:18.101 "uuid": "e640b5f5-0823-49a6-bab4-b37c3deabc2b" 00:15:18.101 } 00:15:18.101 ] 00:15:18.101 }, 00:15:18.101 { 00:15:18.101 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:18.101 "subtype": "NVMe", 00:15:18.101 "listen_addresses": [ 00:15:18.101 { 00:15:18.101 "trtype": "VFIOUSER", 00:15:18.101 "adrfam": "IPv4", 00:15:18.101 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:18.101 "trsvcid": "0" 00:15:18.101 } 00:15:18.101 ], 00:15:18.101 "allow_any_host": true, 00:15:18.101 "hosts": [], 00:15:18.101 "serial_number": "SPDK2", 00:15:18.101 "model_number": "SPDK bdev Controller", 00:15:18.101 "max_namespaces": 32, 00:15:18.101 "min_cntlid": 1, 00:15:18.101 "max_cntlid": 65519, 00:15:18.101 "namespaces": [ 00:15:18.101 { 00:15:18.101 "nsid": 1, 00:15:18.101 "bdev_name": "Malloc2", 00:15:18.101 "name": "Malloc2", 00:15:18.101 "nguid": "981F597049EF4194A20563170EF3C631", 00:15:18.101 "uuid": "981f5970-49ef-4194-a205-63170ef3c631" 00:15:18.101 } 00:15:18.101 ] 00:15:18.101 } 00:15:18.101 ] 00:15:18.101 10:04:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:15:18.101 10:04:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=420234 00:15:18.101 10:04:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:15:18.101 10:04:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:15:18.101 10:04:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:15:18.101 10:04:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:18.101 10:04:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:15:18.101 10:04:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:15:18.101 10:04:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:15:18.101 10:04:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:15:18.101 EAL: No free 2048 kB hugepages reported on node 1 00:15:18.359 [2024-07-25 10:04:03.324963] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:18.359 Malloc4 00:15:18.359 10:04:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:15:18.924 [2024-07-25 10:04:03.924490] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:18.924 10:04:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:18.924 Asynchronous Event Request test 00:15:18.924 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:18.924 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:18.924 Registering asynchronous event callbacks... 00:15:18.924 Starting namespace attribute notice tests for all controllers... 00:15:18.924 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:15:18.924 aer_cb - Changed Namespace 00:15:18.924 Cleaning up... 00:15:19.489 [ 00:15:19.489 { 00:15:19.489 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:19.489 "subtype": "Discovery", 00:15:19.489 "listen_addresses": [], 00:15:19.489 "allow_any_host": true, 00:15:19.489 "hosts": [] 00:15:19.489 }, 00:15:19.489 { 00:15:19.489 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:19.489 "subtype": "NVMe", 00:15:19.489 "listen_addresses": [ 00:15:19.489 { 00:15:19.489 "trtype": "VFIOUSER", 00:15:19.489 "adrfam": "IPv4", 00:15:19.489 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:19.489 "trsvcid": "0" 00:15:19.489 } 00:15:19.489 ], 00:15:19.489 "allow_any_host": true, 00:15:19.489 "hosts": [], 00:15:19.489 "serial_number": "SPDK1", 00:15:19.489 "model_number": "SPDK bdev Controller", 00:15:19.489 "max_namespaces": 32, 00:15:19.489 "min_cntlid": 1, 00:15:19.489 "max_cntlid": 65519, 00:15:19.489 "namespaces": [ 00:15:19.489 { 00:15:19.489 "nsid": 1, 00:15:19.489 "bdev_name": "Malloc1", 00:15:19.489 "name": "Malloc1", 00:15:19.489 "nguid": "752FA51337B04413BA97648DC6F7037A", 00:15:19.489 "uuid": "752fa513-37b0-4413-ba97-648dc6f7037a" 00:15:19.489 }, 00:15:19.489 { 00:15:19.489 "nsid": 2, 00:15:19.489 "bdev_name": "Malloc3", 00:15:19.489 "name": "Malloc3", 00:15:19.489 "nguid": "E640B5F5082349A6BAB4B37C3DEABC2B", 00:15:19.489 "uuid": "e640b5f5-0823-49a6-bab4-b37c3deabc2b" 00:15:19.489 } 00:15:19.489 ] 00:15:19.489 }, 00:15:19.489 { 00:15:19.489 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:19.489 "subtype": "NVMe", 00:15:19.489 "listen_addresses": [ 00:15:19.489 { 00:15:19.489 "trtype": "VFIOUSER", 00:15:19.489 "adrfam": "IPv4", 00:15:19.489 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:19.489 "trsvcid": "0" 00:15:19.489 } 00:15:19.489 ], 00:15:19.489 "allow_any_host": true, 00:15:19.489 "hosts": [], 00:15:19.489 
"serial_number": "SPDK2", 00:15:19.489 "model_number": "SPDK bdev Controller", 00:15:19.489 "max_namespaces": 32, 00:15:19.489 "min_cntlid": 1, 00:15:19.489 "max_cntlid": 65519, 00:15:19.489 "namespaces": [ 00:15:19.489 { 00:15:19.489 "nsid": 1, 00:15:19.489 "bdev_name": "Malloc2", 00:15:19.489 "name": "Malloc2", 00:15:19.489 "nguid": "981F597049EF4194A20563170EF3C631", 00:15:19.489 "uuid": "981f5970-49ef-4194-a205-63170ef3c631" 00:15:19.489 }, 00:15:19.489 { 00:15:19.489 "nsid": 2, 00:15:19.489 "bdev_name": "Malloc4", 00:15:19.489 "name": "Malloc4", 00:15:19.489 "nguid": "3DCBE9CA6991494BB4C76B62D612DBF8", 00:15:19.489 "uuid": "3dcbe9ca-6991-494b-b4c7-6b62d612dbf8" 00:15:19.489 } 00:15:19.489 ] 00:15:19.489 } 00:15:19.489 ] 00:15:19.489 10:04:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 420234 00:15:19.489 10:04:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:15:19.489 10:04:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 414499 00:15:19.489 10:04:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@950 -- # '[' -z 414499 ']' 00:15:19.489 10:04:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # kill -0 414499 00:15:19.489 10:04:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # uname 00:15:19.489 10:04:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:19.489 10:04:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 414499 00:15:19.489 10:04:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:19.489 10:04:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:19.489 10:04:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@968 -- # echo 'killing process with pid 414499' 00:15:19.489 killing process with pid 414499 00:15:19.489 10:04:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@969 -- # kill 414499 00:15:19.489 10:04:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@974 -- # wait 414499 00:15:20.055 10:04:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:15:20.055 10:04:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:15:20.055 10:04:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:15:20.055 10:04:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:15:20.055 10:04:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:15:20.055 10:04:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=420432 00:15:20.055 10:04:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 420432' 00:15:20.055 10:04:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:15:20.055 Process pid: 420432 00:15:20.055 10:04:04 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:20.055 10:04:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 420432 00:15:20.055 10:04:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@831 -- # '[' -z 420432 ']' 00:15:20.055 10:04:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:20.055 10:04:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:20.055 10:04:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:20.055 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:20.055 10:04:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:20.055 10:04:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:15:20.055 [2024-07-25 10:04:04.980911] thread.c:2948:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:15:20.055 [2024-07-25 10:04:04.982149] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:15:20.055 [2024-07-25 10:04:04.982227] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:20.055 EAL: No free 2048 kB hugepages reported on node 1 00:15:20.055 [2024-07-25 10:04:05.051905] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:20.055 [2024-07-25 10:04:05.173956] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:20.055 [2024-07-25 10:04:05.174032] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:20.055 [2024-07-25 10:04:05.174048] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:20.055 [2024-07-25 10:04:05.174061] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:20.055 [2024-07-25 10:04:05.174073] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:20.055 [2024-07-25 10:04:05.174144] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:20.055 [2024-07-25 10:04:05.174211] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:20.055 [2024-07-25 10:04:05.174299] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:15:20.055 [2024-07-25 10:04:05.174302] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:20.312 [2024-07-25 10:04:05.285183] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:15:20.312 [2024-07-25 10:04:05.285439] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:15:20.312 [2024-07-25 10:04:05.285724] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
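Note: the harness has torn down the first target and is now repeating the vfio-user flow in interrupt mode. The target was relaunched above with --interrupt-mode, and the VFIOUSER transport is created just below with the extra -M -I transport arguments. Stripped of the xtrace noise, that bring-up is, in sketch form (the long workspace prefix is shortened here for readability):

    build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode   # relaunch the target with interrupt mode enabled
    scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I              # transport_args forwarded verbatim by setup_nvmf_vfio_user

The surrounding "Set spdk_thread (...) to intr mode" notices are app_thread plus the four nvmf_tgt poll groups, one per core in the [0,1,2,3] mask, confirming the mode switch.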
00:15:20.312 [2024-07-25 10:04:05.286385] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:15:20.312 [2024-07-25 10:04:05.286670] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:15:20.312 10:04:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:20.312 10:04:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # return 0 00:15:20.312 10:04:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:15:21.244 10:04:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:15:21.502 10:04:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:15:21.502 10:04:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:15:21.502 10:04:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:21.502 10:04:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:15:21.502 10:04:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:15:22.068 Malloc1 00:15:22.068 10:04:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:15:22.635 10:04:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:15:23.200 10:04:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:15:23.457 10:04:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:23.457 10:04:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:15:23.457 10:04:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:15:23.715 Malloc2 00:15:23.715 10:04:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:15:23.972 10:04:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:15:24.536 10:04:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 
-s 0 00:15:24.794 10:04:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:15:24.794 10:04:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 420432 00:15:24.794 10:04:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@950 -- # '[' -z 420432 ']' 00:15:24.794 10:04:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # kill -0 420432 00:15:24.794 10:04:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # uname 00:15:24.794 10:04:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:24.794 10:04:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 420432 00:15:24.794 10:04:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:24.794 10:04:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:24.794 10:04:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@968 -- # echo 'killing process with pid 420432' 00:15:24.794 killing process with pid 420432 00:15:24.794 10:04:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@969 -- # kill 420432 00:15:24.794 10:04:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@974 -- # wait 420432 00:15:25.052 10:04:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:15:25.052 10:04:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:15:25.052 00:15:25.052 real 0m56.846s 00:15:25.052 user 3m44.050s 00:15:25.052 sys 0m5.001s 00:15:25.052 10:04:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:25.052 10:04:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:15:25.052 ************************************ 00:15:25.052 END TEST nvmf_vfio_user 00:15:25.052 ************************************ 00:15:25.052 10:04:10 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:15:25.052 10:04:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:15:25.052 10:04:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:25.052 10:04:10 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:25.052 ************************************ 00:15:25.052 START TEST nvmf_vfio_user_nvme_compliance 00:15:25.052 ************************************ 00:15:25.052 10:04:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:15:25.310 * Looking for test storage... 
00:15:25.310 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:15:25.310 10:04:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:25.310 10:04:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:15:25.310 10:04:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:25.310 10:04:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:25.310 10:04:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:25.310 10:04:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:25.310 10:04:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:25.310 10:04:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:25.310 10:04:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:25.310 10:04:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:25.310 10:04:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:25.310 10:04:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:25.310 10:04:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:15:25.310 10:04:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:15:25.310 10:04:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:25.310 10:04:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:25.310 10:04:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:25.310 10:04:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:25.310 10:04:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:25.310 10:04:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:25.310 10:04:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:25.310 10:04:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:25.310 10:04:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:25.310 10:04:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:25.310 10:04:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:25.310 10:04:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:15:25.310 10:04:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:25.310 10:04:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@47 -- # : 0 00:15:25.311 10:04:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:25.311 10:04:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:25.311 10:04:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:25.311 10:04:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:25.311 10:04:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:15:25.311 10:04:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:25.311 10:04:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:25.311 10:04:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:25.311 10:04:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:25.311 10:04:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:25.311 10:04:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:15:25.311 10:04:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:15:25.311 10:04:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:15:25.311 10:04:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=421110 00:15:25.311 10:04:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:15:25.311 10:04:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 421110' 00:15:25.311 Process pid: 421110 00:15:25.311 10:04:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:25.311 10:04:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 421110 00:15:25.311 10:04:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@831 -- # '[' -z 421110 ']' 00:15:25.311 10:04:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:25.311 10:04:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:25.311 10:04:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:25.311 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:25.311 10:04:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:25.311 10:04:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:25.311 [2024-07-25 10:04:10.341302] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
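Note: once this nvmf_tgt instance is up, the compliance target is assembled through the handful of RPCs traced below. Condensed, and with rpc_cmd (the autotest_common.sh wrapper that effectively drives scripts/rpc.py) written out as direct calls, the sequence is:

    scripts/rpc.py nvmf_create_transport -t VFIOUSER
    mkdir -p /var/run/vfio-user
    scripts/rpc.py bdev_malloc_create 64 512 -b malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0

Most passing cases in the CUnit run that follows then show the same sandwich: an "enabling controller" notice, one or more deliberately invalid commands rejected with *ERROR* lines, then "disabling controller" and passed.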
00:15:25.311 [2024-07-25 10:04:10.341491] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:25.311 EAL: No free 2048 kB hugepages reported on node 1 00:15:25.311 [2024-07-25 10:04:10.444951] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:25.570 [2024-07-25 10:04:10.570779] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:25.570 [2024-07-25 10:04:10.570838] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:25.570 [2024-07-25 10:04:10.570855] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:25.570 [2024-07-25 10:04:10.570868] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:25.570 [2024-07-25 10:04:10.570880] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:25.570 [2024-07-25 10:04:10.571200] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:25.570 [2024-07-25 10:04:10.571253] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:25.570 [2024-07-25 10:04:10.571257] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:25.570 10:04:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:25.570 10:04:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@864 -- # return 0 00:15:25.570 10:04:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:15:26.942 10:04:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:15:26.942 10:04:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:15:26.942 10:04:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:15:26.942 10:04:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:26.942 10:04:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:26.942 10:04:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:26.942 10:04:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:15:26.942 10:04:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:15:26.942 10:04:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:26.942 10:04:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:26.942 malloc0 00:15:26.942 10:04:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:26.942 10:04:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 
32 00:15:26.942 10:04:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:26.942 10:04:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:26.942 10:04:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:26.942 10:04:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:15:26.942 10:04:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:26.942 10:04:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:26.942 10:04:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:26.942 10:04:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:15:26.942 10:04:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:26.942 10:04:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:26.942 10:04:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:26.942 10:04:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:15:26.942 EAL: No free 2048 kB hugepages reported on node 1 00:15:26.942 00:15:26.942 00:15:26.942 CUnit - A unit testing framework for C - Version 2.1-3 00:15:26.942 http://cunit.sourceforge.net/ 00:15:26.942 00:15:26.942 00:15:26.942 Suite: nvme_compliance 00:15:26.942 Test: admin_identify_ctrlr_verify_dptr ...[2024-07-25 10:04:11.944978] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:26.942 [2024-07-25 10:04:11.946403] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:15:26.942 [2024-07-25 10:04:11.946450] vfio_user.c:5514:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:15:26.943 [2024-07-25 10:04:11.946464] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:15:26.943 [2024-07-25 10:04:11.951008] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:26.943 passed 00:15:26.943 Test: admin_identify_ctrlr_verify_fused ...[2024-07-25 10:04:12.035613] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:26.943 [2024-07-25 10:04:12.038634] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:26.943 passed 00:15:27.200 Test: admin_identify_ns ...[2024-07-25 10:04:12.125458] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:27.200 [2024-07-25 10:04:12.187444] ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:15:27.200 [2024-07-25 10:04:12.195445] ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:15:27.200 [2024-07-25 
10:04:12.216568] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:27.200 passed 00:15:27.200 Test: admin_get_features_mandatory_features ...[2024-07-25 10:04:12.301217] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:27.200 [2024-07-25 10:04:12.304241] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:27.200 passed 00:15:27.458 Test: admin_get_features_optional_features ...[2024-07-25 10:04:12.385796] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:27.458 [2024-07-25 10:04:12.389822] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:27.458 passed 00:15:27.458 Test: admin_set_features_number_of_queues ...[2024-07-25 10:04:12.474066] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:27.458 [2024-07-25 10:04:12.578528] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:27.458 passed 00:15:27.715 Test: admin_get_log_page_mandatory_logs ...[2024-07-25 10:04:12.662205] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:27.715 [2024-07-25 10:04:12.665229] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:27.715 passed 00:15:27.715 Test: admin_get_log_page_with_lpo ...[2024-07-25 10:04:12.745442] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:27.715 [2024-07-25 10:04:12.814448] ctrlr.c:2688:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:15:27.715 [2024-07-25 10:04:12.827527] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:27.715 passed 00:15:27.973 Test: fabric_property_get ...[2024-07-25 10:04:12.911092] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:27.973 [2024-07-25 10:04:12.912366] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:15:27.973 [2024-07-25 10:04:12.916130] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:27.973 passed 00:15:27.973 Test: admin_delete_io_sq_use_admin_qid ...[2024-07-25 10:04:12.999723] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:27.973 [2024-07-25 10:04:13.001033] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:15:27.973 [2024-07-25 10:04:13.002755] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:27.973 passed 00:15:27.973 Test: admin_delete_io_sq_delete_sq_twice ...[2024-07-25 10:04:13.087991] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:28.231 [2024-07-25 10:04:13.172441] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:15:28.231 [2024-07-25 10:04:13.188453] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:15:28.231 [2024-07-25 10:04:13.193529] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:28.231 passed 00:15:28.231 Test: admin_delete_io_cq_use_admin_qid ...[2024-07-25 10:04:13.276145] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:28.231 [2024-07-25 10:04:13.277488] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 
00:15:28.231 [2024-07-25 10:04:13.279164] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:28.231 passed 00:15:28.231 Test: admin_delete_io_cq_delete_cq_first ...[2024-07-25 10:04:13.360385] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:28.488 [2024-07-25 10:04:13.437436] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:15:28.488 [2024-07-25 10:04:13.461451] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:15:28.488 [2024-07-25 10:04:13.466555] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:28.488 passed 00:15:28.488 Test: admin_create_io_cq_verify_iv_pc ...[2024-07-25 10:04:13.551174] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:28.488 [2024-07-25 10:04:13.552471] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:15:28.488 [2024-07-25 10:04:13.552511] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:15:28.488 [2024-07-25 10:04:13.554197] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:28.488 passed 00:15:28.488 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-07-25 10:04:13.633526] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:28.745 [2024-07-25 10:04:13.727442] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:15:28.745 [2024-07-25 10:04:13.735450] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:15:28.745 [2024-07-25 10:04:13.743435] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:15:28.745 [2024-07-25 10:04:13.751451] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:15:28.745 [2024-07-25 10:04:13.780554] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:28.745 passed 00:15:28.745 Test: admin_create_io_sq_verify_pc ...[2024-07-25 10:04:13.864154] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:28.745 [2024-07-25 10:04:13.880454] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:15:28.745 [2024-07-25 10:04:13.898509] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:29.003 passed 00:15:29.003 Test: admin_create_io_qp_max_qps ...[2024-07-25 10:04:13.984074] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:29.935 [2024-07-25 10:04:15.094447] nvme_ctrlr.c:5469:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:15:30.500 [2024-07-25 10:04:15.467113] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:30.500 passed 00:15:30.500 Test: admin_create_io_sq_shared_cq ...[2024-07-25 10:04:15.548472] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:30.758 [2024-07-25 10:04:15.682442] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:15:30.758 [2024-07-25 10:04:15.719538] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:30.758 passed 00:15:30.758 00:15:30.758 Run Summary: Type Total Ran Passed Failed Inactive 00:15:30.758 
suites 1 1 n/a 0 0 00:15:30.758 tests 18 18 18 0 0 00:15:30.758 asserts 360 360 360 0 n/a 00:15:30.758 00:15:30.758 Elapsed time = 1.564 seconds 00:15:30.758 10:04:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 421110 00:15:30.758 10:04:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@950 -- # '[' -z 421110 ']' 00:15:30.758 10:04:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # kill -0 421110 00:15:30.758 10:04:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@955 -- # uname 00:15:30.758 10:04:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:15:30.758 10:04:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 421110 00:15:30.758 10:04:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:15:30.758 10:04:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:15:30.758 10:04:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@968 -- # echo 'killing process with pid 421110' 00:15:30.758 killing process with pid 421110 00:15:30.758 10:04:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@969 -- # kill 421110 00:15:30.758 10:04:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@974 -- # wait 421110 00:15:31.016 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:15:31.016 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:15:31.016 00:15:31.016 real 0m5.929s 00:15:31.016 user 0m16.480s 00:15:31.016 sys 0m0.666s 00:15:31.017 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1126 -- # xtrace_disable 00:15:31.017 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:31.017 ************************************ 00:15:31.017 END TEST nvmf_vfio_user_nvme_compliance 00:15:31.017 ************************************ 00:15:31.017 10:04:16 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:15:31.017 10:04:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:15:31.017 10:04:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:15:31.017 10:04:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:31.017 ************************************ 00:15:31.017 START TEST nvmf_vfio_user_fuzz 00:15:31.017 ************************************ 00:15:31.017 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:15:31.277 * Looking for test storage... 
00:15:31.277 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:31.277 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:31.277 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:15:31.277 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:31.277 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:31.277 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:31.277 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:31.277 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:31.277 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:31.277 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:31.277 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:31.277 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:31.277 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:31.277 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:15:31.277 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:15:31.277 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:31.277 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:31.277 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:31.277 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:31.277 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:31.277 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:31.277 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:31.277 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:31.278 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:31.278 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:31.278 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:31.278 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:15:31.278 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:31.278 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@47 -- # : 0 00:15:31.278 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:31.278 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:31.278 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:31.278 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:31.278 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:31.278 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 
-- # '[' -n '' ']' 00:15:31.278 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:31.278 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:31.278 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:15:31.278 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:15:31.278 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:15:31.278 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:15:31.278 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:15:31.278 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:15:31.278 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:15:31.278 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=421835 00:15:31.278 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:15:31.278 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 421835' 00:15:31.278 Process pid: 421835 00:15:31.278 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:31.278 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 421835 00:15:31.278 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@831 -- # '[' -z 421835 ']' 00:15:31.278 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:31.278 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@836 -- # local max_retries=100 00:15:31.278 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:31.278 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
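Note: waitforlisten, invoked just above, is the common.sh helper that blocks until the freshly started nvmf_tgt answers on /var/tmp/spdk.sock. A minimal sketch of the idea (an approximation, not the helper's actual code; it assumes scripts/rpc.py and the default socket path):

    # poll the RPC socket until the target responds to a trivial RPC
    while ! scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.1
    done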
00:15:31.278 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@840 -- # xtrace_disable 00:15:31.278 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:31.597 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:15:31.597 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@864 -- # return 0 00:15:31.597 10:04:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:15:32.530 10:04:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:15:32.530 10:04:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:32.530 10:04:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:32.530 10:04:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:32.530 10:04:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:15:32.530 10:04:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:15:32.530 10:04:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:32.530 10:04:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:32.530 malloc0 00:15:32.530 10:04:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:32.530 10:04:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:15:32.530 10:04:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:32.530 10:04:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:32.530 10:04:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:32.530 10:04:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:15:32.530 10:04:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:32.530 10:04:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:32.530 10:04:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:32.530 10:04:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:15:32.530 10:04:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:32.530 10:04:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:32.530 10:04:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:32.530 10:04:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 
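[annotation] The subsystem plumbing traced above reduces to five RPCs. A sketch issuing them directly with scripts/rpc.py rather than the rpc_cmd wrapper; the bdev size, block size, NQN, and socket directory are the values from the trace:

RPC=./scripts/rpc.py
"$RPC" nvmf_create_transport -t VFIOUSER
mkdir -p /var/run/vfio-user
"$RPC" bdev_malloc_create 64 512 -b malloc0          # 64 MiB bdev, 512 B blocks
"$RPC" nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk
"$RPC" nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
"$RPC" nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0

The resulting 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' trid string is what the nvme_fuzz invocation in the next trace lines consumes.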
00:15:32.531 10:04:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:16:04.595 Fuzzing completed. Shutting down the fuzz application 00:16:04.595 00:16:04.595 Dumping successful admin opcodes: 00:16:04.595 8, 9, 10, 24, 00:16:04.595 Dumping successful io opcodes: 00:16:04.595 0, 00:16:04.595 NS: 0x200003a1ef00 I/O qp, Total commands completed: 401340, total successful commands: 1585, random_seed: 4172265792 00:16:04.595 NS: 0x200003a1ef00 admin qp, Total commands completed: 57759, total successful commands: 462, random_seed: 701774848 00:16:04.595 10:04:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:16:04.595 10:04:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:04.595 10:04:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:04.595 10:04:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.595 10:04:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 421835 00:16:04.595 10:04:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@950 -- # '[' -z 421835 ']' 00:16:04.595 10:04:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # kill -0 421835 00:16:04.595 10:04:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@955 -- # uname 00:16:04.595 10:04:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:04.595 10:04:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 421835 00:16:04.595 10:04:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:04.595 10:04:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:04.595 10:04:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@968 -- # echo 'killing process with pid 421835' 00:16:04.595 killing process with pid 421835 00:16:04.595 10:04:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@969 -- # kill 421835 00:16:04.595 10:04:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@974 -- # wait 421835 00:16:04.595 10:04:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:16:04.595 10:04:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:16:04.595 00:16:04.595 real 0m33.412s 00:16:04.595 user 0m32.358s 00:16:04.595 sys 0m21.450s 00:16:04.595 10:04:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1126 -- # xtrace_disable 00:16:04.595 10:04:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:04.595 ************************************ 
00:16:04.595 END TEST nvmf_vfio_user_fuzz 00:16:04.595 ************************************ 00:16:04.595 10:04:49 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:16:04.595 10:04:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:16:04.595 10:04:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:16:04.595 10:04:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:04.595 ************************************ 00:16:04.595 START TEST nvmf_auth_target 00:16:04.595 ************************************ 00:16:04.595 10:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:16:04.595 * Looking for test storage... 00:16:04.595 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:04.596 10:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:04.596 10:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:16:04.596 10:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:04.596 10:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:04.596 10:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:04.596 10:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:04.596 10:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:04.596 10:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:04.596 10:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:04.596 10:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:04.596 10:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:04.596 10:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:04.596 10:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:04.596 10:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:16:04.596 10:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:04.596 10:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:04.596 10:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:04.596 10:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:04.596 10:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:04.596 10:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:04.596 10:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:04.596 10:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:04.596 10:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:04.596 10:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:04.596 10:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:04.596 10:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:16:04.596 10:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:04.596 10:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:16:04.596 10:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:04.596 10:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:04.596 10:04:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:04.596 10:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:04.596 10:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:04.596 10:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:04.596 10:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:04.596 10:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:04.596 10:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:16:04.596 10:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:16:04.596 10:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:16:04.596 10:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:04.596 10:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:16:04.596 10:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:16:04.596 10:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:16:04.596 10:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@59 -- # nvmftestinit 00:16:04.596 10:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:04.596 10:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:04.596 10:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:04.596 10:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:04.596 10:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:04.596 10:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:04.596 10:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:04.596 10:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:04.596 10:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:04.596 10:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:04.596 10:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@285 -- # xtrace_disable 00:16:04.596 10:04:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:07.128 10:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:07.128 10:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # pci_devs=() 00:16:07.128 10:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:07.128 10:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@292 -- # 
pci_net_devs=() 00:16:07.128 10:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:07.128 10:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:07.128 10:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:07.128 10:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@295 -- # net_devs=() 00:16:07.128 10:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:07.128 10:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@296 -- # e810=() 00:16:07.128 10:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@296 -- # local -ga e810 00:16:07.128 10:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # x722=() 00:16:07.128 10:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # local -ga x722 00:16:07.128 10:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # mlx=() 00:16:07.128 10:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # local -ga mlx 00:16:07.129 10:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:07.129 10:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:07.129 10:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:07.129 10:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:07.129 10:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:07.129 10:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:07.129 10:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:07.129 10:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:07.129 10:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:07.129 10:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:07.129 10:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:07.129 10:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:07.129 10:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:07.129 10:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:07.129 10:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:07.129 10:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:07.129 10:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:07.129 10:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:07.129 10:04:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:16:07.129 Found 0000:84:00.0 (0x8086 - 0x159b) 00:16:07.129 10:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:07.129 10:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:07.129 10:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:07.129 10:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:07.129 10:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:07.129 10:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:07.129 10:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:16:07.129 Found 0000:84:00.1 (0x8086 - 0x159b) 00:16:07.129 10:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:07.129 10:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:07.129 10:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:07.129 10:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:07.129 10:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:07.129 10:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:07.129 10:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:07.129 10:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:07.129 10:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:07.129 10:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:07.129 10:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:07.129 10:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:07.129 10:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:07.129 10:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:07.129 10:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:07.129 10:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:16:07.129 Found net devices under 0000:84:00.0: cvl_0_0 00:16:07.129 10:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:07.129 10:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:07.129 10:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:07.129 10:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:07.129 10:04:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:07.129 10:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:07.129 10:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:07.129 10:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:07.129 10:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:16:07.129 Found net devices under 0000:84:00.1: cvl_0_1 00:16:07.129 10:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:07.129 10:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:07.129 10:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # is_hw=yes 00:16:07.129 10:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:07.129 10:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:07.129 10:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:07.129 10:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:07.129 10:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:07.129 10:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:07.129 10:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:07.129 10:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:07.129 10:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:07.129 10:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:07.129 10:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:07.129 10:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:07.129 10:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:07.129 10:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:07.129 10:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:07.129 10:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:07.129 10:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:07.129 10:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:07.130 10:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:07.130 10:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:07.130 10:04:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:07.130 10:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:07.130 10:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:07.130 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:07.130 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.121 ms 00:16:07.130 00:16:07.130 --- 10.0.0.2 ping statistics --- 00:16:07.130 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:07.130 rtt min/avg/max/mdev = 0.121/0.121/0.121/0.000 ms 00:16:07.130 10:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:07.130 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:07.130 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.162 ms 00:16:07.130 00:16:07.130 --- 10.0.0.1 ping statistics --- 00:16:07.130 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:07.130 rtt min/avg/max/mdev = 0.162/0.162/0.162/0.000 ms 00:16:07.130 10:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:07.130 10:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # return 0 00:16:07.130 10:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:07.130 10:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:07.130 10:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:07.130 10:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:07.130 10:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:07.130 10:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:07.130 10:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:07.388 10:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # nvmfappstart -L nvmf_auth 00:16:07.388 10:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:07.388 10:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:07.388 10:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:07.388 10:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:16:07.388 10:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=427306 00:16:07.388 10:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 427306 00:16:07.388 10:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 427306 ']' 00:16:07.388 10:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:07.388 10:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:07.388 10:04:52 
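[annotation] The nvmf_tcp_init steps traced above are plain iproute2/iptables commands: the target-side port (cvl_0_0) moves into its own network namespace and the two ends get 10.0.0.2/10.0.0.1, so the TCP tests cross a real NIC pair. A sketch using the interface names and addresses from the trace:

TGT_NS=cvl_0_0_ns_spdk
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add "$TGT_NS"
ip link set cvl_0_0 netns "$TGT_NS"                    # target side lives in the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator address
ip netns exec "$TGT_NS" ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
ip link set cvl_0_1 up
ip netns exec "$TGT_NS" ip link set cvl_0_0 up
ip netns exec "$TGT_NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # open the NVMe/TCP port
ping -c 1 10.0.0.2                                     # initiator -> target
ip netns exec "$TGT_NS" ping -c 1 10.0.0.1             # target -> initiator

The nvmf target is then launched under "ip netns exec $TGT_NS", which is why NVMF_APP is prefixed with NVMF_TARGET_NS_CMD in the trace.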
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:07.388 10:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:07.388 10:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:07.646 10:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:07.646 10:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:16:07.646 10:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:07.646 10:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:07.646 10:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:07.646 10:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:07.646 10:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@62 -- # hostpid=427432 00:16:07.646 10:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:16:07.646 10:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@64 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:16:07.646 10:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key null 48 00:16:07.646 10:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:16:07.647 10:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:07.647 10:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:16:07.647 10:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:16:07.647 10:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:16:07.647 10:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:07.647 10:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=2a7144fc2bfd5c6e803d015cdc3f4e5de571c1d280364c50 00:16:07.647 10:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:16:07.647 10:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.psD 00:16:07.647 10:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 2a7144fc2bfd5c6e803d015cdc3f4e5de571c1d280364c50 0 00:16:07.647 10:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 2a7144fc2bfd5c6e803d015cdc3f4e5de571c1d280364c50 0 00:16:07.647 10:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:16:07.647 10:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:07.647 10:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=2a7144fc2bfd5c6e803d015cdc3f4e5de571c1d280364c50 00:16:07.647 10:04:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:16:07.647 10:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:16:07.647 10:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.psD 00:16:07.647 10:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.psD 00:16:07.647 10:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # keys[0]=/tmp/spdk.key-null.psD 00:16:07.647 10:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:16:07.647 10:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:16:07.647 10:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:07.647 10:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:16:07.647 10:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:16:07.647 10:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:16:07.647 10:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:16:07.647 10:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=b48a2bf4654b84e678b3807e38fdae93a8a4f21317bfde7aa8f2abbf3c315ec3 00:16:07.647 10:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:16:07.647 10:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.kKa 00:16:07.647 10:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key b48a2bf4654b84e678b3807e38fdae93a8a4f21317bfde7aa8f2abbf3c315ec3 3 00:16:07.647 10:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 b48a2bf4654b84e678b3807e38fdae93a8a4f21317bfde7aa8f2abbf3c315ec3 3 00:16:07.647 10:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:16:07.647 10:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:07.647 10:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=b48a2bf4654b84e678b3807e38fdae93a8a4f21317bfde7aa8f2abbf3c315ec3 00:16:07.647 10:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:16:07.647 10:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:16:07.905 10:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.kKa 00:16:07.905 10:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.kKa 00:16:07.905 10:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # ckeys[0]=/tmp/spdk.key-sha512.kKa 00:16:07.905 10:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha256 32 00:16:07.905 10:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:16:07.905 10:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:07.905 10:04:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:16:07.905 10:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:16:07.905 10:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:16:07.905 10:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:16:07.905 10:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=395589de15e8017e20aed78a8274d473 00:16:07.905 10:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:16:07.905 10:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.lwz 00:16:07.905 10:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 395589de15e8017e20aed78a8274d473 1 00:16:07.905 10:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 395589de15e8017e20aed78a8274d473 1 00:16:07.905 10:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:16:07.905 10:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:07.905 10:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=395589de15e8017e20aed78a8274d473 00:16:07.905 10:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:16:07.905 10:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:16:07.905 10:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.lwz 00:16:07.905 10:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.lwz 00:16:07.905 10:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # keys[1]=/tmp/spdk.key-sha256.lwz 00:16:07.905 10:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha384 48 00:16:07.906 10:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:16:07.906 10:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:07.906 10:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:16:07.906 10:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:16:07.906 10:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:16:07.906 10:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:07.906 10:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=5c53c49d3e413c5fe8aed40f4a1e421a2c3573f6f8f57b6e 00:16:07.906 10:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:16:07.906 10:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.T25 00:16:07.906 10:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 5c53c49d3e413c5fe8aed40f4a1e421a2c3573f6f8f57b6e 2 00:16:07.906 10:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 
5c53c49d3e413c5fe8aed40f4a1e421a2c3573f6f8f57b6e 2 00:16:07.906 10:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:16:07.906 10:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:07.906 10:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=5c53c49d3e413c5fe8aed40f4a1e421a2c3573f6f8f57b6e 00:16:07.906 10:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:16:07.906 10:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:16:07.906 10:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.T25 00:16:07.906 10:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.T25 00:16:07.906 10:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckeys[1]=/tmp/spdk.key-sha384.T25 00:16:07.906 10:04:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha384 48 00:16:07.906 10:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:16:07.906 10:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:07.906 10:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:16:07.906 10:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:16:07.906 10:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:16:07.906 10:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:07.906 10:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=56eec9fa818ceb3b5110def4daa8b604eeea7c5739cef7f2 00:16:07.906 10:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:16:07.906 10:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.wKS 00:16:07.906 10:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 56eec9fa818ceb3b5110def4daa8b604eeea7c5739cef7f2 2 00:16:07.906 10:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 56eec9fa818ceb3b5110def4daa8b604eeea7c5739cef7f2 2 00:16:07.906 10:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:16:07.906 10:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:07.906 10:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=56eec9fa818ceb3b5110def4daa8b604eeea7c5739cef7f2 00:16:07.906 10:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:16:07.906 10:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:16:08.165 10:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.wKS 00:16:08.165 10:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.wKS 00:16:08.165 10:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # keys[2]=/tmp/spdk.key-sha384.wKS 00:16:08.165 10:04:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha256 32 00:16:08.165 10:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:16:08.165 10:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:08.165 10:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:16:08.165 10:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:16:08.165 10:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:16:08.165 10:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:16:08.165 10:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=cdcc810fa8aa7939bad772d097088511 00:16:08.165 10:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:16:08.165 10:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.hCx 00:16:08.165 10:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key cdcc810fa8aa7939bad772d097088511 1 00:16:08.165 10:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 cdcc810fa8aa7939bad772d097088511 1 00:16:08.165 10:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:16:08.165 10:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:08.165 10:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=cdcc810fa8aa7939bad772d097088511 00:16:08.165 10:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:16:08.165 10:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:16:08.165 10:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.hCx 00:16:08.165 10:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.hCx 00:16:08.165 10:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # ckeys[2]=/tmp/spdk.key-sha256.hCx 00:16:08.165 10:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # gen_dhchap_key sha512 64 00:16:08.165 10:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:16:08.165 10:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:08.165 10:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:16:08.165 10:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:16:08.165 10:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:16:08.165 10:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:16:08.165 10:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=887c9a47a6a61a1e00672fc6e2f2203adc4066a52988c18508c1d8dbb00111a9 00:16:08.165 10:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:16:08.165 
10:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.vMI 00:16:08.165 10:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 887c9a47a6a61a1e00672fc6e2f2203adc4066a52988c18508c1d8dbb00111a9 3 00:16:08.165 10:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 887c9a47a6a61a1e00672fc6e2f2203adc4066a52988c18508c1d8dbb00111a9 3 00:16:08.165 10:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:16:08.165 10:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:08.165 10:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=887c9a47a6a61a1e00672fc6e2f2203adc4066a52988c18508c1d8dbb00111a9 00:16:08.165 10:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:16:08.165 10:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:16:08.165 10:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.vMI 00:16:08.165 10:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.vMI 00:16:08.165 10:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # keys[3]=/tmp/spdk.key-sha512.vMI 00:16:08.165 10:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # ckeys[3]= 00:16:08.165 10:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@72 -- # waitforlisten 427306 00:16:08.165 10:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 427306 ']' 00:16:08.165 10:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:08.165 10:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:08.165 10:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:08.165 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:08.165 10:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:08.165 10:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:08.729 10:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:08.729 10:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:16:08.729 10:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # waitforlisten 427432 /var/tmp/host.sock 00:16:08.729 10:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 427432 ']' 00:16:08.729 10:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/host.sock 00:16:08.729 10:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:08.729 10:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 
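[annotation] Each gen_dhchap_key call traced above reads random hex from /dev/urandom via xxd, wraps it into a DHHC-1 secret with an inline python step, and writes it 0600 to a tmp file. A standalone sketch of that flow; the digest ids (0 null, 1 sha256, 2 sha384, 3 sha512) match the trace, while the framing inside the python step (ASCII hex secret plus little-endian CRC32, base64-wrapped) is my reading of it and should be treated as an assumption, not the canonical helper:

digest=0 len=48                                   # null digest, 48-char key, as traced
key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)    # len ASCII hex characters, one line
file=$(mktemp -t spdk.key-null.XXX)
python3 - "$digest" "$key" > "$file" <<'EOF'
import base64, sys, zlib
digest, key = int(sys.argv[1]), sys.argv[2].encode()
crc = zlib.crc32(key).to_bytes(4, byteorder="little")          # assumed framing
print("DHHC-1:{:02x}:{}:".format(digest, base64.b64encode(key + crc).decode()), end="")
EOF
chmod 0600 "$file"                                # secrets must not be world-readable
echo "$file"                                      # e.g. /tmp/spdk.key-null.psD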
00:16:08.729 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:16:08.729 10:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:08.729 10:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:08.987 10:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:08.987 10:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:16:08.987 10:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 00:16:08.987 10:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.987 10:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:08.987 10:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.987 10:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:16:08.987 10:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.psD 00:16:08.987 10:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.988 10:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:08.988 10:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.988 10:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.psD 00:16:08.988 10:04:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.psD 00:16:09.554 10:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha512.kKa ]] 00:16:09.554 10:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.kKa 00:16:09.554 10:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.554 10:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:09.554 10:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.554 10:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.kKa 00:16:09.554 10:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.kKa 00:16:09.812 10:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:16:09.812 10:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.lwz 00:16:09.812 10:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.812 10:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:09.812 10:04:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.812 10:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.lwz 00:16:09.812 10:04:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.lwz 00:16:10.377 10:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha384.T25 ]] 00:16:10.377 10:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.T25 00:16:10.377 10:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.377 10:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:10.377 10:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.377 10:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.T25 00:16:10.377 10:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.T25 00:16:10.635 10:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:16:10.635 10:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.wKS 00:16:10.635 10:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.635 10:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:10.635 10:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.635 10:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.wKS 00:16:10.635 10:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.wKS 00:16:10.893 10:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha256.hCx ]] 00:16:10.893 10:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.hCx 00:16:10.893 10:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.893 10:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:10.893 10:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.893 10:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.hCx 00:16:10.893 10:04:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.hCx 00:16:11.150 10:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@81 -- # for i in "${!keys[@]}" 00:16:11.151 10:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.vMI 00:16:11.151 10:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:11.151 10:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:11.151 10:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:11.151 10:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.vMI 00:16:11.151 10:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.vMI 00:16:11.716 10:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n '' ]] 00:16:11.716 10:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:16:11.716 10:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:11.716 10:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:11.716 10:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:11.716 10:04:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:11.974 10:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 0 00:16:11.974 10:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:11.974 10:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:11.974 10:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:11.974 10:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:11.974 10:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:11.974 10:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:11.974 10:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:11.974 10:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:11.974 10:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:11.974 10:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:11.974 10:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:12.908 00:16:12.908 10:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:12.908 10:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:12.908 10:04:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:13.203 10:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:13.203 10:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:13.203 10:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:13.203 10:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:13.203 10:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:13.203 10:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:13.203 { 00:16:13.203 "cntlid": 1, 00:16:13.203 "qid": 0, 00:16:13.203 "state": "enabled", 00:16:13.203 "thread": "nvmf_tgt_poll_group_000", 00:16:13.203 "listen_address": { 00:16:13.203 "trtype": "TCP", 00:16:13.203 "adrfam": "IPv4", 00:16:13.203 "traddr": "10.0.0.2", 00:16:13.203 "trsvcid": "4420" 00:16:13.203 }, 00:16:13.203 "peer_address": { 00:16:13.203 "trtype": "TCP", 00:16:13.203 "adrfam": "IPv4", 00:16:13.203 "traddr": "10.0.0.1", 00:16:13.203 "trsvcid": "45796" 00:16:13.203 }, 00:16:13.203 "auth": { 00:16:13.203 "state": "completed", 00:16:13.203 "digest": "sha256", 00:16:13.203 "dhgroup": "null" 00:16:13.203 } 00:16:13.203 } 00:16:13.203 ]' 00:16:13.203 10:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:13.203 10:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:13.203 10:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:13.203 10:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:13.203 10:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:13.461 10:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:13.461 10:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:13.461 10:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:13.718 10:04:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret 
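Every round is verified the same way: bdev_nvme_get_controllers must report the expected controller name, and nvmf_subsystem_get_qpairs must show an admin qpair whose auth block matches the digest and dhgroup just configured. The jq probes from target/auth.sh@44 through @48, condensed into a sketch:

    # Condensed verification step: assert the negotiated auth parameters.
    qpairs=$("$rpc_py" nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha256 ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == null ]]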
DHHC-1:00:MmE3MTQ0ZmMyYmZkNWM2ZTgwM2QwMTVjZGMzZjRlNWRlNTcxYzFkMjgwMzY0YzUw+y0etQ==: --dhchap-ctrl-secret DHHC-1:03:YjQ4YTJiZjQ2NTRiODRlNjc4YjM4MDdlMzhmZGFlOTNhOGE0ZjIxMzE3YmZkZTdhYThmMmFiYmYzYzMxNWVjMysm5Vk=: 00:16:14.651 10:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:14.651 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:14.651 10:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:14.651 10:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.651 10:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:14.651 10:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.651 10:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:14.651 10:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:14.651 10:04:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:15.217 10:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 1 00:16:15.217 10:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:15.217 10:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:15.217 10:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:15.217 10:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:15.217 10:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:15.217 10:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:15.217 10:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.217 10:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:15.217 10:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.217 10:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:15.217 10:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key 
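Alongside the userspace attach, each round also exercises the kernel initiator: nvme connect passes the same secrets inline in the DHHC-1:<t>:<base64>: format, where <t> records the HMAC the key was generated for (00 = none, 01 = SHA-256, 02 = SHA-384, 03 = SHA-512). A sketch of that half of a round, with the secret values elided and the addressing taken from the trace:

    # Kernel-initiator half of a round (secrets elided).
    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q "nqn.2014-08.org.nvmexpress:uuid:$host_uuid" --hostid "$host_uuid" \
        --dhchap-secret "DHHC-1:00:..." --dhchap-ctrl-secret "DHHC-1:03:..."
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0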
ckey1 00:16:15.474 00:16:15.474 10:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:15.474 10:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:15.474 10:05:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:16.406 10:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:16.406 10:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:16.406 10:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.406 10:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:16.406 10:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.406 10:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:16.406 { 00:16:16.406 "cntlid": 3, 00:16:16.406 "qid": 0, 00:16:16.406 "state": "enabled", 00:16:16.406 "thread": "nvmf_tgt_poll_group_000", 00:16:16.406 "listen_address": { 00:16:16.406 "trtype": "TCP", 00:16:16.406 "adrfam": "IPv4", 00:16:16.406 "traddr": "10.0.0.2", 00:16:16.406 "trsvcid": "4420" 00:16:16.406 }, 00:16:16.406 "peer_address": { 00:16:16.406 "trtype": "TCP", 00:16:16.406 "adrfam": "IPv4", 00:16:16.406 "traddr": "10.0.0.1", 00:16:16.406 "trsvcid": "60090" 00:16:16.406 }, 00:16:16.406 "auth": { 00:16:16.406 "state": "completed", 00:16:16.406 "digest": "sha256", 00:16:16.406 "dhgroup": "null" 00:16:16.406 } 00:16:16.406 } 00:16:16.406 ]' 00:16:16.406 10:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:16.406 10:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:16.406 10:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:16.406 10:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:16.406 10:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:16.406 10:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:16.406 10:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:16.406 10:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:16.663 10:05:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:01:Mzk1NTg5ZGUxNWU4MDE3ZTIwYWVkNzhhODI3NGQ0NzMtq9te: --dhchap-ctrl-secret DHHC-1:02:NWM1M2M0OWQzZTQxM2M1ZmU4YWVkNDBmNGExZTQyMWEyYzM1NzNmNmY4ZjU3YjZlY8rP+g==: 00:16:18.035 10:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:18.035 NQN:nqn.2024-03.io.spdk:cnode0 
disconnected 1 controller(s) 00:16:18.035 10:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:18.035 10:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.035 10:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.035 10:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.035 10:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:18.035 10:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:18.035 10:05:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:18.295 10:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 2 00:16:18.295 10:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:18.295 10:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:18.295 10:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:18.295 10:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:18.295 10:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:18.295 10:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:18.295 10:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.295 10:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.295 10:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.295 10:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:18.295 10:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:18.554 00:16:18.554 10:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:18.554 10:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:18.554 10:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:18.811 10:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:18.811 10:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:18.811 10:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.811 10:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.811 10:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.811 10:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:18.811 { 00:16:18.811 "cntlid": 5, 00:16:18.811 "qid": 0, 00:16:18.811 "state": "enabled", 00:16:18.811 "thread": "nvmf_tgt_poll_group_000", 00:16:18.811 "listen_address": { 00:16:18.811 "trtype": "TCP", 00:16:18.811 "adrfam": "IPv4", 00:16:18.811 "traddr": "10.0.0.2", 00:16:18.811 "trsvcid": "4420" 00:16:18.811 }, 00:16:18.811 "peer_address": { 00:16:18.811 "trtype": "TCP", 00:16:18.811 "adrfam": "IPv4", 00:16:18.811 "traddr": "10.0.0.1", 00:16:18.811 "trsvcid": "60126" 00:16:18.811 }, 00:16:18.811 "auth": { 00:16:18.811 "state": "completed", 00:16:18.811 "digest": "sha256", 00:16:18.811 "dhgroup": "null" 00:16:18.811 } 00:16:18.811 } 00:16:18.811 ]' 00:16:18.811 10:05:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:19.068 10:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:19.068 10:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:19.068 10:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:19.068 10:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:19.068 10:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:19.068 10:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:19.068 10:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:19.325 10:05:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:02:NTZlZWM5ZmE4MThjZWIzYjUxMTBkZWY0ZGFhOGI2MDRlZWVhN2M1NzM5Y2VmN2Yy9TJSEw==: --dhchap-ctrl-secret DHHC-1:01:Y2RjYzgxMGZhOGFhNzkzOWJhZDc3MmQwOTcwODg1MTG0PvqR: 00:16:20.696 10:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:20.696 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:20.696 10:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:20.696 10:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 
-- # xtrace_disable 00:16:20.696 10:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:20.696 10:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.696 10:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:20.696 10:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:20.696 10:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:20.696 10:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 3 00:16:20.696 10:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:20.696 10:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:20.696 10:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:20.696 10:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:20.696 10:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:20.696 10:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:16:20.696 10:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:20.696 10:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:20.696 10:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:20.696 10:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:20.696 10:05:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:21.262 00:16:21.520 10:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:21.520 10:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:21.520 10:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:21.777 10:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:21.777 10:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:21.777 10:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
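key3 is the one-sided case: its controller-key slot is empty (the [[ -n '' ]] check above), so the ${ckeys[$3]:+...} expansion at target/auth.sh@37 produces no --dhchap-ctrlr-key argument and this round runs with unidirectional authentication only. The expansion pattern in isolation:

    # ${var:+word} expands to nothing when var is unset or empty, so the
    # optional controller-key flag simply vanishes for key3 (ckeys[3] empty).
    ckeys=(/tmp/ck0 /tmp/ck1 /tmp/ck2 "")
    i=3
    ckey=(${ckeys[i]:+--dhchap-ctrlr-key "ckey$i"})
    echo nvmf_subsystem_add_host ... --dhchap-key "key$i" "${ckey[@]}"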
common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.777 10:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.777 10:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.777 10:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:21.777 { 00:16:21.777 "cntlid": 7, 00:16:21.777 "qid": 0, 00:16:21.777 "state": "enabled", 00:16:21.777 "thread": "nvmf_tgt_poll_group_000", 00:16:21.777 "listen_address": { 00:16:21.777 "trtype": "TCP", 00:16:21.777 "adrfam": "IPv4", 00:16:21.777 "traddr": "10.0.0.2", 00:16:21.777 "trsvcid": "4420" 00:16:21.777 }, 00:16:21.777 "peer_address": { 00:16:21.777 "trtype": "TCP", 00:16:21.777 "adrfam": "IPv4", 00:16:21.777 "traddr": "10.0.0.1", 00:16:21.777 "trsvcid": "60144" 00:16:21.777 }, 00:16:21.777 "auth": { 00:16:21.777 "state": "completed", 00:16:21.777 "digest": "sha256", 00:16:21.777 "dhgroup": "null" 00:16:21.777 } 00:16:21.777 } 00:16:21.777 ]' 00:16:21.778 10:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:21.778 10:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:21.778 10:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:21.778 10:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:21.778 10:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:21.778 10:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:21.778 10:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:21.778 10:05:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:22.035 10:05:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:03:ODg3YzlhNDdhNmE2MWExZTAwNjcyZmM2ZTJmMjIwM2FkYzQwNjZhNTI5ODhjMTg1MDhjMWQ4ZGJiMDAxMTFhOZgH7UI=: 00:16:23.407 10:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:23.407 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:23.407 10:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:23.407 10:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.407 10:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.407 10:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.407 10:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:23.407 10:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:23.407 10:05:08 
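At this point the trace rolls over from dhgroup null to ffdhe2048 while staying on sha256: the suite is a full sweep of the nested loops visible at target/auth.sh@91 through @93, with the host options re-armed at @94 before each connect_authenticate call. The skeleton, reconstructed from those loop headers; the exact digest and dhgroup lists are an assumption, since only sha256 with null, ffdhe2048, and ffdhe3072 appear in this excerpt:

    # Loop skeleton per target/auth.sh@91-@94; list contents assumed.
    for digest in "${digests[@]}"; do
        for dhgroup in "${dhgroups[@]}"; do
            for keyid in "${!keys[@]}"; do
                hostrpc bdev_nvme_set_options \
                    --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
                connect_authenticate "$digest" "$dhgroup" "$keyid"
            done
        done
    done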
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:23.407 10:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:23.664 10:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 0 00:16:23.664 10:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:23.664 10:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:23.664 10:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:23.664 10:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:23.664 10:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:23.664 10:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:23.664 10:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.664 10:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.664 10:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.664 10:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:23.664 10:05:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:23.922 00:16:23.922 10:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:23.922 10:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:23.922 10:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:24.487 10:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:24.487 10:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:24.487 10:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.487 10:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.487 10:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.487 10:05:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:24.487 { 00:16:24.487 "cntlid": 9, 00:16:24.487 "qid": 0, 00:16:24.487 "state": "enabled", 00:16:24.487 "thread": "nvmf_tgt_poll_group_000", 00:16:24.487 "listen_address": { 00:16:24.487 "trtype": "TCP", 00:16:24.487 "adrfam": "IPv4", 00:16:24.487 "traddr": "10.0.0.2", 00:16:24.487 "trsvcid": "4420" 00:16:24.487 }, 00:16:24.487 "peer_address": { 00:16:24.487 "trtype": "TCP", 00:16:24.487 "adrfam": "IPv4", 00:16:24.487 "traddr": "10.0.0.1", 00:16:24.487 "trsvcid": "60184" 00:16:24.487 }, 00:16:24.487 "auth": { 00:16:24.487 "state": "completed", 00:16:24.487 "digest": "sha256", 00:16:24.487 "dhgroup": "ffdhe2048" 00:16:24.487 } 00:16:24.487 } 00:16:24.487 ]' 00:16:24.487 10:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:24.487 10:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:24.487 10:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:24.487 10:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:24.487 10:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:24.744 10:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:24.744 10:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:24.744 10:05:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:25.001 10:05:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:00:MmE3MTQ0ZmMyYmZkNWM2ZTgwM2QwMTVjZGMzZjRlNWRlNTcxYzFkMjgwMzY0YzUw+y0etQ==: --dhchap-ctrl-secret DHHC-1:03:YjQ4YTJiZjQ2NTRiODRlNjc4YjM4MDdlMzhmZGFlOTNhOGE0ZjIxMzE3YmZkZTdhYThmMmFiYmYzYzMxNWVjMysm5Vk=: 00:16:26.372 10:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:26.372 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:26.373 10:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:26.373 10:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.373 10:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.373 10:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.373 10:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:26.373 10:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:26.373 10:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:26.373 10:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 1 00:16:26.373 10:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:26.373 10:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:26.373 10:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:26.373 10:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:26.373 10:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:26.373 10:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:26.373 10:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:26.373 10:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.373 10:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:26.373 10:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:26.373 10:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:26.937 00:16:26.937 10:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:26.937 10:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:26.937 10:05:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:27.194 10:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:27.194 10:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:27.194 10:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:27.194 10:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.194 10:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:27.194 10:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:27.194 { 00:16:27.194 "cntlid": 11, 00:16:27.194 "qid": 0, 00:16:27.194 "state": "enabled", 00:16:27.194 "thread": "nvmf_tgt_poll_group_000", 00:16:27.194 "listen_address": { 
00:16:27.194 "trtype": "TCP", 00:16:27.194 "adrfam": "IPv4", 00:16:27.194 "traddr": "10.0.0.2", 00:16:27.194 "trsvcid": "4420" 00:16:27.194 }, 00:16:27.194 "peer_address": { 00:16:27.194 "trtype": "TCP", 00:16:27.194 "adrfam": "IPv4", 00:16:27.194 "traddr": "10.0.0.1", 00:16:27.194 "trsvcid": "53176" 00:16:27.194 }, 00:16:27.194 "auth": { 00:16:27.194 "state": "completed", 00:16:27.194 "digest": "sha256", 00:16:27.194 "dhgroup": "ffdhe2048" 00:16:27.194 } 00:16:27.194 } 00:16:27.194 ]' 00:16:27.194 10:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:27.194 10:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:27.194 10:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:27.452 10:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:27.452 10:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:27.452 10:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:27.452 10:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:27.452 10:05:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:28.045 10:05:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:01:Mzk1NTg5ZGUxNWU4MDE3ZTIwYWVkNzhhODI3NGQ0NzMtq9te: --dhchap-ctrl-secret DHHC-1:02:NWM1M2M0OWQzZTQxM2M1ZmU4YWVkNDBmNGExZTQyMWEyYzM1NzNmNmY4ZjU3YjZlY8rP+g==: 00:16:28.976 10:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:28.976 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:28.976 10:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:28.977 10:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:28.977 10:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:28.977 10:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:28.977 10:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:28.977 10:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:28.977 10:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:29.234 10:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2 00:16:29.234 10:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
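Between rounds the teardown order is consistent: drop the kernel session, revoke the host entry from the subsystem, then re-arm the host app for the next digest/dhgroup combination. Condensed from the @55, @56, and @94 steps in the trace:

    # Per-round teardown as traced (flags shown for the ffdhe2048 rounds).
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0
    "$rpc_py" nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$host_nqn"
    hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048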
target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:29.234 10:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:29.234 10:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:29.234 10:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:29.234 10:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:29.234 10:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:29.234 10:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:29.234 10:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.234 10:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:29.234 10:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:29.234 10:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:29.800 00:16:29.800 10:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:29.800 10:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:29.800 10:05:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:30.058 10:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:30.058 10:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:30.058 10:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:30.058 10:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:30.058 10:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:30.058 10:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:30.058 { 00:16:30.058 "cntlid": 13, 00:16:30.058 "qid": 0, 00:16:30.058 "state": "enabled", 00:16:30.058 "thread": "nvmf_tgt_poll_group_000", 00:16:30.058 "listen_address": { 00:16:30.058 "trtype": "TCP", 00:16:30.058 "adrfam": "IPv4", 00:16:30.058 "traddr": "10.0.0.2", 00:16:30.058 "trsvcid": "4420" 00:16:30.058 }, 00:16:30.058 "peer_address": { 00:16:30.058 "trtype": "TCP", 00:16:30.058 "adrfam": "IPv4", 00:16:30.058 "traddr": "10.0.0.1", 00:16:30.058 "trsvcid": "53204" 00:16:30.058 }, 00:16:30.058 "auth": { 00:16:30.058 
"state": "completed", 00:16:30.058 "digest": "sha256", 00:16:30.058 "dhgroup": "ffdhe2048" 00:16:30.058 } 00:16:30.058 } 00:16:30.058 ]' 00:16:30.058 10:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:30.058 10:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:30.058 10:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:30.058 10:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:30.058 10:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:30.058 10:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:30.058 10:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:30.058 10:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:30.623 10:05:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:02:NTZlZWM5ZmE4MThjZWIzYjUxMTBkZWY0ZGFhOGI2MDRlZWVhN2M1NzM5Y2VmN2Yy9TJSEw==: --dhchap-ctrl-secret DHHC-1:01:Y2RjYzgxMGZhOGFhNzkzOWJhZDc3MmQwOTcwODg1MTG0PvqR: 00:16:31.996 10:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:31.996 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:31.996 10:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:31.996 10:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:31.996 10:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.996 10:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:31.996 10:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:31.996 10:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:31.996 10:05:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:32.254 10:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 3 00:16:32.254 10:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:32.254 10:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:32.254 10:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:32.254 10:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@36 -- # key=key3 00:16:32.254 10:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:32.254 10:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:16:32.254 10:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.254 10:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.254 10:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.254 10:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:32.254 10:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:32.511 00:16:32.511 10:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:32.511 10:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:32.511 10:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:32.770 10:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:32.770 10:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:32.770 10:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:32.770 10:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.770 10:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:32.770 10:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:32.770 { 00:16:32.770 "cntlid": 15, 00:16:32.770 "qid": 0, 00:16:32.770 "state": "enabled", 00:16:32.770 "thread": "nvmf_tgt_poll_group_000", 00:16:32.770 "listen_address": { 00:16:32.770 "trtype": "TCP", 00:16:32.770 "adrfam": "IPv4", 00:16:32.770 "traddr": "10.0.0.2", 00:16:32.770 "trsvcid": "4420" 00:16:32.770 }, 00:16:32.770 "peer_address": { 00:16:32.770 "trtype": "TCP", 00:16:32.770 "adrfam": "IPv4", 00:16:32.770 "traddr": "10.0.0.1", 00:16:32.770 "trsvcid": "53230" 00:16:32.770 }, 00:16:32.770 "auth": { 00:16:32.770 "state": "completed", 00:16:32.770 "digest": "sha256", 00:16:32.770 "dhgroup": "ffdhe2048" 00:16:32.770 } 00:16:32.770 } 00:16:32.770 ]' 00:16:32.770 10:05:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:33.028 10:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:33.028 10:05:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:33.028 10:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:33.028 10:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:33.028 10:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:33.028 10:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:33.028 10:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:33.594 10:05:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:03:ODg3YzlhNDdhNmE2MWExZTAwNjcyZmM2ZTJmMjIwM2FkYzQwNjZhNTI5ODhjMTg1MDhjMWQ4ZGJiMDAxMTFhOZgH7UI=: 00:16:34.528 10:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:34.786 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:34.786 10:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:34.786 10:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:34.786 10:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.786 10:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:34.786 10:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:34.786 10:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:34.786 10:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:34.786 10:05:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:35.044 10:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 0 00:16:35.044 10:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:35.044 10:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:35.044 10:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:35.044 10:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:35.044 10:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:35.044 10:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
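A reading note for the assertions throughout this trace: the backslashes in patterns like [[ sha256 == \s\h\a\2\5\6 ]] are xtrace artifacts, not typos. Inside [[ ]] the right-hand side of == is a glob pattern, so the script quotes it to force a literal comparison, and set -x renders the quoted pattern with each character backslash-escaped:

    # What the trace renders as [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]:
    dhgroup=ffdhe3072
    [[ $dhgroup == "ffdhe3072" ]] && echo "literal match (quoting disables globbing)"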
nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:35.044 10:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.044 10:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.044 10:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:35.044 10:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:35.044 10:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:35.609 00:16:35.609 10:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:35.609 10:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:35.609 10:05:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:35.867 10:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:35.867 10:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:35.867 10:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:35.867 10:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.124 10:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:36.124 10:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:36.124 { 00:16:36.124 "cntlid": 17, 00:16:36.124 "qid": 0, 00:16:36.124 "state": "enabled", 00:16:36.124 "thread": "nvmf_tgt_poll_group_000", 00:16:36.124 "listen_address": { 00:16:36.124 "trtype": "TCP", 00:16:36.125 "adrfam": "IPv4", 00:16:36.125 "traddr": "10.0.0.2", 00:16:36.125 "trsvcid": "4420" 00:16:36.125 }, 00:16:36.125 "peer_address": { 00:16:36.125 "trtype": "TCP", 00:16:36.125 "adrfam": "IPv4", 00:16:36.125 "traddr": "10.0.0.1", 00:16:36.125 "trsvcid": "43130" 00:16:36.125 }, 00:16:36.125 "auth": { 00:16:36.125 "state": "completed", 00:16:36.125 "digest": "sha256", 00:16:36.125 "dhgroup": "ffdhe3072" 00:16:36.125 } 00:16:36.125 } 00:16:36.125 ]' 00:16:36.125 10:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:36.125 10:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:36.125 10:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:36.125 10:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:36.125 10:05:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:36.125 10:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:36.125 10:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:36.125 10:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:36.383 10:05:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:00:MmE3MTQ0ZmMyYmZkNWM2ZTgwM2QwMTVjZGMzZjRlNWRlNTcxYzFkMjgwMzY0YzUw+y0etQ==: --dhchap-ctrl-secret DHHC-1:03:YjQ4YTJiZjQ2NTRiODRlNjc4YjM4MDdlMzhmZGFlOTNhOGE0ZjIxMzE3YmZkZTdhYThmMmFiYmYzYzMxNWVjMysm5Vk=: 00:16:37.757 10:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:37.757 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:37.757 10:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:37.757 10:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:37.757 10:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.757 10:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:37.757 10:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:37.757 10:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:37.757 10:05:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:38.016 10:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 1 00:16:38.016 10:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:38.016 10:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:38.016 10:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:38.016 10:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:38.016 10:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:38.016 10:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:38.016 10:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.016 10:05:23 
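The --dhchap-secret and --dhchap-ctrl-secret strings handed to nvme connect above use the DHHC-1 secret representation. As a hedged reading (this layout is the usual NVMe-oF convention, not something the log itself states): the field after the first colon names the key transformation (00 for an unhashed key, 01/02/03 for SHA-256/384/512), and the base64 payload is the key with a 4-byte CRC appended. A quick byte count of the key0 host secret from this run, under that assumption:

    # Payload length of the key0 host secret seen above
    # (assumed DHHC-1:<hash>:<base64(key||CRC32)>: layout).
    secret='DHHC-1:00:MmE3MTQ0ZmMyYmZkNWM2ZTgwM2QwMTVjZGMzZjRlNWRlNTcxYzFkMjgwMzY0YzUw+y0etQ==:'
    printf '%s' "$secret" | cut -d: -f3 | base64 -d | wc -c   # 52 bytes: 48-byte key plus 4-byte CRC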
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.016 10:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.016 10:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:38.016 10:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:38.581 00:16:38.581 10:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:38.581 10:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:38.581 10:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:38.839 10:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:38.839 10:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:38.839 10:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:38.839 10:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.839 10:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:38.839 10:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:38.839 { 00:16:38.839 "cntlid": 19, 00:16:38.839 "qid": 0, 00:16:38.839 "state": "enabled", 00:16:38.839 "thread": "nvmf_tgt_poll_group_000", 00:16:38.839 "listen_address": { 00:16:38.839 "trtype": "TCP", 00:16:38.839 "adrfam": "IPv4", 00:16:38.839 "traddr": "10.0.0.2", 00:16:38.839 "trsvcid": "4420" 00:16:38.839 }, 00:16:38.839 "peer_address": { 00:16:38.839 "trtype": "TCP", 00:16:38.839 "adrfam": "IPv4", 00:16:38.839 "traddr": "10.0.0.1", 00:16:38.839 "trsvcid": "43164" 00:16:38.839 }, 00:16:38.839 "auth": { 00:16:38.839 "state": "completed", 00:16:38.839 "digest": "sha256", 00:16:38.839 "dhgroup": "ffdhe3072" 00:16:38.839 } 00:16:38.839 } 00:16:38.839 ]' 00:16:38.839 10:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:38.839 10:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:38.839 10:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:38.839 10:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:38.839 10:05:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:38.839 10:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:38.839 10:05:24 
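Each successful attach is then verified from the target's point of view: target/auth.sh@44-48 reads back the controller name and the qpair list and asserts the negotiated digest, DH group, and authentication state with jq. A sketch of that check sequence, using the filters and expected values exactly as they appear in the trace:

    # Verification step, reconstructed from target/auth.sh@44-48.
    [[ $(hostrpc bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha256 ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe3072 ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]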
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:38.839 10:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:39.405 10:05:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:01:Mzk1NTg5ZGUxNWU4MDE3ZTIwYWVkNzhhODI3NGQ0NzMtq9te: --dhchap-ctrl-secret DHHC-1:02:NWM1M2M0OWQzZTQxM2M1ZmU4YWVkNDBmNGExZTQyMWEyYzM1NzNmNmY4ZjU3YjZlY8rP+g==: 00:16:40.338 10:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:40.338 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:40.338 10:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:40.338 10:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.338 10:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.338 10:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.338 10:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:40.338 10:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:40.338 10:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:40.595 10:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 2 00:16:40.595 10:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:40.595 10:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:40.595 10:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:40.595 10:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:40.595 10:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:40.595 10:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:40.595 10:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:40.595 10:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.595 10:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:40.595 10:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:40.595 10:05:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:41.529 00:16:41.529 10:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:41.529 10:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:41.529 10:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:41.787 10:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:41.787 10:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:41.787 10:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.787 10:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.787 10:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.787 10:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:41.787 { 00:16:41.787 "cntlid": 21, 00:16:41.787 "qid": 0, 00:16:41.787 "state": "enabled", 00:16:41.787 "thread": "nvmf_tgt_poll_group_000", 00:16:41.787 "listen_address": { 00:16:41.787 "trtype": "TCP", 00:16:41.787 "adrfam": "IPv4", 00:16:41.787 "traddr": "10.0.0.2", 00:16:41.787 "trsvcid": "4420" 00:16:41.787 }, 00:16:41.787 "peer_address": { 00:16:41.787 "trtype": "TCP", 00:16:41.787 "adrfam": "IPv4", 00:16:41.787 "traddr": "10.0.0.1", 00:16:41.787 "trsvcid": "43186" 00:16:41.787 }, 00:16:41.787 "auth": { 00:16:41.787 "state": "completed", 00:16:41.787 "digest": "sha256", 00:16:41.787 "dhgroup": "ffdhe3072" 00:16:41.787 } 00:16:41.787 } 00:16:41.787 ]' 00:16:41.787 10:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:41.787 10:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:41.787 10:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:41.787 10:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:41.787 10:05:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:42.076 10:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:42.076 10:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:42.076 10:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:42.334 
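Two separate SPDK processes are being driven throughout this section: rpc_cmd issues the nvmf_* calls to the target application, while hostrpc points rpc.py at /var/tmp/host.sock, a second instance acting as the NVMe-oF initiator (bdev_nvme_set_options, bdev_nvme_attach_controller, bdev_nvme_detach_controller). Roughly, with the host socket path taken from the log and the rpc_cmd definition and target default socket assumed:

    # Sketch of the two RPC channels used in this test.
    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    hostrpc() { "$RPC" -s /var/tmp/host.sock "$@"; }   # initiator-side bdev_nvme_* calls
    rpc_cmd() { "$RPC" "$@"; }                         # target-side nvmf_* calls, default socket assumed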
10:05:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:02:NTZlZWM5ZmE4MThjZWIzYjUxMTBkZWY0ZGFhOGI2MDRlZWVhN2M1NzM5Y2VmN2Yy9TJSEw==: --dhchap-ctrl-secret DHHC-1:01:Y2RjYzgxMGZhOGFhNzkzOWJhZDc3MmQwOTcwODg1MTG0PvqR: 00:16:43.267 10:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:43.267 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:43.267 10:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:43.267 10:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.267 10:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.267 10:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.267 10:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:43.267 10:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:43.267 10:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:43.833 10:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 3 00:16:43.833 10:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:43.833 10:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:43.833 10:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:43.833 10:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:43.833 10:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:43.833 10:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:16:43.833 10:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.833 10:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.833 10:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.833 10:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:43.833 10:05:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:44.399 00:16:44.399 10:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:44.399 10:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:44.399 10:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:44.658 10:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:44.658 10:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:44.658 10:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.658 10:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.658 10:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.658 10:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:44.658 { 00:16:44.658 "cntlid": 23, 00:16:44.658 "qid": 0, 00:16:44.658 "state": "enabled", 00:16:44.658 "thread": "nvmf_tgt_poll_group_000", 00:16:44.658 "listen_address": { 00:16:44.658 "trtype": "TCP", 00:16:44.658 "adrfam": "IPv4", 00:16:44.658 "traddr": "10.0.0.2", 00:16:44.658 "trsvcid": "4420" 00:16:44.658 }, 00:16:44.658 "peer_address": { 00:16:44.658 "trtype": "TCP", 00:16:44.658 "adrfam": "IPv4", 00:16:44.658 "traddr": "10.0.0.1", 00:16:44.658 "trsvcid": "44716" 00:16:44.658 }, 00:16:44.658 "auth": { 00:16:44.658 "state": "completed", 00:16:44.658 "digest": "sha256", 00:16:44.658 "dhgroup": "ffdhe3072" 00:16:44.658 } 00:16:44.658 } 00:16:44.658 ]' 00:16:44.658 10:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:44.658 10:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:44.658 10:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:44.916 10:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:44.916 10:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:44.916 10:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:44.916 10:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:44.916 10:05:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:45.173 10:05:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:03:ODg3YzlhNDdhNmE2MWExZTAwNjcyZmM2ZTJmMjIwM2FkYzQwNjZhNTI5ODhjMTg1MDhjMWQ4ZGJiMDAxMTFhOZgH7UI=: 00:16:46.104 10:05:31 
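Note the asymmetry in the key3 iteration just above: nvmf_subsystem_add_host and bdev_nvme_attach_controller are called with --dhchap-key key3 but no --dhchap-ctrlr-key, and the kernel connect passes only --dhchap-secret. There is no ckey3, so this leg exercises unidirectional (host-only) authentication. The trace line at target/auth.sh@37 shows how the script drops the argument when the controller key is absent:

    # Pattern from the target/auth.sh@37 trace: the array expands to nothing when
    # ckeys[$3] is unset, so key3 runs without bidirectional authentication.
    ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
    rpc_cmd nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key "key$3" "${ckey[@]}"
    # $subnqn/$hostnqn stand in for the literal NQNs used in this run.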
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:46.104 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:46.104 10:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:46.104 10:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.104 10:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.104 10:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.104 10:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:46.104 10:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:46.104 10:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:46.104 10:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:46.668 10:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 0 00:16:46.668 10:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:46.668 10:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:46.668 10:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:46.668 10:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:46.668 10:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:46.669 10:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:46.669 10:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.669 10:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.669 10:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.669 10:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:46.669 10:05:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:46.927 00:16:46.927 10:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:46.927 10:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:46.927 10:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:47.492 10:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:47.492 10:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:47.492 10:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.492 10:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.492 10:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.492 10:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:47.492 { 00:16:47.492 "cntlid": 25, 00:16:47.492 "qid": 0, 00:16:47.492 "state": "enabled", 00:16:47.492 "thread": "nvmf_tgt_poll_group_000", 00:16:47.492 "listen_address": { 00:16:47.492 "trtype": "TCP", 00:16:47.492 "adrfam": "IPv4", 00:16:47.492 "traddr": "10.0.0.2", 00:16:47.492 "trsvcid": "4420" 00:16:47.492 }, 00:16:47.492 "peer_address": { 00:16:47.492 "trtype": "TCP", 00:16:47.492 "adrfam": "IPv4", 00:16:47.492 "traddr": "10.0.0.1", 00:16:47.492 "trsvcid": "44750" 00:16:47.492 }, 00:16:47.492 "auth": { 00:16:47.492 "state": "completed", 00:16:47.492 "digest": "sha256", 00:16:47.492 "dhgroup": "ffdhe4096" 00:16:47.492 } 00:16:47.492 } 00:16:47.492 ]' 00:16:47.492 10:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:47.492 10:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:47.492 10:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:47.492 10:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:47.492 10:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:47.492 10:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:47.492 10:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:47.492 10:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:48.055 10:05:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:00:MmE3MTQ0ZmMyYmZkNWM2ZTgwM2QwMTVjZGMzZjRlNWRlNTcxYzFkMjgwMzY0YzUw+y0etQ==: --dhchap-ctrl-secret DHHC-1:03:YjQ4YTJiZjQ2NTRiODRlNjc4YjM4MDdlMzhmZGFlOTNhOGE0ZjIxMzE3YmZkZTdhYThmMmFiYmYzYzMxNWVjMysm5Vk=: 00:16:48.988 10:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:48.988 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
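After the SPDK-initiator path is verified and detached, each iteration repeats the handshake with the Linux kernel initiator: nvme connect is given the same key material in DHHC-1 form, and a clean disconnect confirms that the kernel-side DH-HMAC-CHAP negotiation also succeeds. The shape of that leg, with flags exactly as in the log and the secret variables as placeholders:

    # Kernel-initiator leg of one iteration; $key/$ckey hold the DHHC-1 strings.
    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q "$hostnqn" --hostid "$hostid" \
        --dhchap-secret "$key" ${ckey:+--dhchap-ctrl-secret "$ckey"}
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0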
00:16:48.988 10:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:48.988 10:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.988 10:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.988 10:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.988 10:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:48.988 10:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:48.988 10:05:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:49.246 10:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 1 00:16:49.246 10:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:49.246 10:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:49.246 10:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:49.246 10:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:49.246 10:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:49.246 10:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:49.246 10:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.246 10:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.246 10:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:49.246 10:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:49.246 10:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:49.810 00:16:49.810 10:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:49.810 10:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:49.810 10:05:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:50.381 10:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:50.381 10:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:50.381 10:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.381 10:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.381 10:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.381 10:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:50.381 { 00:16:50.381 "cntlid": 27, 00:16:50.381 "qid": 0, 00:16:50.381 "state": "enabled", 00:16:50.381 "thread": "nvmf_tgt_poll_group_000", 00:16:50.381 "listen_address": { 00:16:50.381 "trtype": "TCP", 00:16:50.381 "adrfam": "IPv4", 00:16:50.381 "traddr": "10.0.0.2", 00:16:50.381 "trsvcid": "4420" 00:16:50.381 }, 00:16:50.381 "peer_address": { 00:16:50.381 "trtype": "TCP", 00:16:50.381 "adrfam": "IPv4", 00:16:50.381 "traddr": "10.0.0.1", 00:16:50.381 "trsvcid": "44782" 00:16:50.381 }, 00:16:50.381 "auth": { 00:16:50.381 "state": "completed", 00:16:50.381 "digest": "sha256", 00:16:50.381 "dhgroup": "ffdhe4096" 00:16:50.381 } 00:16:50.381 } 00:16:50.381 ]' 00:16:50.381 10:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:50.381 10:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:50.381 10:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:50.381 10:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:50.381 10:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:50.640 10:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:50.640 10:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:50.640 10:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:50.898 10:05:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:01:Mzk1NTg5ZGUxNWU4MDE3ZTIwYWVkNzhhODI3NGQ0NzMtq9te: --dhchap-ctrl-secret DHHC-1:02:NWM1M2M0OWQzZTQxM2M1ZmU4YWVkNDBmNGExZTQyMWEyYzM1NzNmNmY4ZjU3YjZlY8rP+g==: 00:16:51.830 10:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:51.830 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:51.830 10:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:51.830 10:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.830 10:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.830 10:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.830 10:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:51.830 10:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:51.830 10:05:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:52.396 10:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 2 00:16:52.396 10:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:52.396 10:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:52.396 10:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:52.396 10:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:52.396 10:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:52.397 10:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:52.397 10:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.397 10:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.397 10:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.397 10:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:52.397 10:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:52.962 00:16:52.962 10:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:52.962 10:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:52.962 10:05:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:53.220 10:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:53.220 10:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:53.220 10:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.220 10:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.220 10:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.220 10:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:53.220 { 00:16:53.220 "cntlid": 29, 00:16:53.220 "qid": 0, 00:16:53.220 "state": "enabled", 00:16:53.220 "thread": "nvmf_tgt_poll_group_000", 00:16:53.220 "listen_address": { 00:16:53.220 "trtype": "TCP", 00:16:53.220 "adrfam": "IPv4", 00:16:53.220 "traddr": "10.0.0.2", 00:16:53.220 "trsvcid": "4420" 00:16:53.220 }, 00:16:53.220 "peer_address": { 00:16:53.220 "trtype": "TCP", 00:16:53.220 "adrfam": "IPv4", 00:16:53.220 "traddr": "10.0.0.1", 00:16:53.220 "trsvcid": "44816" 00:16:53.220 }, 00:16:53.220 "auth": { 00:16:53.220 "state": "completed", 00:16:53.220 "digest": "sha256", 00:16:53.220 "dhgroup": "ffdhe4096" 00:16:53.220 } 00:16:53.220 } 00:16:53.220 ]' 00:16:53.220 10:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:53.220 10:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:53.220 10:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:53.481 10:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:53.481 10:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:53.481 10:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:53.481 10:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:53.481 10:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:53.739 10:05:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:02:NTZlZWM5ZmE4MThjZWIzYjUxMTBkZWY0ZGFhOGI2MDRlZWVhN2M1NzM5Y2VmN2Yy9TJSEw==: --dhchap-ctrl-secret DHHC-1:01:Y2RjYzgxMGZhOGFhNzkzOWJhZDc3MmQwOTcwODg1MTG0PvqR: 00:16:55.113 10:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:55.113 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:55.113 10:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:55.113 10:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.113 10:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.113 10:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.113 10:05:39 
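Each (digest, dhgroup, key) combination therefore runs the same lifecycle, repeated throughout this section: add the host with its DHCHAP keys on the target, attach and verify from the SPDK initiator, detach, repeat the connect from the kernel initiator, then remove the host entry so the next combination starts from a clean allow-list. Condensed into one sequence (a sketch stitched from the trace, not the literal script):

    # One connect_authenticate iteration, end to end.
    rpc_cmd nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key "key$i" "${ckey[@]}"
    hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q "$hostnqn" -n "$subnqn" --dhchap-key "key$i" "${ckey[@]}"
    # ... jq assertions on nvmf_subsystem_get_qpairs, as sketched earlier ...
    hostrpc bdev_nvme_detach_controller nvme0
    nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 -q "$hostnqn" --hostid "$hostid" --dhchap-secret "$secret"
    nvme disconnect -n "$subnqn"
    rpc_cmd nvmf_subsystem_remove_host "$subnqn" "$hostnqn"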
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:55.113 10:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:55.113 10:05:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:55.371 10:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 3 00:16:55.371 10:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:55.371 10:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:55.371 10:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:55.371 10:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:55.371 10:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:55.371 10:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:16:55.371 10:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.371 10:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.371 10:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:55.371 10:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:55.371 10:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:55.629 00:16:55.894 10:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:55.894 10:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:55.894 10:05:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:56.203 10:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:56.203 10:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:56.203 10:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.203 10:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.203 10:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:16:56.203 10:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:56.203 { 00:16:56.203 "cntlid": 31, 00:16:56.203 "qid": 0, 00:16:56.203 "state": "enabled", 00:16:56.203 "thread": "nvmf_tgt_poll_group_000", 00:16:56.203 "listen_address": { 00:16:56.203 "trtype": "TCP", 00:16:56.203 "adrfam": "IPv4", 00:16:56.203 "traddr": "10.0.0.2", 00:16:56.203 "trsvcid": "4420" 00:16:56.203 }, 00:16:56.203 "peer_address": { 00:16:56.203 "trtype": "TCP", 00:16:56.203 "adrfam": "IPv4", 00:16:56.203 "traddr": "10.0.0.1", 00:16:56.203 "trsvcid": "39588" 00:16:56.203 }, 00:16:56.203 "auth": { 00:16:56.203 "state": "completed", 00:16:56.203 "digest": "sha256", 00:16:56.203 "dhgroup": "ffdhe4096" 00:16:56.203 } 00:16:56.203 } 00:16:56.203 ]' 00:16:56.203 10:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:56.203 10:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:56.203 10:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:56.203 10:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:56.203 10:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:56.203 10:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:56.203 10:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:56.203 10:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:56.769 10:05:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:03:ODg3YzlhNDdhNmE2MWExZTAwNjcyZmM2ZTJmMjIwM2FkYzQwNjZhNTI5ODhjMTg1MDhjMWQ4ZGJiMDAxMTFhOZgH7UI=: 00:16:57.703 10:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:57.703 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:57.703 10:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:16:57.703 10:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.703 10:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.961 10:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.961 10:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:57.961 10:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:57.961 10:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:57.961 10:05:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:58.526 10:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 0 00:16:58.526 10:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:58.526 10:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:58.526 10:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:58.526 10:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:58.526 10:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:58.526 10:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:58.526 10:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.526 10:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.526 10:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.526 10:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:58.526 10:05:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:59.091 00:16:59.091 10:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:59.091 10:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:59.091 10:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:59.350 10:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:59.350 10:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:59.350 10:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:59.350 10:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.350 10:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:59.350 10:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:59.350 { 00:16:59.350 "cntlid": 33, 00:16:59.350 "qid": 0, 00:16:59.350 "state": "enabled", 00:16:59.350 "thread": "nvmf_tgt_poll_group_000", 00:16:59.350 "listen_address": { 
00:16:59.350 "trtype": "TCP", 00:16:59.350 "adrfam": "IPv4", 00:16:59.350 "traddr": "10.0.0.2", 00:16:59.350 "trsvcid": "4420" 00:16:59.350 }, 00:16:59.350 "peer_address": { 00:16:59.350 "trtype": "TCP", 00:16:59.350 "adrfam": "IPv4", 00:16:59.350 "traddr": "10.0.0.1", 00:16:59.350 "trsvcid": "39626" 00:16:59.350 }, 00:16:59.350 "auth": { 00:16:59.350 "state": "completed", 00:16:59.350 "digest": "sha256", 00:16:59.350 "dhgroup": "ffdhe6144" 00:16:59.350 } 00:16:59.350 } 00:16:59.350 ]' 00:16:59.350 10:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:59.608 10:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:59.608 10:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:59.608 10:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:59.608 10:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:59.608 10:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:59.608 10:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:59.608 10:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:59.865 10:05:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:00:MmE3MTQ0ZmMyYmZkNWM2ZTgwM2QwMTVjZGMzZjRlNWRlNTcxYzFkMjgwMzY0YzUw+y0etQ==: --dhchap-ctrl-secret DHHC-1:03:YjQ4YTJiZjQ2NTRiODRlNjc4YjM4MDdlMzhmZGFlOTNhOGE0ZjIxMzE3YmZkZTdhYThmMmFiYmYzYzMxNWVjMysm5Vk=: 00:17:01.238 10:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:01.238 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:01.238 10:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:17:01.238 10:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.238 10:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.238 10:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.238 10:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:01.238 10:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:01.238 10:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:01.804 10:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 1 00:17:01.804 10:05:46 
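One incidental detail in the qpair dumps: the cntlid advances by two with every fresh attach (17, 19, 21, ... up to 33 by this point), so the target apparently allocates a new controller ID for each association rather than reusing the previous one. The field sits at the top level of each qpair object, so it can be watched with the same RPC used above:

    # cntlid of the current association (field visible in the qpairs dumps above).
    rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 | jq -r '.[0].cntlid'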
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:01.804 10:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:01.804 10:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:01.804 10:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:01.804 10:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:01.804 10:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:01.804 10:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:01.804 10:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.804 10:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:01.804 10:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:01.804 10:05:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:02.370 00:17:02.370 10:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:02.370 10:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:02.370 10:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:02.935 10:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:02.935 10:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:02.935 10:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:02.935 10:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.935 10:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:02.935 10:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:02.935 { 00:17:02.935 "cntlid": 35, 00:17:02.935 "qid": 0, 00:17:02.935 "state": "enabled", 00:17:02.935 "thread": "nvmf_tgt_poll_group_000", 00:17:02.935 "listen_address": { 00:17:02.935 "trtype": "TCP", 00:17:02.935 "adrfam": "IPv4", 00:17:02.935 "traddr": "10.0.0.2", 00:17:02.935 "trsvcid": "4420" 00:17:02.935 }, 00:17:02.935 "peer_address": { 00:17:02.935 "trtype": "TCP", 00:17:02.935 "adrfam": "IPv4", 00:17:02.935 "traddr": "10.0.0.1", 00:17:02.935 "trsvcid": "39656" 00:17:02.935 
}, 00:17:02.935 "auth": { 00:17:02.935 "state": "completed", 00:17:02.935 "digest": "sha256", 00:17:02.935 "dhgroup": "ffdhe6144" 00:17:02.935 } 00:17:02.935 } 00:17:02.935 ]' 00:17:02.935 10:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:02.935 10:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:02.935 10:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:02.935 10:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:02.935 10:05:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:02.935 10:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:02.935 10:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:02.935 10:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:03.500 10:05:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:01:Mzk1NTg5ZGUxNWU4MDE3ZTIwYWVkNzhhODI3NGQ0NzMtq9te: --dhchap-ctrl-secret DHHC-1:02:NWM1M2M0OWQzZTQxM2M1ZmU4YWVkNDBmNGExZTQyMWEyYzM1NzNmNmY4ZjU3YjZlY8rP+g==: 00:17:04.433 10:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:04.433 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:04.433 10:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:17:04.433 10:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.433 10:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.433 10:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:04.433 10:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:04.433 10:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:04.433 10:05:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:04.999 10:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 2 00:17:04.999 10:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:04.999 10:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:04.999 10:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:04.999 10:05:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:04.999 10:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:04.999 10:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:04.999 10:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:04.999 10:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.000 10:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:05.000 10:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:05.000 10:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:05.933 00:17:05.933 10:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:05.933 10:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:05.933 10:05:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:06.190 10:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:06.190 10:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:06.190 10:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:06.190 10:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.448 10:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:06.448 10:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:06.448 { 00:17:06.448 "cntlid": 37, 00:17:06.448 "qid": 0, 00:17:06.448 "state": "enabled", 00:17:06.448 "thread": "nvmf_tgt_poll_group_000", 00:17:06.448 "listen_address": { 00:17:06.448 "trtype": "TCP", 00:17:06.448 "adrfam": "IPv4", 00:17:06.448 "traddr": "10.0.0.2", 00:17:06.448 "trsvcid": "4420" 00:17:06.448 }, 00:17:06.448 "peer_address": { 00:17:06.448 "trtype": "TCP", 00:17:06.448 "adrfam": "IPv4", 00:17:06.448 "traddr": "10.0.0.1", 00:17:06.448 "trsvcid": "53578" 00:17:06.448 }, 00:17:06.448 "auth": { 00:17:06.448 "state": "completed", 00:17:06.448 "digest": "sha256", 00:17:06.448 "dhgroup": "ffdhe6144" 00:17:06.448 } 00:17:06.448 } 00:17:06.448 ]' 00:17:06.448 10:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:06.448 10:05:51 
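
The pass/fail checks that follow each attach are plain jq assertions over the target's qpair dump. A sketch of that verification step, with the subsystem NQN taken from the log and the expected digest/dhgroup substituted per iteration (this call, like rpc_cmd above, assumes rpc.py's default target socket):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    qpairs=$("$rpc" nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    # The qpair's auth block must report the negotiated parameters and a completed handshake.
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha256 ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe6144 ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]
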
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:06.448 10:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:06.448 10:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:06.448 10:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:06.448 10:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:06.448 10:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:06.448 10:05:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:07.013 10:05:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:02:NTZlZWM5ZmE4MThjZWIzYjUxMTBkZWY0ZGFhOGI2MDRlZWVhN2M1NzM5Y2VmN2Yy9TJSEw==: --dhchap-ctrl-secret DHHC-1:01:Y2RjYzgxMGZhOGFhNzkzOWJhZDc3MmQwOTcwODg1MTG0PvqR: 00:17:08.389 10:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:08.389 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:08.389 10:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:17:08.389 10:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.389 10:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.389 10:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.389 10:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:08.389 10:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:08.389 10:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:17:08.389 10:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 3 00:17:08.389 10:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:08.389 10:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:08.389 10:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:08.389 10:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:08.389 10:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:08.389 10:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:17:08.389 10:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.389 10:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.389 10:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.389 10:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:08.389 10:05:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:09.327 00:17:09.327 10:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:09.327 10:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:09.327 10:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:09.891 10:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:09.891 10:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:09.891 10:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.891 10:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.891 10:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.891 10:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:09.891 { 00:17:09.891 "cntlid": 39, 00:17:09.891 "qid": 0, 00:17:09.891 "state": "enabled", 00:17:09.891 "thread": "nvmf_tgt_poll_group_000", 00:17:09.891 "listen_address": { 00:17:09.891 "trtype": "TCP", 00:17:09.891 "adrfam": "IPv4", 00:17:09.891 "traddr": "10.0.0.2", 00:17:09.891 "trsvcid": "4420" 00:17:09.891 }, 00:17:09.891 "peer_address": { 00:17:09.891 "trtype": "TCP", 00:17:09.891 "adrfam": "IPv4", 00:17:09.891 "traddr": "10.0.0.1", 00:17:09.891 "trsvcid": "53600" 00:17:09.891 }, 00:17:09.891 "auth": { 00:17:09.891 "state": "completed", 00:17:09.891 "digest": "sha256", 00:17:09.891 "dhgroup": "ffdhe6144" 00:17:09.891 } 00:17:09.891 } 00:17:09.891 ]' 00:17:09.891 10:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:09.891 10:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:09.891 10:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:09.891 10:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:09.891 10:05:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:09.891 10:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:09.891 10:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:09.891 10:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:10.487 10:05:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:03:ODg3YzlhNDdhNmE2MWExZTAwNjcyZmM2ZTJmMjIwM2FkYzQwNjZhNTI5ODhjMTg1MDhjMWQ4ZGJiMDAxMTFhOZgH7UI=: 00:17:11.860 10:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:11.860 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:11.860 10:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:17:11.860 10:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:11.860 10:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.860 10:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:11.860 10:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:11.860 10:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:11.860 10:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:11.860 10:05:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:12.425 10:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 0 00:17:12.425 10:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:12.425 10:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:12.425 10:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:12.425 10:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:12.425 10:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:12.425 10:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:12.425 10:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:12.425 10:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:17:12.425 10:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:12.425 10:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:12.425 10:05:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:13.796 00:17:13.796 10:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:13.796 10:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:13.796 10:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:13.796 10:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:13.796 10:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:13.796 10:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:13.796 10:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.796 10:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:13.796 10:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:13.796 { 00:17:13.796 "cntlid": 41, 00:17:13.796 "qid": 0, 00:17:13.796 "state": "enabled", 00:17:13.796 "thread": "nvmf_tgt_poll_group_000", 00:17:13.796 "listen_address": { 00:17:13.796 "trtype": "TCP", 00:17:13.796 "adrfam": "IPv4", 00:17:13.796 "traddr": "10.0.0.2", 00:17:13.796 "trsvcid": "4420" 00:17:13.796 }, 00:17:13.796 "peer_address": { 00:17:13.796 "trtype": "TCP", 00:17:13.796 "adrfam": "IPv4", 00:17:13.796 "traddr": "10.0.0.1", 00:17:13.796 "trsvcid": "53628" 00:17:13.796 }, 00:17:13.796 "auth": { 00:17:13.796 "state": "completed", 00:17:13.796 "digest": "sha256", 00:17:13.796 "dhgroup": "ffdhe8192" 00:17:13.796 } 00:17:13.796 } 00:17:13.796 ]' 00:17:13.796 10:05:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:14.051 10:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:14.051 10:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:14.051 10:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:14.051 10:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:14.051 10:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:14.051 10:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:17:14.051 10:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:14.612 10:05:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:00:MmE3MTQ0ZmMyYmZkNWM2ZTgwM2QwMTVjZGMzZjRlNWRlNTcxYzFkMjgwMzY0YzUw+y0etQ==: --dhchap-ctrl-secret DHHC-1:03:YjQ4YTJiZjQ2NTRiODRlNjc4YjM4MDdlMzhmZGFlOTNhOGE0ZjIxMzE3YmZkZTdhYThmMmFiYmYzYzMxNWVjMysm5Vk=: 00:17:15.545 10:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:15.545 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:15.545 10:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:17:15.545 10:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.545 10:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.545 10:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.545 10:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:15.546 10:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:15.546 10:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:15.803 10:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 1 00:17:15.803 10:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:15.803 10:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:15.803 10:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:15.803 10:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:15.803 10:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:15.804 10:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:15.804 10:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.804 10:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.804 10:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.804 10:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 
-a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:15.804 10:06:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:16.734 00:17:16.734 10:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:16.734 10:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:16.734 10:06:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:17.299 10:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:17.299 10:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:17.299 10:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:17.299 10:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.299 10:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:17.299 10:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:17.299 { 00:17:17.299 "cntlid": 43, 00:17:17.299 "qid": 0, 00:17:17.299 "state": "enabled", 00:17:17.299 "thread": "nvmf_tgt_poll_group_000", 00:17:17.299 "listen_address": { 00:17:17.299 "trtype": "TCP", 00:17:17.299 "adrfam": "IPv4", 00:17:17.299 "traddr": "10.0.0.2", 00:17:17.299 "trsvcid": "4420" 00:17:17.299 }, 00:17:17.299 "peer_address": { 00:17:17.299 "trtype": "TCP", 00:17:17.299 "adrfam": "IPv4", 00:17:17.299 "traddr": "10.0.0.1", 00:17:17.299 "trsvcid": "38348" 00:17:17.299 }, 00:17:17.299 "auth": { 00:17:17.299 "state": "completed", 00:17:17.299 "digest": "sha256", 00:17:17.299 "dhgroup": "ffdhe8192" 00:17:17.299 } 00:17:17.299 } 00:17:17.299 ]' 00:17:17.299 10:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:17.299 10:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:17.299 10:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:17.299 10:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:17.299 10:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:17.299 10:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:17.299 10:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:17.299 10:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:17.557 10:06:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:01:Mzk1NTg5ZGUxNWU4MDE3ZTIwYWVkNzhhODI3NGQ0NzMtq9te: --dhchap-ctrl-secret DHHC-1:02:NWM1M2M0OWQzZTQxM2M1ZmU4YWVkNDBmNGExZTQyMWEyYzM1NzNmNmY4ZjU3YjZlY8rP+g==: 00:17:18.927 10:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:18.927 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:18.927 10:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:17:18.927 10:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:18.927 10:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.927 10:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:18.927 10:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:18.927 10:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:18.927 10:06:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:19.184 10:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 2 00:17:19.184 10:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:19.184 10:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:19.184 10:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:19.184 10:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:19.184 10:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:19.184 10:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:19.184 10:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:19.184 10:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.184 10:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:19.184 10:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:19.184 10:06:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:20.116 00:17:20.116 10:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:20.116 10:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:20.116 10:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:20.683 10:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:20.683 10:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:20.683 10:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:20.683 10:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.683 10:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:20.683 10:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:20.683 { 00:17:20.683 "cntlid": 45, 00:17:20.683 "qid": 0, 00:17:20.683 "state": "enabled", 00:17:20.683 "thread": "nvmf_tgt_poll_group_000", 00:17:20.683 "listen_address": { 00:17:20.683 "trtype": "TCP", 00:17:20.683 "adrfam": "IPv4", 00:17:20.683 "traddr": "10.0.0.2", 00:17:20.683 "trsvcid": "4420" 00:17:20.683 }, 00:17:20.683 "peer_address": { 00:17:20.683 "trtype": "TCP", 00:17:20.683 "adrfam": "IPv4", 00:17:20.683 "traddr": "10.0.0.1", 00:17:20.683 "trsvcid": "38382" 00:17:20.683 }, 00:17:20.683 "auth": { 00:17:20.683 "state": "completed", 00:17:20.683 "digest": "sha256", 00:17:20.683 "dhgroup": "ffdhe8192" 00:17:20.683 } 00:17:20.683 } 00:17:20.683 ]' 00:17:20.683 10:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:20.683 10:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:20.683 10:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:20.683 10:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:20.683 10:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:20.683 10:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:20.683 10:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:20.683 10:06:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:21.249 10:06:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:02:NTZlZWM5ZmE4MThjZWIzYjUxMTBkZWY0ZGFhOGI2MDRlZWVhN2M1NzM5Y2VmN2Yy9TJSEw==: --dhchap-ctrl-secret 
DHHC-1:01:Y2RjYzgxMGZhOGFhNzkzOWJhZDc3MmQwOTcwODg1MTG0PvqR: 00:17:22.183 10:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:22.183 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:22.183 10:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:17:22.183 10:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:22.183 10:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.183 10:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.183 10:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:22.183 10:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:22.183 10:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:22.440 10:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 3 00:17:22.440 10:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:22.440 10:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:22.440 10:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:22.440 10:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:22.440 10:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:22.440 10:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:17:22.440 10:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:22.440 10:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.440 10:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:22.440 10:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:22.441 10:06:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:23.374 00:17:23.374 10:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:23.374 10:06:08 
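
Interleaved with the RPC checks, each iteration also re-validates the path with the kernel initiator: nvme-cli connects in-band using the DH-HMAC-CHAP secrets and then disconnects. A sketch of that step with the address, NQNs and hostid from the log; the secret values are placeholders for the DHHC-1 blobs printed above, and for the key3 iterations only --dhchap-secret is passed, since no controller key is configured there:

    host_secret='DHHC-1:02:...'   # placeholder: the host secret printed in the trace above
    ctrl_secret='DHHC-1:01:...'   # placeholder: the controller-side secret
    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 \
        --hostid cd6acfbe-4794-e311-a299-001e67a97b02 \
        --dhchap-secret "$host_secret" --dhchap-ctrl-secret "$ctrl_secret"
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0
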
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:23.374 10:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:23.941 10:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:23.941 10:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:23.941 10:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:23.941 10:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.941 10:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:23.941 10:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:23.941 { 00:17:23.941 "cntlid": 47, 00:17:23.941 "qid": 0, 00:17:23.941 "state": "enabled", 00:17:23.941 "thread": "nvmf_tgt_poll_group_000", 00:17:23.941 "listen_address": { 00:17:23.941 "trtype": "TCP", 00:17:23.941 "adrfam": "IPv4", 00:17:23.941 "traddr": "10.0.0.2", 00:17:23.941 "trsvcid": "4420" 00:17:23.941 }, 00:17:23.941 "peer_address": { 00:17:23.941 "trtype": "TCP", 00:17:23.941 "adrfam": "IPv4", 00:17:23.941 "traddr": "10.0.0.1", 00:17:23.941 "trsvcid": "38402" 00:17:23.941 }, 00:17:23.941 "auth": { 00:17:23.941 "state": "completed", 00:17:23.941 "digest": "sha256", 00:17:23.941 "dhgroup": "ffdhe8192" 00:17:23.941 } 00:17:23.941 } 00:17:23.941 ]' 00:17:23.941 10:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:23.941 10:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:23.941 10:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:23.941 10:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:23.941 10:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:23.941 10:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:23.941 10:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:23.941 10:06:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:24.199 10:06:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:03:ODg3YzlhNDdhNmE2MWExZTAwNjcyZmM2ZTJmMjIwM2FkYzQwNjZhNTI5ODhjMTg1MDhjMWQ4ZGJiMDAxMTFhOZgH7UI=: 00:17:25.135 10:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:25.135 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:25.135 10:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:17:25.135 10:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.135 10:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.431 10:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.431 10:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:17:25.431 10:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:25.431 10:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:25.431 10:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:25.431 10:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:25.998 10:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 0 00:17:25.998 10:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:25.998 10:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:25.998 10:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:25.998 10:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:25.998 10:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:25.998 10:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:25.998 10:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:25.998 10:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.998 10:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:25.998 10:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:25.998 10:06:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:26.257 00:17:26.257 10:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:26.257 10:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:26.257 10:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:26.514 10:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:26.514 10:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:26.772 10:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:26.772 10:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.772 10:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:26.772 10:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:26.772 { 00:17:26.772 "cntlid": 49, 00:17:26.772 "qid": 0, 00:17:26.772 "state": "enabled", 00:17:26.772 "thread": "nvmf_tgt_poll_group_000", 00:17:26.772 "listen_address": { 00:17:26.772 "trtype": "TCP", 00:17:26.772 "adrfam": "IPv4", 00:17:26.772 "traddr": "10.0.0.2", 00:17:26.772 "trsvcid": "4420" 00:17:26.772 }, 00:17:26.772 "peer_address": { 00:17:26.772 "trtype": "TCP", 00:17:26.772 "adrfam": "IPv4", 00:17:26.772 "traddr": "10.0.0.1", 00:17:26.772 "trsvcid": "56092" 00:17:26.772 }, 00:17:26.772 "auth": { 00:17:26.772 "state": "completed", 00:17:26.772 "digest": "sha384", 00:17:26.772 "dhgroup": "null" 00:17:26.772 } 00:17:26.772 } 00:17:26.772 ]' 00:17:26.772 10:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:26.773 10:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:26.773 10:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:26.773 10:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:26.773 10:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:26.773 10:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:26.773 10:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:26.773 10:06:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:27.338 10:06:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:00:MmE3MTQ0ZmMyYmZkNWM2ZTgwM2QwMTVjZGMzZjRlNWRlNTcxYzFkMjgwMzY0YzUw+y0etQ==: --dhchap-ctrl-secret DHHC-1:03:YjQ4YTJiZjQ2NTRiODRlNjc4YjM4MDdlMzhmZGFlOTNhOGE0ZjIxMzE3YmZkZTdhYThmMmFiYmYzYzMxNWVjMysm5Vk=: 00:17:28.713 10:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:28.713 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:28.713 10:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:17:28.713 10:06:13 
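
The @91-@94 markers a few entries back expose the driver behind all of these cycles: nested for-loops over digests, DH groups and key ids, re-applying bdev_nvme_set_options before every connect_authenticate. A sketch of that loop as inferred from the trace; hostrpc's body is copied from the target/auth.sh@31 lines above, and the array contents are limited to the values that actually appear in this excerpt:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    hostrpc() { "$rpc" -s /var/tmp/host.sock "$@"; }
    digests=(sha256 sha384)               # only these digests appear in this excerpt
    dhgroups=(ffdhe6144 ffdhe8192 null)   # likewise; the full script may cover more groups
    for digest in "${digests[@]}"; do
        for dhgroup in "${dhgroups[@]}"; do
            for keyid in 0 1 2 3; do      # the trace walks key0..key3 per combination
                # Restrict the host to one digest/dhgroup so the negotiated values are deterministic.
                hostrpc bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
                connect_authenticate "$digest" "$dhgroup" "$keyid"   # helper defined earlier in target/auth.sh
            done
        done
    done
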
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.713 10:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.713 10:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.713 10:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:28.713 10:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:28.713 10:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:28.971 10:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 1 00:17:28.971 10:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:28.971 10:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:28.971 10:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:28.971 10:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:28.971 10:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:28.971 10:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:28.971 10:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:28.971 10:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.971 10:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:28.971 10:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:28.971 10:06:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:29.228 00:17:29.228 10:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:29.228 10:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:29.228 10:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:29.793 10:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:29.793 10:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # 
rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:29.793 10:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:29.793 10:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.793 10:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:29.793 10:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:29.793 { 00:17:29.793 "cntlid": 51, 00:17:29.793 "qid": 0, 00:17:29.793 "state": "enabled", 00:17:29.793 "thread": "nvmf_tgt_poll_group_000", 00:17:29.793 "listen_address": { 00:17:29.793 "trtype": "TCP", 00:17:29.793 "adrfam": "IPv4", 00:17:29.793 "traddr": "10.0.0.2", 00:17:29.793 "trsvcid": "4420" 00:17:29.793 }, 00:17:29.793 "peer_address": { 00:17:29.793 "trtype": "TCP", 00:17:29.793 "adrfam": "IPv4", 00:17:29.793 "traddr": "10.0.0.1", 00:17:29.793 "trsvcid": "56106" 00:17:29.793 }, 00:17:29.793 "auth": { 00:17:29.793 "state": "completed", 00:17:29.793 "digest": "sha384", 00:17:29.793 "dhgroup": "null" 00:17:29.793 } 00:17:29.793 } 00:17:29.793 ]' 00:17:29.793 10:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:29.793 10:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:29.793 10:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:29.793 10:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:29.793 10:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:29.793 10:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:29.793 10:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:29.793 10:06:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:30.050 10:06:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:01:Mzk1NTg5ZGUxNWU4MDE3ZTIwYWVkNzhhODI3NGQ0NzMtq9te: --dhchap-ctrl-secret DHHC-1:02:NWM1M2M0OWQzZTQxM2M1ZmU4YWVkNDBmNGExZTQyMWEyYzM1NzNmNmY4ZjU3YjZlY8rP+g==: 00:17:31.424 10:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:31.424 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:31.424 10:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:17:31.424 10:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.424 10:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.424 10:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.424 10:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:31.424 10:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:31.424 10:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:31.424 10:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 2 00:17:31.424 10:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:31.424 10:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:31.424 10:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:31.424 10:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:31.424 10:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:31.424 10:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:31.424 10:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:31.424 10:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.424 10:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:31.424 10:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:31.424 10:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:31.682 00:17:31.939 10:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:31.939 10:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:31.939 10:06:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:32.197 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:32.197 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:32.197 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:32.197 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.197 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:17:32.197 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:32.197 { 00:17:32.197 "cntlid": 53, 00:17:32.197 "qid": 0, 00:17:32.197 "state": "enabled", 00:17:32.197 "thread": "nvmf_tgt_poll_group_000", 00:17:32.197 "listen_address": { 00:17:32.197 "trtype": "TCP", 00:17:32.197 "adrfam": "IPv4", 00:17:32.197 "traddr": "10.0.0.2", 00:17:32.197 "trsvcid": "4420" 00:17:32.197 }, 00:17:32.197 "peer_address": { 00:17:32.197 "trtype": "TCP", 00:17:32.197 "adrfam": "IPv4", 00:17:32.197 "traddr": "10.0.0.1", 00:17:32.197 "trsvcid": "56128" 00:17:32.197 }, 00:17:32.197 "auth": { 00:17:32.197 "state": "completed", 00:17:32.198 "digest": "sha384", 00:17:32.198 "dhgroup": "null" 00:17:32.198 } 00:17:32.198 } 00:17:32.198 ]' 00:17:32.198 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:32.455 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:32.455 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:32.455 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:32.455 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:32.455 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:32.455 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:32.455 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:32.713 10:06:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:02:NTZlZWM5ZmE4MThjZWIzYjUxMTBkZWY0ZGFhOGI2MDRlZWVhN2M1NzM5Y2VmN2Yy9TJSEw==: --dhchap-ctrl-secret DHHC-1:01:Y2RjYzgxMGZhOGFhNzkzOWJhZDc3MmQwOTcwODg1MTG0PvqR: 00:17:34.086 10:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:34.086 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:34.086 10:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:17:34.086 10:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.086 10:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.086 10:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.086 10:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:34.086 10:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:34.086 10:06:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:34.344 10:06:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 3 00:17:34.344 10:06:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:34.344 10:06:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:34.344 10:06:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:34.344 10:06:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:34.344 10:06:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:34.344 10:06:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:17:34.344 10:06:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:34.344 10:06:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.344 10:06:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:34.344 10:06:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:34.344 10:06:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:34.910 00:17:34.910 10:06:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:34.910 10:06:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:34.910 10:06:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:35.168 10:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:35.168 10:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:35.168 10:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:35.168 10:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.168 10:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:35.169 10:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:35.169 { 00:17:35.169 "cntlid": 55, 00:17:35.169 "qid": 0, 00:17:35.169 "state": "enabled", 00:17:35.169 "thread": "nvmf_tgt_poll_group_000", 00:17:35.169 "listen_address": { 00:17:35.169 "trtype": "TCP", 00:17:35.169 "adrfam": "IPv4", 00:17:35.169 "traddr": "10.0.0.2", 00:17:35.169 "trsvcid": "4420" 00:17:35.169 }, 00:17:35.169 "peer_address": { 
00:17:35.169 "trtype": "TCP", 00:17:35.169 "adrfam": "IPv4", 00:17:35.169 "traddr": "10.0.0.1", 00:17:35.169 "trsvcid": "47588" 00:17:35.169 }, 00:17:35.169 "auth": { 00:17:35.169 "state": "completed", 00:17:35.169 "digest": "sha384", 00:17:35.169 "dhgroup": "null" 00:17:35.169 } 00:17:35.169 } 00:17:35.169 ]' 00:17:35.169 10:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:35.169 10:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:35.169 10:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:35.169 10:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:35.169 10:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:35.169 10:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:35.169 10:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:35.169 10:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:35.735 10:06:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:03:ODg3YzlhNDdhNmE2MWExZTAwNjcyZmM2ZTJmMjIwM2FkYzQwNjZhNTI5ODhjMTg1MDhjMWQ4ZGJiMDAxMTFhOZgH7UI=: 00:17:37.108 10:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:37.108 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:37.108 10:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:17:37.108 10:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.108 10:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.108 10:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.108 10:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:37.108 10:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:37.108 10:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:37.108 10:06:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:37.108 10:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 0 00:17:37.108 10:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:37.108 10:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@36 -- # digest=sha384 00:17:37.108 10:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:37.108 10:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:37.108 10:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:37.108 10:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:37.108 10:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.108 10:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.108 10:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.108 10:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:37.108 10:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:37.674 00:17:37.674 10:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:37.674 10:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:37.674 10:06:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:37.932 10:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:37.932 10:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:37.932 10:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:37.932 10:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.932 10:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:37.933 10:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:37.933 { 00:17:37.933 "cntlid": 57, 00:17:37.933 "qid": 0, 00:17:37.933 "state": "enabled", 00:17:37.933 "thread": "nvmf_tgt_poll_group_000", 00:17:37.933 "listen_address": { 00:17:37.933 "trtype": "TCP", 00:17:37.933 "adrfam": "IPv4", 00:17:37.933 "traddr": "10.0.0.2", 00:17:37.933 "trsvcid": "4420" 00:17:37.933 }, 00:17:37.933 "peer_address": { 00:17:37.933 "trtype": "TCP", 00:17:37.933 "adrfam": "IPv4", 00:17:37.933 "traddr": "10.0.0.1", 00:17:37.933 "trsvcid": "47634" 00:17:37.933 }, 00:17:37.933 "auth": { 00:17:37.933 "state": "completed", 00:17:37.933 "digest": "sha384", 00:17:37.933 "dhgroup": "ffdhe2048" 00:17:37.933 } 00:17:37.933 } 00:17:37.933 ]' 
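(For reference: each connect_authenticate iteration in this log reduces to the shell sketch below. Paths, addresses, NQNs and RPC names are copied from this run; key0/ckey0 are assumed to name DH-HMAC-CHAP keys registered earlier in the test, and the assertions mirror the jq checks that follow in the log. This is a sketch of the pattern, not a drop-in replacement for target/auth.sh.)

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02

# Host side: restrict the bdev layer to the digest/dhgroup under test.
$rpc -s /var/tmp/host.sock bdev_nvme_set_options \
    --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048

# Target side (default RPC socket): allow the host with the keypair under test.
$rpc nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0

# Attach a controller with the same keys, forcing DH-HMAC-CHAP on connect.
$rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
    -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0

# Verify the controller came up and the qpair finished authentication
# with the expected parameters.
[[ $($rpc -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
qpairs=$($rpc nvmf_subsystem_get_qpairs "$subnqn")
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha384 ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe2048 ]]
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]

$rpc -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0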
00:17:37.933 10:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:37.933 10:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:37.933 10:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:38.191 10:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:38.191 10:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:38.191 10:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:38.191 10:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:38.191 10:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:38.449 10:06:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:00:MmE3MTQ0ZmMyYmZkNWM2ZTgwM2QwMTVjZGMzZjRlNWRlNTcxYzFkMjgwMzY0YzUw+y0etQ==: --dhchap-ctrl-secret DHHC-1:03:YjQ4YTJiZjQ2NTRiODRlNjc4YjM4MDdlMzhmZGFlOTNhOGE0ZjIxMzE3YmZkZTdhYThmMmFiYmYzYzMxNWVjMysm5Vk=: 00:17:39.383 10:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:39.642 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:39.642 10:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:17:39.642 10:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:39.642 10:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:39.642 10:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:39.642 10:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:39.642 10:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:39.642 10:06:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:40.247 10:06:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1 00:17:40.247 10:06:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:40.247 10:06:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:40.247 10:06:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:40.247 10:06:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:40.247 10:06:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:40.247 10:06:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:40.247 10:06:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.247 10:06:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.247 10:06:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.247 10:06:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:40.247 10:06:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:40.505 00:17:40.505 10:06:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:40.505 10:06:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:40.505 10:06:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:40.763 10:06:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:40.763 10:06:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:40.763 10:06:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:40.763 10:06:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.763 10:06:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:40.763 10:06:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:40.763 { 00:17:40.763 "cntlid": 59, 00:17:40.763 "qid": 0, 00:17:40.763 "state": "enabled", 00:17:40.763 "thread": "nvmf_tgt_poll_group_000", 00:17:40.763 "listen_address": { 00:17:40.763 "trtype": "TCP", 00:17:40.763 "adrfam": "IPv4", 00:17:40.763 "traddr": "10.0.0.2", 00:17:40.763 "trsvcid": "4420" 00:17:40.763 }, 00:17:40.763 "peer_address": { 00:17:40.763 "trtype": "TCP", 00:17:40.763 "adrfam": "IPv4", 00:17:40.763 "traddr": "10.0.0.1", 00:17:40.763 "trsvcid": "47670" 00:17:40.763 }, 00:17:40.763 "auth": { 00:17:40.763 "state": "completed", 00:17:40.763 "digest": "sha384", 00:17:40.763 "dhgroup": "ffdhe2048" 00:17:40.763 } 00:17:40.763 } 00:17:40.763 ]' 00:17:40.763 10:06:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:41.021 10:06:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:41.021 10:06:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:41.021 10:06:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:41.021 10:06:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:41.021 10:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:41.021 10:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:41.021 10:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:41.279 10:06:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:01:Mzk1NTg5ZGUxNWU4MDE3ZTIwYWVkNzhhODI3NGQ0NzMtq9te: --dhchap-ctrl-secret DHHC-1:02:NWM1M2M0OWQzZTQxM2M1ZmU4YWVkNDBmNGExZTQyMWEyYzM1NzNmNmY4ZjU3YjZlY8rP+g==: 00:17:42.213 10:06:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:42.213 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:42.213 10:06:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:17:42.213 10:06:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.213 10:06:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.213 10:06:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.213 10:06:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:42.213 10:06:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:42.213 10:06:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:42.780 10:06:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 2 00:17:42.780 10:06:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:42.780 10:06:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:42.780 10:06:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:42.780 10:06:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:42.780 10:06:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:42.780 10:06:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:42.780 
10:06:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:42.780 10:06:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.780 10:06:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:42.780 10:06:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:42.780 10:06:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:43.037 00:17:43.296 10:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:43.296 10:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:43.296 10:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:43.554 10:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:43.554 10:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:43.554 10:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:43.554 10:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.554 10:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:43.554 10:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:43.554 { 00:17:43.554 "cntlid": 61, 00:17:43.554 "qid": 0, 00:17:43.554 "state": "enabled", 00:17:43.554 "thread": "nvmf_tgt_poll_group_000", 00:17:43.554 "listen_address": { 00:17:43.554 "trtype": "TCP", 00:17:43.554 "adrfam": "IPv4", 00:17:43.554 "traddr": "10.0.0.2", 00:17:43.554 "trsvcid": "4420" 00:17:43.554 }, 00:17:43.554 "peer_address": { 00:17:43.554 "trtype": "TCP", 00:17:43.554 "adrfam": "IPv4", 00:17:43.554 "traddr": "10.0.0.1", 00:17:43.554 "trsvcid": "47696" 00:17:43.554 }, 00:17:43.554 "auth": { 00:17:43.554 "state": "completed", 00:17:43.554 "digest": "sha384", 00:17:43.554 "dhgroup": "ffdhe2048" 00:17:43.554 } 00:17:43.554 } 00:17:43.554 ]' 00:17:43.554 10:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:43.554 10:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:43.554 10:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:43.554 10:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:43.554 10:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:43.554 10:06:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:43.554 10:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:43.554 10:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:43.812 10:06:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:02:NTZlZWM5ZmE4MThjZWIzYjUxMTBkZWY0ZGFhOGI2MDRlZWVhN2M1NzM5Y2VmN2Yy9TJSEw==: --dhchap-ctrl-secret DHHC-1:01:Y2RjYzgxMGZhOGFhNzkzOWJhZDc3MmQwOTcwODg1MTG0PvqR: 00:17:45.187 10:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:45.187 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:45.187 10:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:17:45.187 10:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:45.187 10:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.187 10:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:45.187 10:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:45.187 10:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:45.187 10:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:45.445 10:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 3 00:17:45.445 10:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:45.445 10:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:45.445 10:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:45.445 10:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:45.445 10:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:45.445 10:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:17:45.445 10:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:45.445 10:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.445 10:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:45.445 
10:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:45.445 10:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:46.009 00:17:46.009 10:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:46.009 10:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:46.009 10:06:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:46.009 10:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:46.009 10:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:46.009 10:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:46.009 10:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.266 10:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:46.267 10:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:46.267 { 00:17:46.267 "cntlid": 63, 00:17:46.267 "qid": 0, 00:17:46.267 "state": "enabled", 00:17:46.267 "thread": "nvmf_tgt_poll_group_000", 00:17:46.267 "listen_address": { 00:17:46.267 "trtype": "TCP", 00:17:46.267 "adrfam": "IPv4", 00:17:46.267 "traddr": "10.0.0.2", 00:17:46.267 "trsvcid": "4420" 00:17:46.267 }, 00:17:46.267 "peer_address": { 00:17:46.267 "trtype": "TCP", 00:17:46.267 "adrfam": "IPv4", 00:17:46.267 "traddr": "10.0.0.1", 00:17:46.267 "trsvcid": "38794" 00:17:46.267 }, 00:17:46.267 "auth": { 00:17:46.267 "state": "completed", 00:17:46.267 "digest": "sha384", 00:17:46.267 "dhgroup": "ffdhe2048" 00:17:46.267 } 00:17:46.267 } 00:17:46.267 ]' 00:17:46.267 10:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:46.267 10:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:46.267 10:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:46.267 10:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:46.267 10:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:46.267 10:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:46.267 10:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:46.267 10:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:17:46.524 10:06:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:03:ODg3YzlhNDdhNmE2MWExZTAwNjcyZmM2ZTJmMjIwM2FkYzQwNjZhNTI5ODhjMTg1MDhjMWQ4ZGJiMDAxMTFhOZgH7UI=: 00:17:47.457 10:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:47.457 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:47.457 10:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:17:47.457 10:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.457 10:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.457 10:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.457 10:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:47.457 10:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:47.457 10:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:47.457 10:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:47.715 10:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 0 00:17:47.715 10:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:47.715 10:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:47.715 10:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:47.715 10:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:47.715 10:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:47.715 10:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:47.715 10:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:47.715 10:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.715 10:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:47.715 10:06:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:47.715 10:06:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:48.281 00:17:48.281 10:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:48.281 10:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:48.281 10:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:48.538 10:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:48.538 10:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:48.538 10:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.538 10:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.538 10:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.538 10:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:48.538 { 00:17:48.538 "cntlid": 65, 00:17:48.538 "qid": 0, 00:17:48.538 "state": "enabled", 00:17:48.538 "thread": "nvmf_tgt_poll_group_000", 00:17:48.538 "listen_address": { 00:17:48.538 "trtype": "TCP", 00:17:48.538 "adrfam": "IPv4", 00:17:48.538 "traddr": "10.0.0.2", 00:17:48.538 "trsvcid": "4420" 00:17:48.538 }, 00:17:48.538 "peer_address": { 00:17:48.538 "trtype": "TCP", 00:17:48.538 "adrfam": "IPv4", 00:17:48.538 "traddr": "10.0.0.1", 00:17:48.538 "trsvcid": "38820" 00:17:48.538 }, 00:17:48.538 "auth": { 00:17:48.538 "state": "completed", 00:17:48.538 "digest": "sha384", 00:17:48.538 "dhgroup": "ffdhe3072" 00:17:48.538 } 00:17:48.538 } 00:17:48.538 ]' 00:17:48.538 10:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:48.538 10:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:48.538 10:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:48.538 10:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:48.538 10:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:48.538 10:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:48.538 10:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:48.538 10:06:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:49.103 10:06:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid 
cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:00:MmE3MTQ0ZmMyYmZkNWM2ZTgwM2QwMTVjZGMzZjRlNWRlNTcxYzFkMjgwMzY0YzUw+y0etQ==: --dhchap-ctrl-secret DHHC-1:03:YjQ4YTJiZjQ2NTRiODRlNjc4YjM4MDdlMzhmZGFlOTNhOGE0ZjIxMzE3YmZkZTdhYThmMmFiYmYzYzMxNWVjMysm5Vk=: 00:17:50.037 10:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:50.037 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:50.037 10:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:17:50.037 10:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.037 10:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.037 10:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.037 10:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:50.037 10:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:50.037 10:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:50.602 10:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 1 00:17:50.602 10:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:50.602 10:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:50.602 10:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:50.602 10:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:50.602 10:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:50.602 10:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:50.602 10:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.602 10:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.602 10:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.602 10:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:50.602 10:06:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:50.860 00:17:50.860 10:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:50.860 10:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:50.860 10:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:51.425 10:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:51.425 10:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:51.425 10:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.425 10:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.425 10:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.425 10:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:51.425 { 00:17:51.425 "cntlid": 67, 00:17:51.425 "qid": 0, 00:17:51.425 "state": "enabled", 00:17:51.425 "thread": "nvmf_tgt_poll_group_000", 00:17:51.425 "listen_address": { 00:17:51.425 "trtype": "TCP", 00:17:51.425 "adrfam": "IPv4", 00:17:51.425 "traddr": "10.0.0.2", 00:17:51.425 "trsvcid": "4420" 00:17:51.425 }, 00:17:51.425 "peer_address": { 00:17:51.425 "trtype": "TCP", 00:17:51.425 "adrfam": "IPv4", 00:17:51.425 "traddr": "10.0.0.1", 00:17:51.425 "trsvcid": "38858" 00:17:51.425 }, 00:17:51.425 "auth": { 00:17:51.425 "state": "completed", 00:17:51.425 "digest": "sha384", 00:17:51.425 "dhgroup": "ffdhe3072" 00:17:51.425 } 00:17:51.425 } 00:17:51.425 ]' 00:17:51.425 10:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:51.425 10:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:51.425 10:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:51.425 10:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:51.425 10:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:51.425 10:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:51.425 10:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:51.426 10:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:51.990 10:06:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:01:Mzk1NTg5ZGUxNWU4MDE3ZTIwYWVkNzhhODI3NGQ0NzMtq9te: --dhchap-ctrl-secret DHHC-1:02:NWM1M2M0OWQzZTQxM2M1ZmU4YWVkNDBmNGExZTQyMWEyYzM1NzNmNmY4ZjU3YjZlY8rP+g==: 00:17:52.924 10:06:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:52.924 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:52.924 10:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:17:52.924 10:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.924 10:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.924 10:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.924 10:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:52.924 10:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:52.924 10:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:53.490 10:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 2 00:17:53.490 10:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:53.490 10:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:53.490 10:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:53.490 10:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:53.490 10:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:53.490 10:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:53.490 10:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.490 10:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.490 10:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.490 10:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:53.490 10:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:53.748 00:17:53.748 10:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:53.748 10:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@44 -- # jq -r '.[].name' 00:17:53.748 10:06:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:54.042 10:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:54.042 10:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:54.042 10:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.042 10:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.042 10:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.042 10:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:54.042 { 00:17:54.042 "cntlid": 69, 00:17:54.042 "qid": 0, 00:17:54.042 "state": "enabled", 00:17:54.042 "thread": "nvmf_tgt_poll_group_000", 00:17:54.042 "listen_address": { 00:17:54.042 "trtype": "TCP", 00:17:54.042 "adrfam": "IPv4", 00:17:54.042 "traddr": "10.0.0.2", 00:17:54.042 "trsvcid": "4420" 00:17:54.042 }, 00:17:54.042 "peer_address": { 00:17:54.042 "trtype": "TCP", 00:17:54.042 "adrfam": "IPv4", 00:17:54.042 "traddr": "10.0.0.1", 00:17:54.042 "trsvcid": "38888" 00:17:54.042 }, 00:17:54.042 "auth": { 00:17:54.042 "state": "completed", 00:17:54.042 "digest": "sha384", 00:17:54.042 "dhgroup": "ffdhe3072" 00:17:54.042 } 00:17:54.042 } 00:17:54.042 ]' 00:17:54.042 10:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:54.042 10:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:54.042 10:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:54.042 10:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:54.042 10:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:54.301 10:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:54.301 10:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:54.301 10:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:54.559 10:06:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:02:NTZlZWM5ZmE4MThjZWIzYjUxMTBkZWY0ZGFhOGI2MDRlZWVhN2M1NzM5Y2VmN2Yy9TJSEw==: --dhchap-ctrl-secret DHHC-1:01:Y2RjYzgxMGZhOGFhNzkzOWJhZDc3MmQwOTcwODg1MTG0PvqR: 00:17:55.492 10:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:55.750 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:55.750 10:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:17:55.750 10:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.750 10:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.750 10:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.750 10:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:55.750 10:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:55.750 10:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:56.007 10:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 3 00:17:56.007 10:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:56.007 10:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:56.007 10:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:56.007 10:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:56.007 10:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:56.007 10:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:17:56.007 10:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.007 10:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.007 10:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.007 10:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:56.007 10:06:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:56.264 00:17:56.264 10:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:56.264 10:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:56.264 10:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:56.829 10:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:56.829 10:06:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:56.829 10:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.829 10:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.829 10:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.829 10:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:56.829 { 00:17:56.829 "cntlid": 71, 00:17:56.829 "qid": 0, 00:17:56.829 "state": "enabled", 00:17:56.829 "thread": "nvmf_tgt_poll_group_000", 00:17:56.829 "listen_address": { 00:17:56.829 "trtype": "TCP", 00:17:56.829 "adrfam": "IPv4", 00:17:56.829 "traddr": "10.0.0.2", 00:17:56.829 "trsvcid": "4420" 00:17:56.829 }, 00:17:56.829 "peer_address": { 00:17:56.829 "trtype": "TCP", 00:17:56.829 "adrfam": "IPv4", 00:17:56.829 "traddr": "10.0.0.1", 00:17:56.829 "trsvcid": "33774" 00:17:56.829 }, 00:17:56.829 "auth": { 00:17:56.829 "state": "completed", 00:17:56.829 "digest": "sha384", 00:17:56.829 "dhgroup": "ffdhe3072" 00:17:56.829 } 00:17:56.829 } 00:17:56.829 ]' 00:17:56.829 10:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:56.829 10:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:56.829 10:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:56.829 10:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:56.829 10:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:56.829 10:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:56.829 10:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:56.829 10:06:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:57.086 10:06:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:03:ODg3YzlhNDdhNmE2MWExZTAwNjcyZmM2ZTJmMjIwM2FkYzQwNjZhNTI5ODhjMTg1MDhjMWQ4ZGJiMDAxMTFhOZgH7UI=: 00:17:58.455 10:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:58.455 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:58.455 10:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:17:58.455 10:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.455 10:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.455 10:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.455 10:06:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:58.455 10:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:58.455 10:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:58.455 10:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:58.455 10:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 0 00:17:58.455 10:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:58.455 10:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:58.455 10:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:58.455 10:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:58.455 10:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:58.455 10:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:58.455 10:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.455 10:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.455 10:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.455 10:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:58.455 10:06:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:59.018 00:17:59.018 10:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:59.018 10:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:59.018 10:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:59.583 10:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:59.583 10:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:59.583 10:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.583 10:06:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.583 10:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.583 10:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:59.583 { 00:17:59.583 "cntlid": 73, 00:17:59.583 "qid": 0, 00:17:59.583 "state": "enabled", 00:17:59.583 "thread": "nvmf_tgt_poll_group_000", 00:17:59.583 "listen_address": { 00:17:59.583 "trtype": "TCP", 00:17:59.583 "adrfam": "IPv4", 00:17:59.583 "traddr": "10.0.0.2", 00:17:59.583 "trsvcid": "4420" 00:17:59.583 }, 00:17:59.583 "peer_address": { 00:17:59.583 "trtype": "TCP", 00:17:59.583 "adrfam": "IPv4", 00:17:59.583 "traddr": "10.0.0.1", 00:17:59.583 "trsvcid": "33806" 00:17:59.583 }, 00:17:59.583 "auth": { 00:17:59.583 "state": "completed", 00:17:59.583 "digest": "sha384", 00:17:59.583 "dhgroup": "ffdhe4096" 00:17:59.583 } 00:17:59.583 } 00:17:59.583 ]' 00:17:59.583 10:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:59.583 10:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:59.583 10:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:59.583 10:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:59.583 10:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:59.583 10:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:59.583 10:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:59.583 10:06:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:59.840 10:06:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:00:MmE3MTQ0ZmMyYmZkNWM2ZTgwM2QwMTVjZGMzZjRlNWRlNTcxYzFkMjgwMzY0YzUw+y0etQ==: --dhchap-ctrl-secret DHHC-1:03:YjQ4YTJiZjQ2NTRiODRlNjc4YjM4MDdlMzhmZGFlOTNhOGE0ZjIxMzE3YmZkZTdhYThmMmFiYmYzYzMxNWVjMysm5Vk=: 00:18:01.211 10:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:01.211 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:01.211 10:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:18:01.211 10:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:01.211 10:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.211 10:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:01.211 10:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:01.211 10:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # 
hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:01.211 10:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:01.468 10:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1 00:18:01.468 10:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:01.468 10:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:01.468 10:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:01.468 10:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:01.468 10:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:01.468 10:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:01.468 10:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:01.468 10:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.468 10:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:01.468 10:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:01.468 10:06:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:02.034 00:18:02.034 10:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:02.034 10:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:02.034 10:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:02.292 10:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:02.292 10:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:02.292 10:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:02.292 10:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.292 10:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:02.292 10:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 
00:18:02.292 { 00:18:02.292 "cntlid": 75, 00:18:02.292 "qid": 0, 00:18:02.292 "state": "enabled", 00:18:02.292 "thread": "nvmf_tgt_poll_group_000", 00:18:02.292 "listen_address": { 00:18:02.292 "trtype": "TCP", 00:18:02.292 "adrfam": "IPv4", 00:18:02.292 "traddr": "10.0.0.2", 00:18:02.292 "trsvcid": "4420" 00:18:02.292 }, 00:18:02.292 "peer_address": { 00:18:02.292 "trtype": "TCP", 00:18:02.292 "adrfam": "IPv4", 00:18:02.292 "traddr": "10.0.0.1", 00:18:02.292 "trsvcid": "33816" 00:18:02.292 }, 00:18:02.292 "auth": { 00:18:02.292 "state": "completed", 00:18:02.292 "digest": "sha384", 00:18:02.292 "dhgroup": "ffdhe4096" 00:18:02.292 } 00:18:02.292 } 00:18:02.292 ]' 00:18:02.292 10:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:02.292 10:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:02.292 10:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:02.292 10:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:02.292 10:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:02.292 10:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:02.292 10:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:02.292 10:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:02.856 10:06:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:01:Mzk1NTg5ZGUxNWU4MDE3ZTIwYWVkNzhhODI3NGQ0NzMtq9te: --dhchap-ctrl-secret DHHC-1:02:NWM1M2M0OWQzZTQxM2M1ZmU4YWVkNDBmNGExZTQyMWEyYzM1NzNmNmY4ZjU3YjZlY8rP+g==: 00:18:03.789 10:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:03.789 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:03.789 10:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:18:03.789 10:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.789 10:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.789 10:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.789 10:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:03.789 10:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:03.789 10:06:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:04.047 
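At this point one pass of the test's inner loop has completed: narrow the host-side DH-CHAP options, register the host NQN on the subsystem with the key pair under test, attach a controller through the authenticated path, inspect the resulting qpair, then tear down. A minimal standalone sketch of that iteration, reconstructed from this trace (the rpc.py path, the /var/tmp/host.sock socket, NQNs and key names are as logged; running it as a fixture-free script, and the target app listening on rpc.py's default socket, are assumptions):

  #!/usr/bin/env bash
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  hostsock=/var/tmp/host.sock                  # host-side SPDK app (hostrpc)
  subnqn=nqn.2024-03.io.spdk:cnode0
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02

  # Restrict the initiator to exactly one digest/dhgroup pair.
  "$rpc" -s "$hostsock" bdev_nvme_set_options \
      --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096

  # Allow the host on the subsystem (target app, assumed on the default socket).
  "$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
      --dhchap-key key2 --dhchap-ctrlr-key ckey2

  # Attach, then confirm the qpair negotiated what was requested.
  "$rpc" -s "$hostsock" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
      -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" \
      --dhchap-key key2 --dhchap-ctrlr-key ckey2
  "$rpc" nvmf_subsystem_get_qpairs "$subnqn" \
      | jq -r '.[0].auth | "\(.state) \(.digest) \(.dhgroup)"'
  # expected: completed sha384 ffdhe4096

  # Tear down before the next key id.
  "$rpc" -s "$hostsock" bdev_nvme_detach_controller nvme0
  "$rpc" nvmf_subsystem_remove_host "$subnqn" "$hostnqn"

The jq assertions in the trace ([[ sha384 == ... ]], [[ ffdhe3072 == ... ]], [[ completed == ... ]]) are exactly this check split into three fields.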
10:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 2 00:18:04.047 10:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:04.047 10:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:04.047 10:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:04.047 10:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:04.047 10:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:04.047 10:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:04.047 10:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.047 10:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.047 10:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.047 10:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:04.047 10:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:04.613 00:18:04.613 10:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:04.613 10:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:04.613 10:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:04.869 10:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:04.870 10:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:04.870 10:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.870 10:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.870 10:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.870 10:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:04.870 { 00:18:04.870 "cntlid": 77, 00:18:04.870 "qid": 0, 00:18:04.870 "state": "enabled", 00:18:04.870 "thread": "nvmf_tgt_poll_group_000", 00:18:04.870 "listen_address": { 00:18:04.870 "trtype": "TCP", 00:18:04.870 "adrfam": "IPv4", 00:18:04.870 "traddr": "10.0.0.2", 00:18:04.870 "trsvcid": "4420" 00:18:04.870 }, 00:18:04.870 "peer_address": { 
00:18:04.870 "trtype": "TCP", 00:18:04.870 "adrfam": "IPv4", 00:18:04.870 "traddr": "10.0.0.1", 00:18:04.870 "trsvcid": "41278" 00:18:04.870 }, 00:18:04.870 "auth": { 00:18:04.870 "state": "completed", 00:18:04.870 "digest": "sha384", 00:18:04.870 "dhgroup": "ffdhe4096" 00:18:04.870 } 00:18:04.870 } 00:18:04.870 ]' 00:18:04.870 10:06:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:04.870 10:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:04.870 10:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:05.127 10:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:05.127 10:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:05.127 10:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:05.127 10:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:05.127 10:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:05.691 10:06:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:02:NTZlZWM5ZmE4MThjZWIzYjUxMTBkZWY0ZGFhOGI2MDRlZWVhN2M1NzM5Y2VmN2Yy9TJSEw==: --dhchap-ctrl-secret DHHC-1:01:Y2RjYzgxMGZhOGFhNzkzOWJhZDc3MmQwOTcwODg1MTG0PvqR: 00:18:06.626 10:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:06.626 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:06.626 10:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:18:06.626 10:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.626 10:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.626 10:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.626 10:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:06.626 10:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:06.626 10:06:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:18:07.190 10:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 3 00:18:07.190 10:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:07.190 10:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 
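The nvme connect just logged is the second half of each iteration: after the RPC-driven attach/detach, the same credentials are exercised through the kernel initiator. In isolation (address, NQNs and host id as logged; the DHHC-1 strings are placeholders for the full values shown verbatim above):

  # Host secret plus controller secret makes the exchange bidirectional:
  # the host authenticates to the target and also challenges it back.
  nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
      -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 \
      --hostid cd6acfbe-4794-e311-a299-001e67a97b02 \
      --dhchap-secret 'DHHC-1:02:<host secret as logged>' \
      --dhchap-ctrl-secret 'DHHC-1:01:<controller secret as logged>'
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0

The key3 passes in this log omit --dhchap-ctrl-secret, covering the unidirectional case in which only the host is authenticated.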
00:18:07.190 10:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:07.190 10:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:07.190 10:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:07.190 10:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:18:07.190 10:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:07.190 10:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.190 10:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:07.190 10:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:07.190 10:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:07.753 00:18:07.753 10:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:07.753 10:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:07.753 10:06:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:08.010 10:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:08.010 10:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:08.010 10:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:08.010 10:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.010 10:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:08.010 10:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:08.010 { 00:18:08.010 "cntlid": 79, 00:18:08.010 "qid": 0, 00:18:08.010 "state": "enabled", 00:18:08.010 "thread": "nvmf_tgt_poll_group_000", 00:18:08.010 "listen_address": { 00:18:08.010 "trtype": "TCP", 00:18:08.010 "adrfam": "IPv4", 00:18:08.010 "traddr": "10.0.0.2", 00:18:08.010 "trsvcid": "4420" 00:18:08.010 }, 00:18:08.010 "peer_address": { 00:18:08.010 "trtype": "TCP", 00:18:08.010 "adrfam": "IPv4", 00:18:08.010 "traddr": "10.0.0.1", 00:18:08.010 "trsvcid": "41306" 00:18:08.010 }, 00:18:08.010 "auth": { 00:18:08.010 "state": "completed", 00:18:08.010 "digest": "sha384", 00:18:08.010 "dhgroup": "ffdhe4096" 00:18:08.010 } 00:18:08.010 } 00:18:08.010 ]' 00:18:08.010 10:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r 
'.[0].auth.digest' 00:18:08.010 10:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:08.010 10:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:08.010 10:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:08.010 10:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:08.268 10:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:08.268 10:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:08.268 10:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:08.831 10:06:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:03:ODg3YzlhNDdhNmE2MWExZTAwNjcyZmM2ZTJmMjIwM2FkYzQwNjZhNTI5ODhjMTg1MDhjMWQ4ZGJiMDAxMTFhOZgH7UI=: 00:18:10.201 10:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:10.201 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:10.201 10:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:18:10.201 10:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.201 10:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:10.201 10:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.201 10:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:10.201 10:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:10.201 10:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:10.201 10:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:10.459 10:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 0 00:18:10.459 10:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:10.459 10:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:10.459 10:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:10.459 10:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:10.459 10:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 
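Note the ckey=( ... ) assignment that closes the trace above: the test relies on bash's ${parameter:+word} alternate-value expansion, so the --dhchap-ctrlr-key flag pair materializes only when a controller key exists for that key id, letting one loop body cover both the unidirectional and bidirectional passes. A self-contained illustration of the idiom (array contents invented for the demo):

  #!/usr/bin/env bash
  ckeys=("ckey-zero" "")                 # key id 1 has no controller key
  for i in 0 1; do
      args=(${ckeys[$i]:+--dhchap-ctrlr-key "ckey$i"})
      echo "keyid=$i -> ${#args[@]} extra arg(s): ${args[*]}"
  done
  # keyid=0 -> 2 extra arg(s): --dhchap-ctrlr-key ckey0
  # keyid=1 -> 0 extra arg(s):

Because the colon form (:+) treats an empty element the same as an unset one, an empty slot in the ckeys array silently drops the flags rather than passing an empty key name.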
00:18:10.459 10:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:10.459 10:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.459 10:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:10.459 10:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.459 10:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:10.459 10:06:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:11.391 00:18:11.391 10:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:11.391 10:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:11.391 10:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:11.648 10:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:11.648 10:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:11.648 10:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:11.648 10:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.648 10:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:11.648 10:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:11.648 { 00:18:11.648 "cntlid": 81, 00:18:11.648 "qid": 0, 00:18:11.648 "state": "enabled", 00:18:11.648 "thread": "nvmf_tgt_poll_group_000", 00:18:11.648 "listen_address": { 00:18:11.648 "trtype": "TCP", 00:18:11.648 "adrfam": "IPv4", 00:18:11.648 "traddr": "10.0.0.2", 00:18:11.648 "trsvcid": "4420" 00:18:11.648 }, 00:18:11.648 "peer_address": { 00:18:11.648 "trtype": "TCP", 00:18:11.648 "adrfam": "IPv4", 00:18:11.648 "traddr": "10.0.0.1", 00:18:11.648 "trsvcid": "41334" 00:18:11.648 }, 00:18:11.648 "auth": { 00:18:11.648 "state": "completed", 00:18:11.648 "digest": "sha384", 00:18:11.648 "dhgroup": "ffdhe6144" 00:18:11.648 } 00:18:11.648 } 00:18:11.648 ]' 00:18:11.648 10:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:11.648 10:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:11.648 10:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:11.648 10:06:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:11.648 10:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:11.648 10:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:11.648 10:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:11.648 10:06:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:12.580 10:06:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:00:MmE3MTQ0ZmMyYmZkNWM2ZTgwM2QwMTVjZGMzZjRlNWRlNTcxYzFkMjgwMzY0YzUw+y0etQ==: --dhchap-ctrl-secret DHHC-1:03:YjQ4YTJiZjQ2NTRiODRlNjc4YjM4MDdlMzhmZGFlOTNhOGE0ZjIxMzE3YmZkZTdhYThmMmFiYmYzYzMxNWVjMysm5Vk=: 00:18:13.510 10:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:13.510 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:13.510 10:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:18:13.510 10:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.510 10:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.510 10:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.510 10:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:13.510 10:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:13.510 10:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:13.768 10:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 1 00:18:13.768 10:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:13.768 10:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:13.768 10:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:13.768 10:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:13.768 10:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:13.768 10:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:13.768 10:06:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.768 10:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.768 10:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.768 10:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:13.768 10:06:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:14.700 00:18:14.700 10:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:14.700 10:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:14.700 10:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:14.958 10:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:14.958 10:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:14.958 10:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.958 10:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:14.958 10:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.958 10:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:14.958 { 00:18:14.958 "cntlid": 83, 00:18:14.958 "qid": 0, 00:18:14.958 "state": "enabled", 00:18:14.958 "thread": "nvmf_tgt_poll_group_000", 00:18:14.958 "listen_address": { 00:18:14.958 "trtype": "TCP", 00:18:14.958 "adrfam": "IPv4", 00:18:14.958 "traddr": "10.0.0.2", 00:18:14.958 "trsvcid": "4420" 00:18:14.958 }, 00:18:14.958 "peer_address": { 00:18:14.958 "trtype": "TCP", 00:18:14.958 "adrfam": "IPv4", 00:18:14.958 "traddr": "10.0.0.1", 00:18:14.958 "trsvcid": "53492" 00:18:14.958 }, 00:18:14.958 "auth": { 00:18:14.958 "state": "completed", 00:18:14.958 "digest": "sha384", 00:18:14.958 "dhgroup": "ffdhe6144" 00:18:14.958 } 00:18:14.958 } 00:18:14.958 ]' 00:18:14.958 10:06:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:14.958 10:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:14.958 10:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:14.958 10:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:14.958 10:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:15.214 10:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:15.214 10:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:15.214 10:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:15.779 10:07:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:01:Mzk1NTg5ZGUxNWU4MDE3ZTIwYWVkNzhhODI3NGQ0NzMtq9te: --dhchap-ctrl-secret DHHC-1:02:NWM1M2M0OWQzZTQxM2M1ZmU4YWVkNDBmNGExZTQyMWEyYzM1NzNmNmY4ZjU3YjZlY8rP+g==: 00:18:16.710 10:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:16.710 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:16.710 10:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:18:16.710 10:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:16.710 10:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.710 10:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:16.710 10:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:16.710 10:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:16.710 10:07:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:17.273 10:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 2 00:18:17.273 10:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:17.273 10:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:17.273 10:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:17.273 10:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:17.273 10:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:17.273 10:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:17.273 10:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:17.273 10:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.273 10:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:17.273 10:07:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:17.273 10:07:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:18.206 00:18:18.206 10:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:18.206 10:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:18.206 10:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:18.474 10:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:18.474 10:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:18.474 10:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.474 10:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.474 10:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.737 10:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:18.737 { 00:18:18.737 "cntlid": 85, 00:18:18.737 "qid": 0, 00:18:18.737 "state": "enabled", 00:18:18.737 "thread": "nvmf_tgt_poll_group_000", 00:18:18.737 "listen_address": { 00:18:18.737 "trtype": "TCP", 00:18:18.737 "adrfam": "IPv4", 00:18:18.737 "traddr": "10.0.0.2", 00:18:18.737 "trsvcid": "4420" 00:18:18.737 }, 00:18:18.737 "peer_address": { 00:18:18.737 "trtype": "TCP", 00:18:18.737 "adrfam": "IPv4", 00:18:18.737 "traddr": "10.0.0.1", 00:18:18.737 "trsvcid": "53520" 00:18:18.737 }, 00:18:18.737 "auth": { 00:18:18.737 "state": "completed", 00:18:18.737 "digest": "sha384", 00:18:18.737 "dhgroup": "ffdhe6144" 00:18:18.737 } 00:18:18.737 } 00:18:18.737 ]' 00:18:18.737 10:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:18.737 10:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:18.737 10:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:18.737 10:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:18.737 10:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:18.737 10:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:18.737 10:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:18.737 10:07:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:18.994 10:07:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:02:NTZlZWM5ZmE4MThjZWIzYjUxMTBkZWY0ZGFhOGI2MDRlZWVhN2M1NzM5Y2VmN2Yy9TJSEw==: --dhchap-ctrl-secret DHHC-1:01:Y2RjYzgxMGZhOGFhNzkzOWJhZDc3MmQwOTcwODg1MTG0PvqR: 00:18:20.366 10:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:20.366 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:20.366 10:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:18:20.366 10:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.366 10:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.367 10:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.367 10:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:20.367 10:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:20.367 10:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:18:20.624 10:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 3 00:18:20.624 10:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:20.624 10:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:20.624 10:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:20.624 10:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:20.624 10:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:20.624 10:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:18:20.624 10:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.624 10:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.624 10:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.624 10:07:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:20.624 10:07:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:21.189 00:18:21.189 10:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:21.189 10:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:21.189 10:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:21.754 10:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:21.754 10:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:21.754 10:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.754 10:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.754 10:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.754 10:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:21.754 { 00:18:21.754 "cntlid": 87, 00:18:21.754 "qid": 0, 00:18:21.754 "state": "enabled", 00:18:21.754 "thread": "nvmf_tgt_poll_group_000", 00:18:21.754 "listen_address": { 00:18:21.754 "trtype": "TCP", 00:18:21.754 "adrfam": "IPv4", 00:18:21.754 "traddr": "10.0.0.2", 00:18:21.754 "trsvcid": "4420" 00:18:21.754 }, 00:18:21.754 "peer_address": { 00:18:21.754 "trtype": "TCP", 00:18:21.754 "adrfam": "IPv4", 00:18:21.754 "traddr": "10.0.0.1", 00:18:21.754 "trsvcid": "53550" 00:18:21.754 }, 00:18:21.754 "auth": { 00:18:21.754 "state": "completed", 00:18:21.754 "digest": "sha384", 00:18:21.754 "dhgroup": "ffdhe6144" 00:18:21.754 } 00:18:21.754 } 00:18:21.754 ]' 00:18:21.754 10:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:21.754 10:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:21.754 10:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:21.754 10:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:21.754 10:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:21.754 10:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:21.754 10:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:21.754 10:07:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:22.319 10:07:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 
--dhchap-secret DHHC-1:03:ODg3YzlhNDdhNmE2MWExZTAwNjcyZmM2ZTJmMjIwM2FkYzQwNjZhNTI5ODhjMTg1MDhjMWQ4ZGJiMDAxMTFhOZgH7UI=: 00:18:23.719 10:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:23.719 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:23.719 10:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:18:23.719 10:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:23.719 10:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.719 10:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:23.719 10:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:23.719 10:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:23.719 10:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:23.720 10:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:23.977 10:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 0 00:18:23.977 10:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:23.977 10:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:23.977 10:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:23.977 10:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:23.977 10:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:23.977 10:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:23.977 10:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:23.977 10:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.977 10:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:23.977 10:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:23.977 10:07:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:24.908 00:18:24.908 10:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:24.908 10:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:24.908 10:07:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:25.473 10:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:25.473 10:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:25.473 10:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:25.473 10:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:25.473 10:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:25.473 10:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:25.473 { 00:18:25.473 "cntlid": 89, 00:18:25.473 "qid": 0, 00:18:25.473 "state": "enabled", 00:18:25.473 "thread": "nvmf_tgt_poll_group_000", 00:18:25.473 "listen_address": { 00:18:25.473 "trtype": "TCP", 00:18:25.473 "adrfam": "IPv4", 00:18:25.473 "traddr": "10.0.0.2", 00:18:25.473 "trsvcid": "4420" 00:18:25.473 }, 00:18:25.473 "peer_address": { 00:18:25.473 "trtype": "TCP", 00:18:25.473 "adrfam": "IPv4", 00:18:25.473 "traddr": "10.0.0.1", 00:18:25.473 "trsvcid": "45832" 00:18:25.473 }, 00:18:25.473 "auth": { 00:18:25.473 "state": "completed", 00:18:25.473 "digest": "sha384", 00:18:25.473 "dhgroup": "ffdhe8192" 00:18:25.473 } 00:18:25.473 } 00:18:25.473 ]' 00:18:25.473 10:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:25.473 10:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:25.473 10:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:25.731 10:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:25.731 10:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:25.731 10:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:25.731 10:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:25.731 10:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:25.988 10:07:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:00:MmE3MTQ0ZmMyYmZkNWM2ZTgwM2QwMTVjZGMzZjRlNWRlNTcxYzFkMjgwMzY0YzUw+y0etQ==: --dhchap-ctrl-secret DHHC-1:03:YjQ4YTJiZjQ2NTRiODRlNjc4YjM4MDdlMzhmZGFlOTNhOGE0ZjIxMzE3YmZkZTdhYThmMmFiYmYzYzMxNWVjMysm5Vk=: 00:18:27.360 10:07:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:27.360 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:27.360 10:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:18:27.360 10:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:27.360 10:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:27.360 10:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:27.360 10:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:27.360 10:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:27.360 10:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:27.618 10:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 1 00:18:27.618 10:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:27.618 10:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:27.618 10:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:27.618 10:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:27.618 10:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:27.618 10:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:27.618 10:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:27.618 10:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:27.618 10:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:27.618 10:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:27.618 10:07:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:28.989 00:18:28.990 10:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:28.990 10:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 
-- # hostrpc bdev_nvme_get_controllers 00:18:28.990 10:07:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:28.990 10:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:28.990 10:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:28.990 10:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.990 10:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.990 10:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:29.247 10:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:29.247 { 00:18:29.247 "cntlid": 91, 00:18:29.247 "qid": 0, 00:18:29.247 "state": "enabled", 00:18:29.247 "thread": "nvmf_tgt_poll_group_000", 00:18:29.247 "listen_address": { 00:18:29.247 "trtype": "TCP", 00:18:29.247 "adrfam": "IPv4", 00:18:29.247 "traddr": "10.0.0.2", 00:18:29.247 "trsvcid": "4420" 00:18:29.247 }, 00:18:29.247 "peer_address": { 00:18:29.247 "trtype": "TCP", 00:18:29.247 "adrfam": "IPv4", 00:18:29.247 "traddr": "10.0.0.1", 00:18:29.247 "trsvcid": "45856" 00:18:29.247 }, 00:18:29.247 "auth": { 00:18:29.247 "state": "completed", 00:18:29.247 "digest": "sha384", 00:18:29.247 "dhgroup": "ffdhe8192" 00:18:29.247 } 00:18:29.247 } 00:18:29.247 ]' 00:18:29.247 10:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:29.247 10:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:29.247 10:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:29.247 10:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:29.247 10:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:29.247 10:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:29.247 10:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:29.247 10:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:29.812 10:07:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:01:Mzk1NTg5ZGUxNWU4MDE3ZTIwYWVkNzhhODI3NGQ0NzMtq9te: --dhchap-ctrl-secret DHHC-1:02:NWM1M2M0OWQzZTQxM2M1ZmU4YWVkNDBmNGExZTQyMWEyYzM1NzNmNmY4ZjU3YjZlY8rP+g==: 00:18:31.184 10:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:31.184 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:31.184 10:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:18:31.184 10:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:31.184 10:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.184 10:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:31.184 10:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:31.184 10:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:31.184 10:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:31.442 10:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 2 00:18:31.442 10:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:31.442 10:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:31.442 10:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:31.442 10:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:31.442 10:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:31.442 10:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:31.442 10:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:31.442 10:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.442 10:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:31.442 10:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:31.442 10:07:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:32.813 00:18:32.813 10:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:32.813 10:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:32.813 10:07:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:33.070 10:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ 
nvme0 == \n\v\m\e\0 ]] 00:18:33.070 10:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:33.070 10:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:33.070 10:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.070 10:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:33.070 10:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:33.070 { 00:18:33.070 "cntlid": 93, 00:18:33.070 "qid": 0, 00:18:33.070 "state": "enabled", 00:18:33.070 "thread": "nvmf_tgt_poll_group_000", 00:18:33.070 "listen_address": { 00:18:33.070 "trtype": "TCP", 00:18:33.070 "adrfam": "IPv4", 00:18:33.070 "traddr": "10.0.0.2", 00:18:33.070 "trsvcid": "4420" 00:18:33.070 }, 00:18:33.070 "peer_address": { 00:18:33.070 "trtype": "TCP", 00:18:33.070 "adrfam": "IPv4", 00:18:33.070 "traddr": "10.0.0.1", 00:18:33.070 "trsvcid": "45880" 00:18:33.070 }, 00:18:33.070 "auth": { 00:18:33.070 "state": "completed", 00:18:33.070 "digest": "sha384", 00:18:33.070 "dhgroup": "ffdhe8192" 00:18:33.070 } 00:18:33.070 } 00:18:33.070 ]' 00:18:33.070 10:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:33.328 10:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:33.328 10:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:33.328 10:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:33.328 10:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:33.328 10:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:33.328 10:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:33.328 10:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:33.891 10:07:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:02:NTZlZWM5ZmE4MThjZWIzYjUxMTBkZWY0ZGFhOGI2MDRlZWVhN2M1NzM5Y2VmN2Yy9TJSEw==: --dhchap-ctrl-secret DHHC-1:01:Y2RjYzgxMGZhOGFhNzkzOWJhZDc3MmQwOTcwODg1MTG0PvqR: 00:18:35.264 10:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:35.264 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:35.264 10:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:18:35.264 10:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:35.264 10:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:35.264 10:07:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:35.264 10:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:35.264 10:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:35.264 10:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:18:35.522 10:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 3 00:18:35.522 10:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:35.522 10:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:18:35.522 10:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:35.522 10:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:35.522 10:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:35.522 10:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:18:35.522 10:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:35.522 10:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:35.522 10:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:35.522 10:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:35.522 10:07:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:36.895 00:18:36.895 10:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:36.895 10:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:36.895 10:07:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:37.465 10:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:37.465 10:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:37.465 10:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:37.465 10:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # 
set +x 00:18:37.465 10:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:37.465 10:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:37.465 { 00:18:37.465 "cntlid": 95, 00:18:37.465 "qid": 0, 00:18:37.465 "state": "enabled", 00:18:37.465 "thread": "nvmf_tgt_poll_group_000", 00:18:37.465 "listen_address": { 00:18:37.465 "trtype": "TCP", 00:18:37.465 "adrfam": "IPv4", 00:18:37.465 "traddr": "10.0.0.2", 00:18:37.465 "trsvcid": "4420" 00:18:37.465 }, 00:18:37.465 "peer_address": { 00:18:37.465 "trtype": "TCP", 00:18:37.465 "adrfam": "IPv4", 00:18:37.465 "traddr": "10.0.0.1", 00:18:37.465 "trsvcid": "42766" 00:18:37.465 }, 00:18:37.465 "auth": { 00:18:37.465 "state": "completed", 00:18:37.465 "digest": "sha384", 00:18:37.465 "dhgroup": "ffdhe8192" 00:18:37.465 } 00:18:37.465 } 00:18:37.465 ]' 00:18:37.465 10:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:37.465 10:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:37.465 10:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:37.465 10:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:37.465 10:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:37.465 10:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:37.465 10:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:37.465 10:07:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:38.066 10:07:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:03:ODg3YzlhNDdhNmE2MWExZTAwNjcyZmM2ZTJmMjIwM2FkYzQwNjZhNTI5ODhjMTg1MDhjMWQ4ZGJiMDAxMTFhOZgH7UI=: 00:18:38.999 10:07:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:38.999 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:38.999 10:07:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:18:38.999 10:07:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:38.999 10:07:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:38.999 10:07:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:38.999 10:07:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:18:38.999 10:07:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:38.999 10:07:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:38.999 10:07:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:38.999 10:07:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:39.563 10:07:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 0 00:18:39.563 10:07:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:39.563 10:07:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:39.563 10:07:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:39.563 10:07:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:39.563 10:07:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:39.563 10:07:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:39.563 10:07:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:39.563 10:07:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.563 10:07:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:39.563 10:07:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:39.564 10:07:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:40.127 00:18:40.127 10:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:40.127 10:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:40.127 10:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:40.385 10:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:40.385 10:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:40.385 10:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:40.385 10:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.385 10:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:40.385 10:07:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:40.385 { 00:18:40.385 "cntlid": 97, 00:18:40.385 "qid": 0, 00:18:40.385 "state": "enabled", 00:18:40.385 "thread": "nvmf_tgt_poll_group_000", 00:18:40.385 "listen_address": { 00:18:40.385 "trtype": "TCP", 00:18:40.385 "adrfam": "IPv4", 00:18:40.385 "traddr": "10.0.0.2", 00:18:40.385 "trsvcid": "4420" 00:18:40.385 }, 00:18:40.385 "peer_address": { 00:18:40.385 "trtype": "TCP", 00:18:40.385 "adrfam": "IPv4", 00:18:40.385 "traddr": "10.0.0.1", 00:18:40.385 "trsvcid": "42804" 00:18:40.385 }, 00:18:40.385 "auth": { 00:18:40.385 "state": "completed", 00:18:40.385 "digest": "sha512", 00:18:40.385 "dhgroup": "null" 00:18:40.385 } 00:18:40.385 } 00:18:40.385 ]' 00:18:40.385 10:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:40.385 10:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:40.385 10:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:40.385 10:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:40.385 10:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:40.385 10:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:40.385 10:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:40.385 10:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:40.950 10:07:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:00:MmE3MTQ0ZmMyYmZkNWM2ZTgwM2QwMTVjZGMzZjRlNWRlNTcxYzFkMjgwMzY0YzUw+y0etQ==: --dhchap-ctrl-secret DHHC-1:03:YjQ4YTJiZjQ2NTRiODRlNjc4YjM4MDdlMzhmZGFlOTNhOGE0ZjIxMzE3YmZkZTdhYThmMmFiYmYzYzMxNWVjMysm5Vk=: 00:18:41.883 10:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:41.883 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:41.883 10:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:18:41.883 10:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.883 10:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:41.883 10:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.883 10:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:41.883 10:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:41.883 10:07:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:42.142 10:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 1 00:18:42.142 10:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:42.142 10:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:42.142 10:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:42.142 10:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:42.142 10:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:42.142 10:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:42.142 10:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:42.142 10:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.142 10:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:42.142 10:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:42.142 10:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:43.075 00:18:43.075 10:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:43.075 10:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:43.075 10:07:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:43.333 10:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:43.333 10:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:43.333 10:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:43.333 10:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:43.333 10:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:43.333 10:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:43.333 { 00:18:43.333 "cntlid": 99, 00:18:43.333 "qid": 0, 00:18:43.333 "state": "enabled", 00:18:43.333 "thread": "nvmf_tgt_poll_group_000", 00:18:43.333 "listen_address": { 00:18:43.333 "trtype": "TCP", 00:18:43.333 "adrfam": "IPv4", 00:18:43.333 
"traddr": "10.0.0.2", 00:18:43.333 "trsvcid": "4420" 00:18:43.333 }, 00:18:43.333 "peer_address": { 00:18:43.333 "trtype": "TCP", 00:18:43.333 "adrfam": "IPv4", 00:18:43.333 "traddr": "10.0.0.1", 00:18:43.333 "trsvcid": "42842" 00:18:43.333 }, 00:18:43.333 "auth": { 00:18:43.333 "state": "completed", 00:18:43.333 "digest": "sha512", 00:18:43.333 "dhgroup": "null" 00:18:43.333 } 00:18:43.333 } 00:18:43.333 ]' 00:18:43.333 10:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:43.333 10:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:43.333 10:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:43.333 10:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:43.333 10:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:43.333 10:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:43.333 10:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:43.333 10:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:43.590 10:07:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:01:Mzk1NTg5ZGUxNWU4MDE3ZTIwYWVkNzhhODI3NGQ0NzMtq9te: --dhchap-ctrl-secret DHHC-1:02:NWM1M2M0OWQzZTQxM2M1ZmU4YWVkNDBmNGExZTQyMWEyYzM1NzNmNmY4ZjU3YjZlY8rP+g==: 00:18:44.964 10:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:44.964 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:44.964 10:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:18:44.964 10:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:44.964 10:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:44.964 10:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:44.964 10:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:44.964 10:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:44.964 10:07:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:45.222 10:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 2 00:18:45.222 10:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:45.222 10:07:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:45.222 10:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:45.222 10:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:45.222 10:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:45.222 10:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:45.222 10:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.222 10:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:45.222 10:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.222 10:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:45.222 10:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:45.787 00:18:45.787 10:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:45.787 10:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:45.787 10:07:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:46.353 10:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:46.353 10:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:46.353 10:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:46.353 10:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:46.353 10:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:46.353 10:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:46.353 { 00:18:46.353 "cntlid": 101, 00:18:46.353 "qid": 0, 00:18:46.353 "state": "enabled", 00:18:46.353 "thread": "nvmf_tgt_poll_group_000", 00:18:46.353 "listen_address": { 00:18:46.353 "trtype": "TCP", 00:18:46.353 "adrfam": "IPv4", 00:18:46.353 "traddr": "10.0.0.2", 00:18:46.353 "trsvcid": "4420" 00:18:46.353 }, 00:18:46.353 "peer_address": { 00:18:46.353 "trtype": "TCP", 00:18:46.353 "adrfam": "IPv4", 00:18:46.353 "traddr": "10.0.0.1", 00:18:46.353 "trsvcid": "33930" 00:18:46.353 }, 00:18:46.353 "auth": { 00:18:46.353 "state": "completed", 00:18:46.353 "digest": "sha512", 00:18:46.353 "dhgroup": "null" 
00:18:46.353 } 00:18:46.353 } 00:18:46.353 ]' 00:18:46.353 10:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:46.353 10:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:46.353 10:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:46.353 10:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:46.353 10:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:46.610 10:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:46.610 10:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:46.610 10:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:46.867 10:07:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:02:NTZlZWM5ZmE4MThjZWIzYjUxMTBkZWY0ZGFhOGI2MDRlZWVhN2M1NzM5Y2VmN2Yy9TJSEw==: --dhchap-ctrl-secret DHHC-1:01:Y2RjYzgxMGZhOGFhNzkzOWJhZDc3MmQwOTcwODg1MTG0PvqR: 00:18:48.240 10:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:48.240 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:48.240 10:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:18:48.240 10:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:48.240 10:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:48.240 10:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:48.240 10:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:48.240 10:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:48.240 10:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:18:48.498 10:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 3 00:18:48.498 10:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:48.498 10:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:48.498 10:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:48.498 10:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:48.498 10:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:48.498 10:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:18:48.498 10:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:48.498 10:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:48.498 10:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:48.498 10:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:48.498 10:07:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:49.063 00:18:49.063 10:07:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:49.063 10:07:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:49.063 10:07:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:49.628 10:07:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:49.628 10:07:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:49.628 10:07:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:49.628 10:07:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:49.628 10:07:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:49.628 10:07:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:49.628 { 00:18:49.628 "cntlid": 103, 00:18:49.628 "qid": 0, 00:18:49.628 "state": "enabled", 00:18:49.628 "thread": "nvmf_tgt_poll_group_000", 00:18:49.628 "listen_address": { 00:18:49.628 "trtype": "TCP", 00:18:49.628 "adrfam": "IPv4", 00:18:49.628 "traddr": "10.0.0.2", 00:18:49.628 "trsvcid": "4420" 00:18:49.628 }, 00:18:49.628 "peer_address": { 00:18:49.628 "trtype": "TCP", 00:18:49.628 "adrfam": "IPv4", 00:18:49.628 "traddr": "10.0.0.1", 00:18:49.628 "trsvcid": "33972" 00:18:49.628 }, 00:18:49.628 "auth": { 00:18:49.628 "state": "completed", 00:18:49.628 "digest": "sha512", 00:18:49.628 "dhgroup": "null" 00:18:49.628 } 00:18:49.628 } 00:18:49.628 ]' 00:18:49.628 10:07:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:49.628 10:07:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:49.628 10:07:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:49.628 10:07:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:49.628 10:07:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:49.628 10:07:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:49.628 10:07:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:49.628 10:07:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:50.193 10:07:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:03:ODg3YzlhNDdhNmE2MWExZTAwNjcyZmM2ZTJmMjIwM2FkYzQwNjZhNTI5ODhjMTg1MDhjMWQ4ZGJiMDAxMTFhOZgH7UI=: 00:18:51.565 10:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:51.565 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:51.565 10:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:18:51.565 10:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:51.565 10:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:51.565 10:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:51.565 10:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:51.565 10:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:51.565 10:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:51.565 10:07:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:52.132 10:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 0 00:18:52.132 10:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:52.132 10:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:52.132 10:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:52.132 10:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:52.132 10:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:52.132 10:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:52.132 10:07:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:52.132 10:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.132 10:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:52.132 10:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:52.132 10:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:52.728 00:18:52.728 10:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:52.728 10:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:52.728 10:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:52.986 10:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:52.986 10:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:52.986 10:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:52.986 10:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.986 10:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:52.986 10:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:52.986 { 00:18:52.986 "cntlid": 105, 00:18:52.986 "qid": 0, 00:18:52.986 "state": "enabled", 00:18:52.986 "thread": "nvmf_tgt_poll_group_000", 00:18:52.986 "listen_address": { 00:18:52.986 "trtype": "TCP", 00:18:52.986 "adrfam": "IPv4", 00:18:52.986 "traddr": "10.0.0.2", 00:18:52.986 "trsvcid": "4420" 00:18:52.986 }, 00:18:52.986 "peer_address": { 00:18:52.986 "trtype": "TCP", 00:18:52.986 "adrfam": "IPv4", 00:18:52.986 "traddr": "10.0.0.1", 00:18:52.986 "trsvcid": "33988" 00:18:52.986 }, 00:18:52.986 "auth": { 00:18:52.986 "state": "completed", 00:18:52.986 "digest": "sha512", 00:18:52.986 "dhgroup": "ffdhe2048" 00:18:52.986 } 00:18:52.986 } 00:18:52.986 ]' 00:18:52.986 10:07:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:52.986 10:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:52.986 10:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:52.986 10:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:52.986 10:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:52.986 10:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:52.986 10:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:52.986 10:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:53.551 10:07:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:00:MmE3MTQ0ZmMyYmZkNWM2ZTgwM2QwMTVjZGMzZjRlNWRlNTcxYzFkMjgwMzY0YzUw+y0etQ==: --dhchap-ctrl-secret DHHC-1:03:YjQ4YTJiZjQ2NTRiODRlNjc4YjM4MDdlMzhmZGFlOTNhOGE0ZjIxMzE3YmZkZTdhYThmMmFiYmYzYzMxNWVjMysm5Vk=: 00:18:54.484 10:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:54.484 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:54.484 10:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:18:54.484 10:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:54.484 10:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:54.484 10:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:54.484 10:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:54.484 10:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:54.484 10:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:55.049 10:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 1 00:18:55.049 10:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:55.049 10:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:55.049 10:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:55.049 10:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:55.049 10:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:55.049 10:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:55.049 10:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:55.049 10:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:55.049 10:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 
0 == 0 ]] 00:18:55.049 10:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:55.049 10:07:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:55.307 00:18:55.307 10:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:55.307 10:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:55.307 10:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:55.872 10:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:55.872 10:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:55.872 10:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:55.872 10:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:55.872 10:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:55.872 10:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:55.872 { 00:18:55.872 "cntlid": 107, 00:18:55.872 "qid": 0, 00:18:55.872 "state": "enabled", 00:18:55.872 "thread": "nvmf_tgt_poll_group_000", 00:18:55.872 "listen_address": { 00:18:55.872 "trtype": "TCP", 00:18:55.873 "adrfam": "IPv4", 00:18:55.873 "traddr": "10.0.0.2", 00:18:55.873 "trsvcid": "4420" 00:18:55.873 }, 00:18:55.873 "peer_address": { 00:18:55.873 "trtype": "TCP", 00:18:55.873 "adrfam": "IPv4", 00:18:55.873 "traddr": "10.0.0.1", 00:18:55.873 "trsvcid": "45276" 00:18:55.873 }, 00:18:55.873 "auth": { 00:18:55.873 "state": "completed", 00:18:55.873 "digest": "sha512", 00:18:55.873 "dhgroup": "ffdhe2048" 00:18:55.873 } 00:18:55.873 } 00:18:55.873 ]' 00:18:55.873 10:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:55.873 10:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:55.873 10:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:55.873 10:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:55.873 10:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:55.873 10:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:55.873 10:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:55.873 10:07:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:56.130 10:07:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:01:Mzk1NTg5ZGUxNWU4MDE3ZTIwYWVkNzhhODI3NGQ0NzMtq9te: --dhchap-ctrl-secret DHHC-1:02:NWM1M2M0OWQzZTQxM2M1ZmU4YWVkNDBmNGExZTQyMWEyYzM1NzNmNmY4ZjU3YjZlY8rP+g==: 00:18:57.502 10:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:57.502 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:57.502 10:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:18:57.502 10:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:57.502 10:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:57.502 10:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:57.502 10:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:57.502 10:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:57.502 10:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:18:57.760 10:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 2 00:18:57.760 10:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:57.760 10:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:57.760 10:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:57.760 10:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:57.760 10:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:57.760 10:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:57.760 10:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:57.760 10:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:57.760 10:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:57.760 10:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
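Each connect_authenticate iteration in this run repeats the same RPC sequence with a different (digest, dhgroup, key) combination. A minimal standalone sketch of one such iteration, using only the commands visible in this log; it assumes the target-side RPCs go to the SPDK default socket (the log only shows the host-side socket -s /var/tmp/host.sock explicitly), and the DHHC-1 secrets are placeholders for the generated keys in this run:

#!/usr/bin/env bash
# Sketch of one connect_authenticate cycle (sha512 / ffdhe2048 / key2),
# mirroring the sequence exercised above. Placeholder values are marked.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02

# Host side: restrict the initiator to the digest/dhgroup under test.
$rpc -s /var/tmp/host.sock bdev_nvme_set_options \
    --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048

# Target side: allow the host with the DH-CHAP key pair under test.
$rpc nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
    --dhchap-key key2 --dhchap-ctrlr-key ckey2

# Attach through the SPDK host stack, then verify what was negotiated.
$rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp \
    -f ipv4 -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" \
    --dhchap-key key2 --dhchap-ctrlr-key ckey2
qpairs=$($rpc nvmf_subsystem_get_qpairs "$subnqn")
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512 ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe2048 ]]
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]
$rpc -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0

# Repeat with the kernel initiator; nvme-cli takes the secrets inline
# rather than by key name (<host-secret>/<ctrl-secret> are placeholders).
nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 -q "$hostnqn" \
    --hostid cd6acfbe-4794-e311-a299-001e67a97b02 \
    --dhchap-secret "DHHC-1:01:<host-secret>:" \
    --dhchap-ctrl-secret "DHHC-1:02:<ctrl-secret>:"
nvme disconnect -n "$subnqn"

# Remove the host so the next combination starts from a clean subsystem.
$rpc nvmf_subsystem_remove_host "$subnqn" "$hostnqn"

The auth state check is the heart of the test: a qpair only reports "state": "completed" with the expected digest and dhgroup if DH-CHAP negotiation actually succeeded with those parameters, so a silent fallback would fail the [[ ]] comparisons rather than pass unnoticed.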
00:18:57.760 10:07:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:58.327 00:18:58.327 10:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:58.327 10:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:58.327 10:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:58.891 10:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:58.891 10:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:58.891 10:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:58.891 10:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:58.891 10:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:58.891 10:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:58.891 { 00:18:58.891 "cntlid": 109, 00:18:58.891 "qid": 0, 00:18:58.891 "state": "enabled", 00:18:58.891 "thread": "nvmf_tgt_poll_group_000", 00:18:58.891 "listen_address": { 00:18:58.891 "trtype": "TCP", 00:18:58.891 "adrfam": "IPv4", 00:18:58.891 "traddr": "10.0.0.2", 00:18:58.891 "trsvcid": "4420" 00:18:58.891 }, 00:18:58.891 "peer_address": { 00:18:58.891 "trtype": "TCP", 00:18:58.891 "adrfam": "IPv4", 00:18:58.891 "traddr": "10.0.0.1", 00:18:58.891 "trsvcid": "45306" 00:18:58.891 }, 00:18:58.891 "auth": { 00:18:58.891 "state": "completed", 00:18:58.891 "digest": "sha512", 00:18:58.891 "dhgroup": "ffdhe2048" 00:18:58.891 } 00:18:58.891 } 00:18:58.891 ]' 00:18:58.891 10:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:58.891 10:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:58.891 10:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:58.891 10:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:58.891 10:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:58.891 10:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:58.891 10:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:58.891 10:07:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:59.456 10:07:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 
--hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:02:NTZlZWM5ZmE4MThjZWIzYjUxMTBkZWY0ZGFhOGI2MDRlZWVhN2M1NzM5Y2VmN2Yy9TJSEw==: --dhchap-ctrl-secret DHHC-1:01:Y2RjYzgxMGZhOGFhNzkzOWJhZDc3MmQwOTcwODg1MTG0PvqR: 00:19:00.424 10:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:00.424 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:00.424 10:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:19:00.424 10:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.425 10:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:00.425 10:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.425 10:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:00.425 10:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:00.425 10:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:00.683 10:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 3 00:19:00.683 10:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:00.683 10:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:00.683 10:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:00.683 10:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:00.683 10:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:00.683 10:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:19:00.683 10:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:00.683 10:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:00.683 10:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:00.683 10:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:00.683 10:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:00.941 00:19:00.941 10:07:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:00.941 10:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:00.941 10:07:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:01.199 10:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:01.199 10:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:01.199 10:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:01.199 10:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:01.199 10:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:01.199 10:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:01.199 { 00:19:01.199 "cntlid": 111, 00:19:01.199 "qid": 0, 00:19:01.199 "state": "enabled", 00:19:01.199 "thread": "nvmf_tgt_poll_group_000", 00:19:01.199 "listen_address": { 00:19:01.199 "trtype": "TCP", 00:19:01.199 "adrfam": "IPv4", 00:19:01.199 "traddr": "10.0.0.2", 00:19:01.199 "trsvcid": "4420" 00:19:01.199 }, 00:19:01.199 "peer_address": { 00:19:01.199 "trtype": "TCP", 00:19:01.199 "adrfam": "IPv4", 00:19:01.199 "traddr": "10.0.0.1", 00:19:01.199 "trsvcid": "45342" 00:19:01.199 }, 00:19:01.199 "auth": { 00:19:01.199 "state": "completed", 00:19:01.199 "digest": "sha512", 00:19:01.199 "dhgroup": "ffdhe2048" 00:19:01.199 } 00:19:01.199 } 00:19:01.199 ]' 00:19:01.199 10:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:01.199 10:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:01.199 10:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:01.456 10:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:01.456 10:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:01.456 10:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:01.456 10:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:01.456 10:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:01.714 10:07:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:03:ODg3YzlhNDdhNmE2MWExZTAwNjcyZmM2ZTJmMjIwM2FkYzQwNjZhNTI5ODhjMTg1MDhjMWQ4ZGJiMDAxMTFhOZgH7UI=: 00:19:03.085 10:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:03.085 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:03.085 10:07:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:19:03.085 10:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:03.085 10:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:03.085 10:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:03.085 10:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:03.085 10:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:03.085 10:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:03.085 10:07:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:03.650 10:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 0 00:19:03.650 10:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:03.650 10:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:03.650 10:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:03.650 10:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:03.650 10:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:03.650 10:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:03.650 10:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:03.650 10:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:03.650 10:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:03.650 10:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:03.650 10:07:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:03.907 00:19:03.907 10:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:03.907 10:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:03.907 10:07:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:04.473 10:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:04.473 10:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:04.473 10:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:04.473 10:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:04.473 10:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:04.473 10:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:04.473 { 00:19:04.473 "cntlid": 113, 00:19:04.473 "qid": 0, 00:19:04.473 "state": "enabled", 00:19:04.473 "thread": "nvmf_tgt_poll_group_000", 00:19:04.473 "listen_address": { 00:19:04.473 "trtype": "TCP", 00:19:04.473 "adrfam": "IPv4", 00:19:04.473 "traddr": "10.0.0.2", 00:19:04.473 "trsvcid": "4420" 00:19:04.473 }, 00:19:04.473 "peer_address": { 00:19:04.473 "trtype": "TCP", 00:19:04.473 "adrfam": "IPv4", 00:19:04.473 "traddr": "10.0.0.1", 00:19:04.473 "trsvcid": "45360" 00:19:04.473 }, 00:19:04.473 "auth": { 00:19:04.473 "state": "completed", 00:19:04.473 "digest": "sha512", 00:19:04.473 "dhgroup": "ffdhe3072" 00:19:04.473 } 00:19:04.473 } 00:19:04.473 ]' 00:19:04.473 10:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:04.473 10:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:04.473 10:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:04.473 10:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:04.473 10:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:04.473 10:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:04.473 10:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:04.473 10:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:04.731 10:07:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:00:MmE3MTQ0ZmMyYmZkNWM2ZTgwM2QwMTVjZGMzZjRlNWRlNTcxYzFkMjgwMzY0YzUw+y0etQ==: --dhchap-ctrl-secret DHHC-1:03:YjQ4YTJiZjQ2NTRiODRlNjc4YjM4MDdlMzhmZGFlOTNhOGE0ZjIxMzE3YmZkZTdhYThmMmFiYmYzYzMxNWVjMysm5Vk=: 00:19:05.664 10:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:05.664 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:05.664 10:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:19:05.664 10:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:05.664 10:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:05.664 10:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:05.664 10:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:05.664 10:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:05.664 10:07:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:06.229 10:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 1 00:19:06.229 10:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:06.229 10:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:06.229 10:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:06.229 10:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:06.229 10:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:06.229 10:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:06.229 10:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:06.229 10:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.229 10:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:06.229 10:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:06.229 10:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:06.823 00:19:06.823 10:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:06.823 10:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:06.823 10:07:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:07.081 10:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ 
nvme0 == \n\v\m\e\0 ]] 00:19:07.081 10:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:07.081 10:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:07.081 10:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:07.081 10:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:07.081 10:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:07.081 { 00:19:07.081 "cntlid": 115, 00:19:07.081 "qid": 0, 00:19:07.081 "state": "enabled", 00:19:07.081 "thread": "nvmf_tgt_poll_group_000", 00:19:07.081 "listen_address": { 00:19:07.081 "trtype": "TCP", 00:19:07.081 "adrfam": "IPv4", 00:19:07.081 "traddr": "10.0.0.2", 00:19:07.081 "trsvcid": "4420" 00:19:07.081 }, 00:19:07.081 "peer_address": { 00:19:07.081 "trtype": "TCP", 00:19:07.081 "adrfam": "IPv4", 00:19:07.081 "traddr": "10.0.0.1", 00:19:07.081 "trsvcid": "53106" 00:19:07.081 }, 00:19:07.081 "auth": { 00:19:07.081 "state": "completed", 00:19:07.081 "digest": "sha512", 00:19:07.081 "dhgroup": "ffdhe3072" 00:19:07.081 } 00:19:07.081 } 00:19:07.081 ]' 00:19:07.081 10:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:07.081 10:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:07.081 10:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:07.081 10:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:07.081 10:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:07.081 10:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:07.081 10:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:07.081 10:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:07.647 10:07:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:01:Mzk1NTg5ZGUxNWU4MDE3ZTIwYWVkNzhhODI3NGQ0NzMtq9te: --dhchap-ctrl-secret DHHC-1:02:NWM1M2M0OWQzZTQxM2M1ZmU4YWVkNDBmNGExZTQyMWEyYzM1NzNmNmY4ZjU3YjZlY8rP+g==: 00:19:08.579 10:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:08.579 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:08.579 10:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:19:08.579 10:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:08.579 10:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:08.579 10:07:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:08.579 10:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:08.579 10:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:08.579 10:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:08.837 10:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2 00:19:08.837 10:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:08.837 10:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:08.837 10:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:08.837 10:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:08.837 10:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:08.837 10:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:08.837 10:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:08.837 10:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:08.837 10:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:08.837 10:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:08.837 10:07:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:09.402 00:19:09.402 10:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:09.402 10:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:09.402 10:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:09.659 10:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:09.659 10:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:09.659 10:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:09.659 10:07:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:09.660 10:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:09.660 10:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:09.660 { 00:19:09.660 "cntlid": 117, 00:19:09.660 "qid": 0, 00:19:09.660 "state": "enabled", 00:19:09.660 "thread": "nvmf_tgt_poll_group_000", 00:19:09.660 "listen_address": { 00:19:09.660 "trtype": "TCP", 00:19:09.660 "adrfam": "IPv4", 00:19:09.660 "traddr": "10.0.0.2", 00:19:09.660 "trsvcid": "4420" 00:19:09.660 }, 00:19:09.660 "peer_address": { 00:19:09.660 "trtype": "TCP", 00:19:09.660 "adrfam": "IPv4", 00:19:09.660 "traddr": "10.0.0.1", 00:19:09.660 "trsvcid": "53144" 00:19:09.660 }, 00:19:09.660 "auth": { 00:19:09.660 "state": "completed", 00:19:09.660 "digest": "sha512", 00:19:09.660 "dhgroup": "ffdhe3072" 00:19:09.660 } 00:19:09.660 } 00:19:09.660 ]' 00:19:09.660 10:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:09.660 10:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:09.660 10:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:09.917 10:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:09.917 10:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:09.917 10:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:09.917 10:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:09.917 10:07:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:10.483 10:07:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:02:NTZlZWM5ZmE4MThjZWIzYjUxMTBkZWY0ZGFhOGI2MDRlZWVhN2M1NzM5Y2VmN2Yy9TJSEw==: --dhchap-ctrl-secret DHHC-1:01:Y2RjYzgxMGZhOGFhNzkzOWJhZDc3MmQwOTcwODg1MTG0PvqR: 00:19:11.451 10:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:11.451 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:11.451 10:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:19:11.451 10:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:11.451 10:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:11.451 10:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:11.451 10:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:11.451 10:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests 
sha512 --dhchap-dhgroups ffdhe3072 00:19:11.451 10:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:12.016 10:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3 00:19:12.016 10:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:12.016 10:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:12.016 10:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:12.016 10:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:12.016 10:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:12.016 10:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:19:12.017 10:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:12.017 10:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:12.017 10:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:12.017 10:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:12.017 10:07:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:12.582 00:19:12.582 10:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:12.582 10:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:12.582 10:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:12.839 10:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:12.839 10:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:12.839 10:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:12.839 10:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:12.840 10:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:12.840 10:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:12.840 { 00:19:12.840 "cntlid": 119, 00:19:12.840 "qid": 0, 00:19:12.840 "state": "enabled", 00:19:12.840 "thread": 
"nvmf_tgt_poll_group_000", 00:19:12.840 "listen_address": { 00:19:12.840 "trtype": "TCP", 00:19:12.840 "adrfam": "IPv4", 00:19:12.840 "traddr": "10.0.0.2", 00:19:12.840 "trsvcid": "4420" 00:19:12.840 }, 00:19:12.840 "peer_address": { 00:19:12.840 "trtype": "TCP", 00:19:12.840 "adrfam": "IPv4", 00:19:12.840 "traddr": "10.0.0.1", 00:19:12.840 "trsvcid": "53164" 00:19:12.840 }, 00:19:12.840 "auth": { 00:19:12.840 "state": "completed", 00:19:12.840 "digest": "sha512", 00:19:12.840 "dhgroup": "ffdhe3072" 00:19:12.840 } 00:19:12.840 } 00:19:12.840 ]' 00:19:12.840 10:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:12.840 10:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:12.840 10:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:12.840 10:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:12.840 10:07:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:13.097 10:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:13.097 10:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:13.097 10:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:13.355 10:07:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:03:ODg3YzlhNDdhNmE2MWExZTAwNjcyZmM2ZTJmMjIwM2FkYzQwNjZhNTI5ODhjMTg1MDhjMWQ4ZGJiMDAxMTFhOZgH7UI=: 00:19:14.728 10:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:14.728 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:14.728 10:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:19:14.728 10:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:14.728 10:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:14.728 10:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:14.728 10:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:14.728 10:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:14.728 10:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:14.728 10:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:14.728 10:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 0 00:19:14.728 10:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:14.728 10:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:14.728 10:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:14.728 10:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:14.728 10:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:14.728 10:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:14.728 10:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:14.728 10:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:14.728 10:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:14.728 10:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:14.728 10:07:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:15.294 00:19:15.294 10:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:15.294 10:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:15.294 10:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:15.551 10:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:15.551 10:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:15.551 10:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:15.551 10:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:15.551 10:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:15.551 10:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:15.551 { 00:19:15.551 "cntlid": 121, 00:19:15.551 "qid": 0, 00:19:15.551 "state": "enabled", 00:19:15.551 "thread": "nvmf_tgt_poll_group_000", 00:19:15.551 "listen_address": { 00:19:15.551 "trtype": "TCP", 00:19:15.551 "adrfam": "IPv4", 00:19:15.551 "traddr": "10.0.0.2", 00:19:15.551 "trsvcid": "4420" 00:19:15.551 }, 00:19:15.551 "peer_address": { 00:19:15.551 "trtype": "TCP", 00:19:15.551 "adrfam": 
"IPv4", 00:19:15.551 "traddr": "10.0.0.1", 00:19:15.551 "trsvcid": "44090" 00:19:15.551 }, 00:19:15.551 "auth": { 00:19:15.551 "state": "completed", 00:19:15.551 "digest": "sha512", 00:19:15.551 "dhgroup": "ffdhe4096" 00:19:15.551 } 00:19:15.551 } 00:19:15.551 ]' 00:19:15.551 10:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:15.809 10:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:15.809 10:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:15.809 10:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:15.809 10:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:15.809 10:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:15.809 10:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:15.809 10:08:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:16.067 10:08:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:00:MmE3MTQ0ZmMyYmZkNWM2ZTgwM2QwMTVjZGMzZjRlNWRlNTcxYzFkMjgwMzY0YzUw+y0etQ==: --dhchap-ctrl-secret DHHC-1:03:YjQ4YTJiZjQ2NTRiODRlNjc4YjM4MDdlMzhmZGFlOTNhOGE0ZjIxMzE3YmZkZTdhYThmMmFiYmYzYzMxNWVjMysm5Vk=: 00:19:17.001 10:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:17.001 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:17.001 10:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:19:17.001 10:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:17.001 10:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:17.001 10:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:17.001 10:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:17.001 10:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:17.001 10:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:17.258 10:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 1 00:19:17.258 10:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:17.258 10:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:17.258 
10:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:17.258 10:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:17.258 10:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:17.258 10:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:17.258 10:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:17.258 10:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:17.515 10:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:17.515 10:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:17.515 10:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:17.783 00:19:18.045 10:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:18.045 10:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:18.045 10:08:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:18.303 10:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:18.303 10:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:18.303 10:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:18.303 10:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:18.303 10:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:18.303 10:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:18.303 { 00:19:18.303 "cntlid": 123, 00:19:18.303 "qid": 0, 00:19:18.303 "state": "enabled", 00:19:18.303 "thread": "nvmf_tgt_poll_group_000", 00:19:18.303 "listen_address": { 00:19:18.303 "trtype": "TCP", 00:19:18.303 "adrfam": "IPv4", 00:19:18.303 "traddr": "10.0.0.2", 00:19:18.303 "trsvcid": "4420" 00:19:18.303 }, 00:19:18.303 "peer_address": { 00:19:18.303 "trtype": "TCP", 00:19:18.303 "adrfam": "IPv4", 00:19:18.303 "traddr": "10.0.0.1", 00:19:18.303 "trsvcid": "44124" 00:19:18.303 }, 00:19:18.303 "auth": { 00:19:18.303 "state": "completed", 00:19:18.303 "digest": "sha512", 00:19:18.303 "dhgroup": "ffdhe4096" 00:19:18.303 } 00:19:18.303 } 00:19:18.303 ]' 00:19:18.303 10:08:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:18.303 10:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:18.303 10:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:18.303 10:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:18.303 10:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:18.303 10:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:18.303 10:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:18.303 10:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:18.866 10:08:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:01:Mzk1NTg5ZGUxNWU4MDE3ZTIwYWVkNzhhODI3NGQ0NzMtq9te: --dhchap-ctrl-secret DHHC-1:02:NWM1M2M0OWQzZTQxM2M1ZmU4YWVkNDBmNGExZTQyMWEyYzM1NzNmNmY4ZjU3YjZlY8rP+g==: 00:19:20.237 10:08:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:20.237 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:20.237 10:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:19:20.237 10:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:20.237 10:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.237 10:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:20.237 10:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:20.237 10:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:20.237 10:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:20.237 10:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 2 00:19:20.237 10:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:20.237 10:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:20.237 10:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:20.237 10:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:20.237 10:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key 
"ckey$3"}) 00:19:20.237 10:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:20.237 10:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:20.237 10:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.237 10:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:20.237 10:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:20.237 10:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:20.802 00:19:20.802 10:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:20.802 10:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:20.802 10:08:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:21.059 10:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:21.059 10:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:21.059 10:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:21.059 10:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:21.059 10:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:21.059 10:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:21.059 { 00:19:21.059 "cntlid": 125, 00:19:21.059 "qid": 0, 00:19:21.059 "state": "enabled", 00:19:21.059 "thread": "nvmf_tgt_poll_group_000", 00:19:21.059 "listen_address": { 00:19:21.059 "trtype": "TCP", 00:19:21.059 "adrfam": "IPv4", 00:19:21.059 "traddr": "10.0.0.2", 00:19:21.059 "trsvcid": "4420" 00:19:21.059 }, 00:19:21.059 "peer_address": { 00:19:21.059 "trtype": "TCP", 00:19:21.059 "adrfam": "IPv4", 00:19:21.059 "traddr": "10.0.0.1", 00:19:21.059 "trsvcid": "44142" 00:19:21.059 }, 00:19:21.059 "auth": { 00:19:21.059 "state": "completed", 00:19:21.059 "digest": "sha512", 00:19:21.059 "dhgroup": "ffdhe4096" 00:19:21.059 } 00:19:21.059 } 00:19:21.059 ]' 00:19:21.059 10:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:21.316 10:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:21.316 10:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:21.316 
10:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:21.316 10:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:21.316 10:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:21.316 10:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:21.316 10:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:21.573 10:08:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:02:NTZlZWM5ZmE4MThjZWIzYjUxMTBkZWY0ZGFhOGI2MDRlZWVhN2M1NzM5Y2VmN2Yy9TJSEw==: --dhchap-ctrl-secret DHHC-1:01:Y2RjYzgxMGZhOGFhNzkzOWJhZDc3MmQwOTcwODg1MTG0PvqR: 00:19:22.943 10:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:22.943 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:22.943 10:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:19:22.943 10:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:22.943 10:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:22.943 10:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:22.943 10:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:22.943 10:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:22.943 10:08:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:22.943 10:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 3 00:19:22.943 10:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:22.943 10:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:22.943 10:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:22.943 10:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:22.943 10:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:22.943 10:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:19:22.943 10:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:19:22.943 10:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:22.943 10:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:22.943 10:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:22.943 10:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:23.508 00:19:23.508 10:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:23.508 10:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:23.508 10:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:23.765 10:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:23.765 10:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:23.765 10:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:23.765 10:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:23.765 10:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:23.765 10:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:23.765 { 00:19:23.765 "cntlid": 127, 00:19:23.765 "qid": 0, 00:19:23.765 "state": "enabled", 00:19:23.765 "thread": "nvmf_tgt_poll_group_000", 00:19:23.765 "listen_address": { 00:19:23.765 "trtype": "TCP", 00:19:23.765 "adrfam": "IPv4", 00:19:23.765 "traddr": "10.0.0.2", 00:19:23.765 "trsvcid": "4420" 00:19:23.765 }, 00:19:23.765 "peer_address": { 00:19:23.765 "trtype": "TCP", 00:19:23.765 "adrfam": "IPv4", 00:19:23.765 "traddr": "10.0.0.1", 00:19:23.765 "trsvcid": "44168" 00:19:23.765 }, 00:19:23.765 "auth": { 00:19:23.765 "state": "completed", 00:19:23.765 "digest": "sha512", 00:19:23.765 "dhgroup": "ffdhe4096" 00:19:23.765 } 00:19:23.765 } 00:19:23.765 ]' 00:19:23.765 10:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:24.023 10:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:24.023 10:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:24.023 10:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:24.023 10:08:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:24.023 10:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:24.023 10:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:24.023 10:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:24.281 10:08:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:03:ODg3YzlhNDdhNmE2MWExZTAwNjcyZmM2ZTJmMjIwM2FkYzQwNjZhNTI5ODhjMTg1MDhjMWQ4ZGJiMDAxMTFhOZgH7UI=: 00:19:25.216 10:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:25.216 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:25.216 10:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:19:25.216 10:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:25.216 10:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:25.216 10:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:25.216 10:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:25.216 10:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:25.216 10:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:25.216 10:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:25.473 10:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 0 00:19:25.473 10:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:25.473 10:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:25.473 10:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:25.473 10:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:25.473 10:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:25.473 10:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:25.473 10:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:25.473 10:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:25.473 10:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:25.473 10:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:25.473 10:08:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:26.406 00:19:26.406 10:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:26.406 10:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:26.406 10:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:26.972 10:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:26.972 10:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:26.972 10:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:26.972 10:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:26.972 10:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:26.972 10:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:26.972 { 00:19:26.972 "cntlid": 129, 00:19:26.972 "qid": 0, 00:19:26.972 "state": "enabled", 00:19:26.972 "thread": "nvmf_tgt_poll_group_000", 00:19:26.972 "listen_address": { 00:19:26.972 "trtype": "TCP", 00:19:26.972 "adrfam": "IPv4", 00:19:26.972 "traddr": "10.0.0.2", 00:19:26.972 "trsvcid": "4420" 00:19:26.972 }, 00:19:26.972 "peer_address": { 00:19:26.972 "trtype": "TCP", 00:19:26.972 "adrfam": "IPv4", 00:19:26.972 "traddr": "10.0.0.1", 00:19:26.972 "trsvcid": "50682" 00:19:26.972 }, 00:19:26.972 "auth": { 00:19:26.972 "state": "completed", 00:19:26.972 "digest": "sha512", 00:19:26.972 "dhgroup": "ffdhe6144" 00:19:26.972 } 00:19:26.972 } 00:19:26.972 ]' 00:19:26.972 10:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:26.972 10:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:26.972 10:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:26.972 10:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:26.972 10:08:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:26.972 10:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:26.972 10:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:26.972 10:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:27.537 
10:08:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:00:MmE3MTQ0ZmMyYmZkNWM2ZTgwM2QwMTVjZGMzZjRlNWRlNTcxYzFkMjgwMzY0YzUw+y0etQ==: --dhchap-ctrl-secret DHHC-1:03:YjQ4YTJiZjQ2NTRiODRlNjc4YjM4MDdlMzhmZGFlOTNhOGE0ZjIxMzE3YmZkZTdhYThmMmFiYmYzYzMxNWVjMysm5Vk=: 00:19:28.470 10:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:28.470 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:28.470 10:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:19:28.470 10:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:28.470 10:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:28.728 10:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:28.728 10:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:28.728 10:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:28.728 10:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:28.985 10:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 1 00:19:28.985 10:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:28.985 10:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:28.985 10:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:28.985 10:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:28.985 10:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:28.985 10:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:28.985 10:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:28.985 10:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:28.985 10:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:28.985 10:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:28.985 10:08:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:29.548 00:19:29.548 10:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:29.549 10:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:29.549 10:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:29.814 10:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:29.814 10:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:29.814 10:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:29.814 10:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:29.814 10:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:29.814 10:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:29.814 { 00:19:29.814 "cntlid": 131, 00:19:29.814 "qid": 0, 00:19:29.814 "state": "enabled", 00:19:29.814 "thread": "nvmf_tgt_poll_group_000", 00:19:29.814 "listen_address": { 00:19:29.814 "trtype": "TCP", 00:19:29.814 "adrfam": "IPv4", 00:19:29.814 "traddr": "10.0.0.2", 00:19:29.814 "trsvcid": "4420" 00:19:29.814 }, 00:19:29.814 "peer_address": { 00:19:29.814 "trtype": "TCP", 00:19:29.814 "adrfam": "IPv4", 00:19:29.814 "traddr": "10.0.0.1", 00:19:29.814 "trsvcid": "50698" 00:19:29.814 }, 00:19:29.814 "auth": { 00:19:29.814 "state": "completed", 00:19:29.814 "digest": "sha512", 00:19:29.814 "dhgroup": "ffdhe6144" 00:19:29.814 } 00:19:29.814 } 00:19:29.814 ]' 00:19:29.814 10:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:29.814 10:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:29.814 10:08:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:30.080 10:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:30.080 10:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:30.080 10:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:30.080 10:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:30.080 10:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:30.337 10:08:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret 
DHHC-1:01:Mzk1NTg5ZGUxNWU4MDE3ZTIwYWVkNzhhODI3NGQ0NzMtq9te: --dhchap-ctrl-secret DHHC-1:02:NWM1M2M0OWQzZTQxM2M1ZmU4YWVkNDBmNGExZTQyMWEyYzM1NzNmNmY4ZjU3YjZlY8rP+g==: 00:19:31.271 10:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:31.271 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:31.271 10:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:19:31.271 10:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:31.271 10:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:31.271 10:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:31.271 10:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:31.271 10:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:31.271 10:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:31.837 10:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 2 00:19:31.837 10:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:31.837 10:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:31.837 10:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:31.837 10:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:31.837 10:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:31.837 10:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:31.837 10:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:31.837 10:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:31.837 10:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:31.837 10:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:31.837 10:08:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:32.403 
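[Editor's note: each iteration of the trace above drives the same setup sequence from target/auth.sh. The condensed sketch below is reconstructed only from commands visible in this log, using the key2/ffdhe6144 iteration as the example; the rpc.py path is shortened, the comments are editorial, and it assumes the target app listens on the default RPC socket while the host app uses /var/tmp/host.sock, as the hostrpc wrapper shows.]

    # Host side: restrict negotiation to the digest/dhgroup under test
    rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144

    # Target side: allow the host NQN with the DH-HMAC-CHAP key pair
    rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 \
        --dhchap-key key2 --dhchap-ctrlr-key ckey2

    # Host side: attach the controller; the authentication handshake
    # runs during this connect
    rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 \
        -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 \
        -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2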
00:19:32.403 10:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:32.403 10:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:32.403 10:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:32.970 10:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:32.970 10:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:32.970 10:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:32.970 10:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:32.970 10:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:32.970 10:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:32.970 { 00:19:32.970 "cntlid": 133, 00:19:32.970 "qid": 0, 00:19:32.970 "state": "enabled", 00:19:32.970 "thread": "nvmf_tgt_poll_group_000", 00:19:32.970 "listen_address": { 00:19:32.970 "trtype": "TCP", 00:19:32.970 "adrfam": "IPv4", 00:19:32.970 "traddr": "10.0.0.2", 00:19:32.970 "trsvcid": "4420" 00:19:32.970 }, 00:19:32.970 "peer_address": { 00:19:32.970 "trtype": "TCP", 00:19:32.970 "adrfam": "IPv4", 00:19:32.970 "traddr": "10.0.0.1", 00:19:32.970 "trsvcid": "50722" 00:19:32.970 }, 00:19:32.970 "auth": { 00:19:32.970 "state": "completed", 00:19:32.970 "digest": "sha512", 00:19:32.970 "dhgroup": "ffdhe6144" 00:19:32.970 } 00:19:32.970 } 00:19:32.970 ]' 00:19:32.970 10:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:32.970 10:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:32.970 10:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:32.970 10:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:32.970 10:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:32.970 10:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:32.970 10:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:32.970 10:08:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:33.573 10:08:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:02:NTZlZWM5ZmE4MThjZWIzYjUxMTBkZWY0ZGFhOGI2MDRlZWVhN2M1NzM5Y2VmN2Yy9TJSEw==: --dhchap-ctrl-secret DHHC-1:01:Y2RjYzgxMGZhOGFhNzkzOWJhZDc3MmQwOTcwODg1MTG0PvqR: 00:19:34.514 10:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:34.514 NQN:nqn.2024-03.io.spdk:cnode0 
disconnected 1 controller(s) 00:19:34.514 10:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:19:34.514 10:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:34.514 10:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.514 10:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:34.514 10:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:34.514 10:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:34.514 10:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:34.772 10:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 3 00:19:34.772 10:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:34.772 10:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:34.772 10:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:34.772 10:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:34.772 10:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:34.772 10:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:19:34.772 10:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:34.772 10:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.772 10:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:34.772 10:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:34.772 10:08:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:35.336 00:19:35.336 10:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:35.336 10:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:35.337 10:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:19:35.594 10:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:35.594 10:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:35.594 10:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:35.594 10:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.594 10:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:35.594 10:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:35.594 { 00:19:35.594 "cntlid": 135, 00:19:35.594 "qid": 0, 00:19:35.594 "state": "enabled", 00:19:35.594 "thread": "nvmf_tgt_poll_group_000", 00:19:35.594 "listen_address": { 00:19:35.594 "trtype": "TCP", 00:19:35.594 "adrfam": "IPv4", 00:19:35.594 "traddr": "10.0.0.2", 00:19:35.594 "trsvcid": "4420" 00:19:35.594 }, 00:19:35.594 "peer_address": { 00:19:35.594 "trtype": "TCP", 00:19:35.594 "adrfam": "IPv4", 00:19:35.594 "traddr": "10.0.0.1", 00:19:35.594 "trsvcid": "46994" 00:19:35.594 }, 00:19:35.594 "auth": { 00:19:35.594 "state": "completed", 00:19:35.594 "digest": "sha512", 00:19:35.594 "dhgroup": "ffdhe6144" 00:19:35.594 } 00:19:35.594 } 00:19:35.594 ]' 00:19:35.594 10:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:35.851 10:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:35.851 10:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:35.851 10:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:35.851 10:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:35.851 10:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:35.851 10:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:35.851 10:08:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:36.109 10:08:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:03:ODg3YzlhNDdhNmE2MWExZTAwNjcyZmM2ZTJmMjIwM2FkYzQwNjZhNTI5ODhjMTg1MDhjMWQ4ZGJiMDAxMTFhOZgH7UI=: 00:19:37.480 10:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:37.480 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:37.480 10:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:19:37.480 10:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:37.480 10:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:19:37.480 10:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:37.480 10:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:37.480 10:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:37.480 10:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:37.480 10:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:37.738 10:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 0 00:19:37.738 10:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:37.738 10:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:37.738 10:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:37.738 10:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:37.738 10:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:37.738 10:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:37.738 10:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:37.738 10:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.738 10:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:37.738 10:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:37.738 10:08:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:38.670 00:19:38.670 10:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:38.670 10:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:38.670 10:08:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:38.927 10:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:38.927 10:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 
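[Editor's note: the nvmf_subsystem_get_qpairs call beginning here is the verification half of each iteration. The sketch below mirrors the checks and teardown visible in the surrounding trace (here the key0/ffdhe8192 pass); the jq filters are the ones auth.sh uses, while the [[ ]] comparisons stand in for its pattern checks and $KEY0/$CKEY0 are hypothetical placeholders for the DHHC-1:... secrets printed in the log.]

    # The attach above must have produced a controller named nvme0
    name=$(rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name')
    [[ $name == nvme0 ]]

    # On the target, the qpair must report completed sha512/ffdhe8192 auth
    qpairs=$(rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512 ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe8192 ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]

    # Detach, then repeat the round trip with the kernel initiator,
    # passing the secrets in DHHC-1 form, before removing the host entry
    rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 \
        --hostid cd6acfbe-4794-e311-a299-001e67a97b02 \
        --dhchap-secret "$KEY0" --dhchap-ctrl-secret "$CKEY0"
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0
    rpc.py nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02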
00:19:38.927 10:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:38.927 10:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:38.927 10:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:38.927 10:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:38.927 { 00:19:38.927 "cntlid": 137, 00:19:38.927 "qid": 0, 00:19:38.927 "state": "enabled", 00:19:38.927 "thread": "nvmf_tgt_poll_group_000", 00:19:38.927 "listen_address": { 00:19:38.927 "trtype": "TCP", 00:19:38.927 "adrfam": "IPv4", 00:19:38.927 "traddr": "10.0.0.2", 00:19:38.927 "trsvcid": "4420" 00:19:38.927 }, 00:19:38.927 "peer_address": { 00:19:38.927 "trtype": "TCP", 00:19:38.927 "adrfam": "IPv4", 00:19:38.927 "traddr": "10.0.0.1", 00:19:38.927 "trsvcid": "47020" 00:19:38.927 }, 00:19:38.927 "auth": { 00:19:38.927 "state": "completed", 00:19:38.927 "digest": "sha512", 00:19:38.927 "dhgroup": "ffdhe8192" 00:19:38.927 } 00:19:38.927 } 00:19:38.927 ]' 00:19:38.927 10:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:38.927 10:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:38.927 10:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:39.185 10:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:39.185 10:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:39.185 10:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:39.185 10:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:39.185 10:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:39.443 10:08:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:00:MmE3MTQ0ZmMyYmZkNWM2ZTgwM2QwMTVjZGMzZjRlNWRlNTcxYzFkMjgwMzY0YzUw+y0etQ==: --dhchap-ctrl-secret DHHC-1:03:YjQ4YTJiZjQ2NTRiODRlNjc4YjM4MDdlMzhmZGFlOTNhOGE0ZjIxMzE3YmZkZTdhYThmMmFiYmYzYzMxNWVjMysm5Vk=: 00:19:40.375 10:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:40.375 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:40.375 10:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:19:40.375 10:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:40.375 10:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:40.375 10:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:40.375 10:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:40.375 10:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:40.375 10:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:40.940 10:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 1 00:19:40.940 10:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:40.940 10:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:40.940 10:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:40.940 10:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:40.940 10:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:40.940 10:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:40.940 10:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:40.940 10:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:40.940 10:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:40.940 10:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:40.940 10:08:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:41.873 00:19:41.873 10:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:41.873 10:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:41.873 10:08:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:42.131 10:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:42.131 10:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:42.131 10:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:42.131 10:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.131 10:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:42.131 10:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:42.131 { 00:19:42.131 "cntlid": 139, 00:19:42.131 "qid": 0, 00:19:42.131 "state": "enabled", 00:19:42.131 "thread": "nvmf_tgt_poll_group_000", 00:19:42.131 "listen_address": { 00:19:42.131 "trtype": "TCP", 00:19:42.131 "adrfam": "IPv4", 00:19:42.131 "traddr": "10.0.0.2", 00:19:42.131 "trsvcid": "4420" 00:19:42.131 }, 00:19:42.131 "peer_address": { 00:19:42.131 "trtype": "TCP", 00:19:42.131 "adrfam": "IPv4", 00:19:42.131 "traddr": "10.0.0.1", 00:19:42.131 "trsvcid": "47056" 00:19:42.131 }, 00:19:42.131 "auth": { 00:19:42.131 "state": "completed", 00:19:42.131 "digest": "sha512", 00:19:42.131 "dhgroup": "ffdhe8192" 00:19:42.131 } 00:19:42.131 } 00:19:42.131 ]' 00:19:42.131 10:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:42.131 10:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:42.131 10:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:42.131 10:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:42.131 10:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:42.131 10:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:42.131 10:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:42.131 10:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:42.696 10:08:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:01:Mzk1NTg5ZGUxNWU4MDE3ZTIwYWVkNzhhODI3NGQ0NzMtq9te: --dhchap-ctrl-secret DHHC-1:02:NWM1M2M0OWQzZTQxM2M1ZmU4YWVkNDBmNGExZTQyMWEyYzM1NzNmNmY4ZjU3YjZlY8rP+g==: 00:19:43.628 10:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:43.628 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:43.628 10:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:19:43.628 10:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:43.628 10:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:43.628 10:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:43.628 10:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:43.628 10:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:43.628 10:08:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:43.886 10:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 2 00:19:43.886 10:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:43.886 10:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:43.886 10:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:43.886 10:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:43.886 10:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:43.886 10:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:43.886 10:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:43.886 10:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:43.886 10:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:43.886 10:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:43.886 10:08:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:45.256 00:19:45.256 10:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:45.256 10:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:45.256 10:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:45.514 10:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:45.514 10:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:45.514 10:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:45.514 10:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:45.514 10:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:45.514 10:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:45.514 { 00:19:45.514 "cntlid": 141, 00:19:45.514 "qid": 0, 00:19:45.514 "state": "enabled", 00:19:45.514 "thread": "nvmf_tgt_poll_group_000", 00:19:45.514 "listen_address": 
{ 00:19:45.514 "trtype": "TCP", 00:19:45.514 "adrfam": "IPv4", 00:19:45.514 "traddr": "10.0.0.2", 00:19:45.514 "trsvcid": "4420" 00:19:45.514 }, 00:19:45.514 "peer_address": { 00:19:45.514 "trtype": "TCP", 00:19:45.514 "adrfam": "IPv4", 00:19:45.514 "traddr": "10.0.0.1", 00:19:45.514 "trsvcid": "56070" 00:19:45.514 }, 00:19:45.514 "auth": { 00:19:45.514 "state": "completed", 00:19:45.514 "digest": "sha512", 00:19:45.514 "dhgroup": "ffdhe8192" 00:19:45.514 } 00:19:45.514 } 00:19:45.514 ]' 00:19:45.514 10:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:45.514 10:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:45.514 10:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:45.514 10:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:45.514 10:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:45.514 10:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:45.514 10:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:45.514 10:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:45.771 10:08:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:02:NTZlZWM5ZmE4MThjZWIzYjUxMTBkZWY0ZGFhOGI2MDRlZWVhN2M1NzM5Y2VmN2Yy9TJSEw==: --dhchap-ctrl-secret DHHC-1:01:Y2RjYzgxMGZhOGFhNzkzOWJhZDc3MmQwOTcwODg1MTG0PvqR: 00:19:47.175 10:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:47.175 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:47.175 10:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:19:47.175 10:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:47.175 10:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:47.175 10:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:47.175 10:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:47.175 10:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:47.175 10:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:47.432 10:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 3 00:19:47.432 10:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:47.432 10:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:47.432 10:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:47.432 10:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:47.432 10:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:47.433 10:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:19:47.433 10:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:47.433 10:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:47.433 10:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:47.433 10:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:47.433 10:08:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:48.363 00:19:48.363 10:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:48.363 10:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:48.363 10:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:48.620 10:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:48.620 10:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:48.620 10:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:48.620 10:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.620 10:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:48.620 10:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:48.620 { 00:19:48.620 "cntlid": 143, 00:19:48.620 "qid": 0, 00:19:48.620 "state": "enabled", 00:19:48.620 "thread": "nvmf_tgt_poll_group_000", 00:19:48.620 "listen_address": { 00:19:48.620 "trtype": "TCP", 00:19:48.620 "adrfam": "IPv4", 00:19:48.620 "traddr": "10.0.0.2", 00:19:48.620 "trsvcid": "4420" 00:19:48.620 }, 00:19:48.620 "peer_address": { 00:19:48.620 "trtype": "TCP", 00:19:48.620 "adrfam": "IPv4", 00:19:48.620 "traddr": "10.0.0.1", 00:19:48.620 "trsvcid": "56114" 00:19:48.620 }, 00:19:48.620 "auth": { 00:19:48.620 "state": "completed", 00:19:48.620 "digest": "sha512", 00:19:48.620 "dhgroup": 
"ffdhe8192" 00:19:48.620 } 00:19:48.620 } 00:19:48.620 ]' 00:19:48.620 10:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:48.620 10:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:48.620 10:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:48.877 10:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:48.877 10:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:48.877 10:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:48.877 10:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:48.877 10:08:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:49.135 10:08:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:03:ODg3YzlhNDdhNmE2MWExZTAwNjcyZmM2ZTJmMjIwM2FkYzQwNjZhNTI5ODhjMTg1MDhjMWQ4ZGJiMDAxMTFhOZgH7UI=: 00:19:50.507 10:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:50.507 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:50.507 10:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:19:50.507 10:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:50.507 10:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.507 10:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:50.507 10:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:19:50.507 10:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@103 -- # printf %s sha256,sha384,sha512 00:19:50.507 10:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:19:50.507 10:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@103 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:50.507 10:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@102 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:50.507 10:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:19:50.764 10:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@114 -- # connect_authenticate sha512 ffdhe8192 0 00:19:50.765 10:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:50.765 10:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:50.765 10:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:50.765 10:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:50.765 10:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:50.765 10:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:50.765 10:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:50.765 10:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.765 10:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:50.765 10:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:50.765 10:08:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:51.697 00:19:51.954 10:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:51.954 10:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:51.954 10:08:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:52.212 10:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:52.212 10:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:52.212 10:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:52.212 10:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:52.212 10:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:52.212 10:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:52.212 { 00:19:52.212 "cntlid": 145, 00:19:52.212 "qid": 0, 00:19:52.212 "state": "enabled", 00:19:52.212 "thread": "nvmf_tgt_poll_group_000", 00:19:52.212 "listen_address": { 00:19:52.212 "trtype": "TCP", 00:19:52.212 "adrfam": "IPv4", 00:19:52.212 "traddr": "10.0.0.2", 00:19:52.212 "trsvcid": "4420" 00:19:52.212 }, 00:19:52.212 "peer_address": { 00:19:52.212 "trtype": "TCP", 00:19:52.212 "adrfam": "IPv4", 00:19:52.212 "traddr": "10.0.0.1", 00:19:52.212 "trsvcid": "56128" 00:19:52.212 }, 00:19:52.212 "auth": { 00:19:52.212 
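This round pairs key0 with a controller key, so authentication is mutual: the host proves knowledge of key0 and the controller proves knowledge of ckey0 back to the host. The two calls that set it up, with arguments as in the trace above and "$hostnqn" again standing in for the uuid NQN:

    rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0                        # target side
    rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4420 -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0                        # host side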
"state": "completed", 00:19:52.212 "digest": "sha512", 00:19:52.212 "dhgroup": "ffdhe8192" 00:19:52.212 } 00:19:52.212 } 00:19:52.212 ]' 00:19:52.212 10:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:52.212 10:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:52.212 10:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:52.212 10:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:52.212 10:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:52.469 10:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:52.469 10:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:52.469 10:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:53.033 10:08:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:00:MmE3MTQ0ZmMyYmZkNWM2ZTgwM2QwMTVjZGMzZjRlNWRlNTcxYzFkMjgwMzY0YzUw+y0etQ==: --dhchap-ctrl-secret DHHC-1:03:YjQ4YTJiZjQ2NTRiODRlNjc4YjM4MDdlMzhmZGFlOTNhOGE0ZjIxMzE3YmZkZTdhYThmMmFiYmYzYzMxNWVjMysm5Vk=: 00:19:53.963 10:08:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:53.963 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:53.963 10:08:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:19:53.963 10:08:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:53.963 10:08:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:53.963 10:08:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:53.963 10:08:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 00:19:53.963 10:08:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:53.963 10:08:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:53.963 10:08:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:53.963 10:08:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:19:53.963 10:08:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:19:53.963 10:08:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:19:53.963 10:08:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:19:53.963 10:08:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:53.963 10:08:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:19:53.963 10:08:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:53.963 10:08:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:19:53.963 10:08:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2
00:19:54.894 request:
00:19:54.894 {
00:19:54.894   "name": "nvme0",
00:19:54.894   "trtype": "tcp",
00:19:54.894   "traddr": "10.0.0.2",
00:19:54.894   "adrfam": "ipv4",
00:19:54.894   "trsvcid": "4420",
00:19:54.894   "subnqn": "nqn.2024-03.io.spdk:cnode0",
00:19:54.894   "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02",
00:19:54.894   "prchk_reftag": false,
00:19:54.894   "prchk_guard": false,
00:19:54.894   "hdgst": false,
00:19:54.894   "ddgst": false,
00:19:54.894   "dhchap_key": "key2",
00:19:54.894   "method": "bdev_nvme_attach_controller",
00:19:54.894   "req_id": 1
00:19:54.894 }
00:19:54.894 Got JSON-RPC error response
00:19:54.894 response:
00:19:54.894 {
00:19:54.894   "code": -5,
00:19:54.894   "message": "Input/output error"
00:19:54.894 }
00:19:54.894 10:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:19:54.894 10:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:54.894 10:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:54.894 10:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:54.894 10:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:19:54.894 10:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:54.894 10:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.894 10:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:54.894 10:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@124 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:54.894
10:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:54.894 10:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.894 10:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:54.894 10:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@125 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:54.894 10:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:19:54.894 10:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:54.894 10:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:19:54.894 10:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:54.894 10:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:19:54.894 10:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:54.894 10:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:54.894 10:08:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:19:55.823 request: 00:19:55.823 { 00:19:55.823 "name": "nvme0", 00:19:55.823 "trtype": "tcp", 00:19:55.823 "traddr": "10.0.0.2", 00:19:55.823 "adrfam": "ipv4", 00:19:55.823 "trsvcid": "4420", 00:19:55.823 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:55.823 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:19:55.823 "prchk_reftag": false, 00:19:55.823 "prchk_guard": false, 00:19:55.823 "hdgst": false, 00:19:55.823 "ddgst": false, 00:19:55.823 "dhchap_key": "key1", 00:19:55.823 "dhchap_ctrlr_key": "ckey2", 00:19:55.823 "method": "bdev_nvme_attach_controller", 00:19:55.823 "req_id": 1 00:19:55.823 } 00:19:55.823 Got JSON-RPC error response 00:19:55.823 response: 00:19:55.823 { 00:19:55.823 "code": -5, 00:19:55.823 "message": "Input/output error" 00:19:55.823 } 00:19:55.823 10:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:19:55.823 10:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:55.823 10:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:55.824 10:08:40 
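Error code -5 in these JSON-RPC responses is -EIO, as the "Input/output error" message confirms: the CONNECT is refused once the DH-HMAC-CHAP exchange cannot complete, and rpc.py propagates that as a non-zero exit status, which NOT then turns into a pass. Probing the same condition by hand might look like this (a sketch, "$hostnqn" as before):

    # expected-failure probe: ckey2 does not match the ckey1 the target was given
    if rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4420 -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey2; then
        echo "unexpected: attach succeeded" >&2
        exit 1
    fi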
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:55.824 10:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@128 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:19:55.824 10:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:55.824 10:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:55.824 10:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:55.824 10:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@131 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key1 00:19:55.824 10:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:55.824 10:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:55.824 10:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:55.824 10:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@132 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:55.824 10:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:19:55.824 10:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:55.824 10:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:19:55.824 10:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:55.824 10:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:19:55.824 10:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:19:55.824 10:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:55.824 10:08:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:57.193 request: 00:19:57.193 { 00:19:57.193 "name": "nvme0", 00:19:57.193 "trtype": "tcp", 00:19:57.193 "traddr": "10.0.0.2", 00:19:57.193 "adrfam": "ipv4", 00:19:57.193 "trsvcid": "4420", 00:19:57.193 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:19:57.193 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:19:57.193 "prchk_reftag": false, 00:19:57.193 "prchk_guard": false, 00:19:57.193 "hdgst": false, 00:19:57.193 "ddgst": false, 00:19:57.193 "dhchap_key": "key1", 00:19:57.193 "dhchap_ctrlr_key": "ckey1", 00:19:57.193 "method": "bdev_nvme_attach_controller", 00:19:57.193 "req_id": 1 00:19:57.193 } 00:19:57.193 Got JSON-RPC error response 00:19:57.193 response: 00:19:57.193 { 00:19:57.193 "code": -5, 00:19:57.193 "message": "Input/output error" 00:19:57.193 } 00:19:57.193 10:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:19:57.193 10:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:19:57.193 10:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:19:57.193 10:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:19:57.193 10:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@135 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:19:57.193 10:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:57.193 10:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:57.193 10:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:57.193 10:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@138 -- # killprocess 427306 00:19:57.193 10:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 427306 ']' 00:19:57.193 10:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 427306 00:19:57.193 10:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:19:57.193 10:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:57.193 10:08:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 427306 00:19:57.193 10:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:57.193 10:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:57.193 10:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 427306' 00:19:57.193 killing process with pid 427306 00:19:57.193 10:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 427306 00:19:57.193 10:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 427306 00:19:57.193 10:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@139 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:19:57.193 10:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:57.193 10:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:57.193 10:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:57.193 10:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@481 -- # 
nvmfpid=455369 00:19:57.193 10:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:19:57.193 10:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 455369 00:19:57.193 10:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 455369 ']' 00:19:57.193 10:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:57.193 10:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:57.193 10:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:57.193 10:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:57.193 10:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:57.759 10:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:57.759 10:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:19:57.759 10:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:57.759 10:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:57.759 10:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:57.759 10:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:57.759 10:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@140 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:19:57.759 10:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@142 -- # waitforlisten 455369 00:19:57.759 10:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 455369 ']' 00:19:57.759 10:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:57.759 10:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:57.759 10:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:57.759 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
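At this point the first target application has been replaced: a fresh nvmf_tgt is started inside the test's network namespace with the nvmf_auth log flag enabled so the remaining cases can trace the authentication path. As the @480/@482 lines above show, the launch amounts to roughly:

    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth &
    nvmfpid=$!
    waitforlisten "$nvmfpid"   # autotest helper: poll until the RPC socket answers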
00:19:57.759 10:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:57.759 10:08:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:58.016 10:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:58.016 10:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:19:58.016 10:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@143 -- # rpc_cmd 00:19:58.016 10:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:58.016 10:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:58.274 10:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:58.274 10:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@153 -- # connect_authenticate sha512 ffdhe8192 3 00:19:58.274 10:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:58.274 10:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:58.274 10:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:58.274 10:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:58.274 10:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:58.274 10:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:19:58.274 10:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:58.274 10:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:58.274 10:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:58.274 10:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:58.274 10:08:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:59.203 00:19:59.203 10:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:59.203 10:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:59.203 10:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:59.768 10:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:59.768 10:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:59.768 10:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:59.768 10:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:59.768 10:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:59.768 10:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:19:59.768 {
00:19:59.768   "cntlid": 1,
00:19:59.768   "qid": 0,
00:19:59.768   "state": "enabled",
00:19:59.768   "thread": "nvmf_tgt_poll_group_000",
00:19:59.768   "listen_address": {
00:19:59.768     "trtype": "TCP",
00:19:59.768     "adrfam": "IPv4",
00:19:59.768     "traddr": "10.0.0.2",
00:19:59.768     "trsvcid": "4420"
00:19:59.768   },
00:19:59.768   "peer_address": {
00:19:59.768     "trtype": "TCP",
00:19:59.768     "adrfam": "IPv4",
00:19:59.768     "traddr": "10.0.0.1",
00:19:59.768     "trsvcid": "57646"
00:19:59.768   },
00:19:59.768   "auth": {
00:19:59.768     "state": "completed",
00:19:59.768     "digest": "sha512",
00:19:59.768     "dhgroup": "ffdhe8192"
00:19:59.768   }
00:19:59.768 }
00:19:59.768 ]'
00:19:59.768 10:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:59.768 10:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:59.768 10:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:59.768 10:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:59.768 10:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:59.768 10:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:59.768 10:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:59.768 10:08:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:00.026 10:08:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-secret DHHC-1:03:ODg3YzlhNDdhNmE2MWExZTAwNjcyZmM2ZTJmMjIwM2FkYzQwNjZhNTI5ODhjMTg1MDhjMWQ4ZGJiMDAxMTFhOZgH7UI=: 00:20:00.960 10:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:00.960 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:00.960 10:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:20:00.960 10:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:00.960 10:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:00.960 10:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:00.960 10:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_add_host
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --dhchap-key key3 00:20:00.960 10:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:00.960 10:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:00.960 10:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:00.960 10:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@157 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:20:00.960 10:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:20:01.526 10:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@158 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:01.526 10:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:20:01.526 10:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:01.526 10:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:20:01.526 10:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:01.526 10:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:20:01.526 10:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:01.526 10:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:01.526 10:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:01.526 request: 00:20:01.526 { 00:20:01.526 "name": "nvme0", 00:20:01.526 "trtype": "tcp", 00:20:01.526 "traddr": "10.0.0.2", 00:20:01.526 "adrfam": "ipv4", 00:20:01.526 "trsvcid": "4420", 00:20:01.526 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:01.526 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:20:01.526 "prchk_reftag": false, 00:20:01.526 "prchk_guard": false, 00:20:01.526 "hdgst": false, 00:20:01.526 "ddgst": false, 00:20:01.526 "dhchap_key": "key3", 00:20:01.526 "method": "bdev_nvme_attach_controller", 00:20:01.526 "req_id": 1 00:20:01.526 } 00:20:01.526 Got JSON-RPC error response 00:20:01.526 response: 00:20:01.526 { 00:20:01.526 "code": -5, 00:20:01.526 "message": "Input/output error" 00:20:01.526 } 00:20:01.783 10:08:46 
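This failure is provoked on purpose: at @157 the host is narrowed to offering only sha256, the negotiation evidently cannot converge on parameters the target will accept, and the attach with key3 returns -EIO. The restriction itself is a one-liner, verbatim from the trace:

    rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256
    # the bdev_nvme_attach_controller with key3 that follows is expected to fail while this is in effect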
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:20:01.783 10:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:01.783 10:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:01.783 10:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:01.783 10:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # IFS=, 00:20:01.783 10:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # printf %s sha256,sha384,sha512 00:20:01.783 10:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:20:01.783 10:08:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:20:02.041 10:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@169 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:02.041 10:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:20:02.041 10:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:02.041 10:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:20:02.041 10:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:02.041 10:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:20:02.041 10:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:02.041 10:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:02.041 10:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:02.299 request: 00:20:02.299 { 00:20:02.299 "name": "nvme0", 00:20:02.299 "trtype": "tcp", 00:20:02.299 "traddr": "10.0.0.2", 00:20:02.299 "adrfam": "ipv4", 00:20:02.299 "trsvcid": "4420", 00:20:02.299 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:02.299 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02", 00:20:02.299 "prchk_reftag": false, 00:20:02.299 "prchk_guard": false, 00:20:02.299 "hdgst": false, 00:20:02.299 "ddgst": false, 00:20:02.299 "dhchap_key": "key3", 00:20:02.299 
"method": "bdev_nvme_attach_controller", 00:20:02.299 "req_id": 1 00:20:02.299 } 00:20:02.299 Got JSON-RPC error response 00:20:02.299 response: 00:20:02.299 { 00:20:02.299 "code": -5, 00:20:02.299 "message": "Input/output error" 00:20:02.299 } 00:20:02.299 10:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:20:02.299 10:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:02.299 10:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:02.299 10:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:02.299 10:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:20:02.299 10:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # printf %s sha256,sha384,sha512 00:20:02.299 10:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:20:02.299 10:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:02.299 10:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:02.299 10:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:02.557 10:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@186 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:20:02.557 10:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:02.557 10:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.557 10:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:02.557 10:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:20:02.557 10:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:02.557 10:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.557 10:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:02.557 10:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:20:02.557 10:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:20:02.557 10:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:20:02.557 10:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:20:02.557 10:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:02.557 10:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:20:02.557 10:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:02.557 10:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:20:02.557 10:08:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1
00:20:03.122 request:
00:20:03.122 {
00:20:03.122   "name": "nvme0",
00:20:03.122   "trtype": "tcp",
00:20:03.122   "traddr": "10.0.0.2",
00:20:03.122   "adrfam": "ipv4",
00:20:03.122   "trsvcid": "4420",
00:20:03.122   "subnqn": "nqn.2024-03.io.spdk:cnode0",
00:20:03.122   "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02",
00:20:03.122   "prchk_reftag": false,
00:20:03.122   "prchk_guard": false,
00:20:03.122   "hdgst": false,
00:20:03.122   "ddgst": false,
00:20:03.122   "dhchap_key": "key0",
00:20:03.122   "dhchap_ctrlr_key": "key1",
00:20:03.122   "method": "bdev_nvme_attach_controller",
00:20:03.122   "req_id": 1
00:20:03.122 }
00:20:03.122 Got JSON-RPC error response
00:20:03.122 response:
00:20:03.122 {
00:20:03.122   "code": -5,
00:20:03.122   "message": "Input/output error"
00:20:03.122 }
00:20:03.122 10:08:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:20:03.123 10:08:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:03.123 10:08:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:03.123 10:08:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:03.123 10:08:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@192 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:20:03.123 10:08:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:20:03.715 00:20:03.715 10:08:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@195 -- # hostrpc bdev_nvme_get_controllers 00:20:03.715 10:08:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@195 -- # jq -r '.[].name'
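The mismatched pairing (key0 with key1 as the controller key) is rejected, and immediately afterwards the same attach without --dhchap-ctrlr-key succeeds at @192, which isolates the failure to the controller-key check rather than key0 itself. The succeeding variant, as in the trace:

    rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4420 -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 \
        --dhchap-key key0   # unidirectional auth only; expected to succeed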
00:20:03.715 10:08:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:03.973 10:08:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@195 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:03.973 10:08:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@196 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:03.973 10:08:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:04.539 10:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # trap - SIGINT SIGTERM EXIT 00:20:04.539 10:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@199 -- # cleanup 00:20:04.539 10:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 427432 00:20:04.539 10:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 427432 ']' 00:20:04.539 10:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 427432 00:20:04.539 10:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:20:04.539 10:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:04.539 10:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 427432 00:20:04.539 10:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:20:04.539 10:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:20:04.539 10:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 427432' 00:20:04.539 killing process with pid 427432 00:20:04.539 10:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 427432 00:20:04.539 10:08:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 427432 00:20:05.104 10:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:20:05.104 10:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:05.104 10:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:20:05.104 10:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:05.104 10:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 00:20:05.104 10:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:05.104 10:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:05.104 rmmod nvme_tcp 00:20:05.104 rmmod nvme_fabrics 00:20:05.104 rmmod nvme_keyring 00:20:05.104 10:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:05.104 10:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:20:05.104 10:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:20:05.104 10:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@489 -- # '[' -n 
455369 ']' 00:20:05.104 10:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 455369 00:20:05.104 10:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 455369 ']' 00:20:05.104 10:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 455369 00:20:05.104 10:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:20:05.104 10:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:05.104 10:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 455369 00:20:05.104 10:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:05.104 10:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:05.104 10:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 455369' 00:20:05.104 killing process with pid 455369 00:20:05.104 10:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 455369 00:20:05.104 10:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 455369 00:20:05.362 10:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:05.362 10:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:05.362 10:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:05.362 10:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:05.362 10:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:05.362 10:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:05.362 10:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:05.362 10:08:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:07.894 10:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:07.894 10:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.psD /tmp/spdk.key-sha256.lwz /tmp/spdk.key-sha384.wKS /tmp/spdk.key-sha512.vMI /tmp/spdk.key-sha512.kKa /tmp/spdk.key-sha384.T25 /tmp/spdk.key-sha256.hCx '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:20:07.894 00:20:07.894 real 4m2.880s 00:20:07.894 user 9m41.702s 00:20:07.894 sys 0m32.152s 00:20:07.894 10:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:07.894 10:08:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:07.894 ************************************ 00:20:07.894 END TEST nvmf_auth_target 00:20:07.894 ************************************ 00:20:07.894 10:08:52 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:20:07.894 10:08:52 nvmf_tcp.nvmf_target_extra -- 
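Teardown mirrors the startup path: both SPDK processes are killed and reaped, the kernel nvme-tcp/nvme-fabrics/nvme-keyring modules are unloaded, the namespace is removed, and the generated key files are deleted. The killprocess idiom used twice above is roughly (a sketch; the real helper in autotest_common.sh also checks the process name, as the ps lines in the trace show):

    killprocess() {
        local pid=$1
        kill "$pid"      # SIGTERM the reactor
        wait "$pid"      # reap it so the script can observe the exit
    }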
nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:20:07.894 10:08:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:20:07.894 10:08:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:07.894 10:08:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:07.894 ************************************ 00:20:07.894 START TEST nvmf_bdevio_no_huge 00:20:07.894 ************************************ 00:20:07.894 10:08:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:20:07.894 * Looking for test storage... 00:20:07.894 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:07.894 10:08:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:07.894 10:08:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:20:07.894 10:08:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:07.894 10:08:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:07.894 10:08:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:07.894 10:08:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:07.894 10:08:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:07.894 10:08:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:07.894 10:08:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:07.894 10:08:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:07.894 10:08:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:07.894 10:08:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:07.894 10:08:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:20:07.894 10:08:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:20:07.894 10:08:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:07.894 10:08:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:07.894 10:08:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:07.894 10:08:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:07.894 10:08:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:07.894 10:08:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@508 -- # [[ -e 
/bin/wpdk_common.sh ]] 00:20:07.894 10:08:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:07.894 10:08:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:07.894 10:08:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:07.894 10:08:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:07.895 10:08:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:07.895 10:08:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:20:07.895 10:08:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:07.895 10:08:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:20:07.895 10:08:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:07.895 10:08:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:07.895 10:08:52 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:07.895 10:08:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:07.895 10:08:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:07.895 10:08:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:07.895 10:08:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:07.895 10:08:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:07.895 10:08:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:07.895 10:08:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:07.895 10:08:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:20:07.895 10:08:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:07.895 10:08:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:07.895 10:08:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:07.895 10:08:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:07.895 10:08:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:07.895 10:08:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:07.895 10:08:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:07.895 10:08:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:07.895 10:08:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:07.895 10:08:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:07.895 10:08:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@285 -- # xtrace_disable 00:20:07.895 10:08:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:10.425 10:08:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:10.425 10:08:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # pci_devs=() 00:20:10.425 10:08:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:10.425 10:08:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:10.425 10:08:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:10.425 10:08:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:10.425 10:08:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:10.425 10:08:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # net_devs=() 00:20:10.425 10:08:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:10.425 10:08:55 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # e810=() 00:20:10.425 10:08:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # local -ga e810 00:20:10.425 10:08:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # x722=() 00:20:10.425 10:08:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # local -ga x722 00:20:10.425 10:08:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # mlx=() 00:20:10.425 10:08:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # local -ga mlx 00:20:10.425 10:08:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:10.425 10:08:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:10.425 10:08:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:10.425 10:08:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:10.425 10:08:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:10.425 10:08:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:10.425 10:08:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:10.425 10:08:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:10.425 10:08:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:10.425 10:08:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:10.425 10:08:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:10.425 10:08:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:10.425 10:08:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:10.425 10:08:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:10.425 10:08:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:10.425 10:08:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:10.426 10:08:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:10.426 10:08:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:10.426 10:08:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:20:10.426 Found 0000:84:00.0 (0x8086 - 0x159b) 00:20:10.426 10:08:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:10.426 10:08:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:10.426 10:08:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:10.426 10:08:55 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:10.426 10:08:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:10.426 10:08:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:10.426 10:08:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:20:10.426 Found 0000:84:00.1 (0x8086 - 0x159b) 00:20:10.426 10:08:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:10.426 10:08:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:10.426 10:08:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:10.426 10:08:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:10.426 10:08:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:10.426 10:08:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:10.426 10:08:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:10.426 10:08:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:10.426 10:08:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:10.426 10:08:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:10.426 10:08:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:10.426 10:08:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:10.426 10:08:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:10.426 10:08:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:10.426 10:08:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:10.426 10:08:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:20:10.426 Found net devices under 0000:84:00.0: cvl_0_0 00:20:10.426 10:08:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:10.426 10:08:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:10.426 10:08:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:10.426 10:08:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:10.426 10:08:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:10.426 10:08:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:10.426 10:08:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:10.426 10:08:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
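The discovery loop above matches supported NICs by PCI vendor/device ID (here Intel 0x8086:0x159b, an E810 port) and then resolves each PCI function to its kernel net device through sysfs before choosing target and initiator interfaces. A minimal standalone sketch of that sysfs lookup, assuming the PCI address from the trace (the real nvmf/common.sh iterates its pci_devs array and also checks link state):

  pci=0000:84:00.0                                  # illustrative; taken from the trace above
  for netdir in "/sys/bus/pci/devices/$pci/net/"*; do
    [ -e "$netdir" ] || continue                    # skip functions with no bound net device
    dev=${netdir##*/}                               # basename, e.g. cvl_0_0
    echo "Found net devices under $pci: $dev"
  done

The same glob, pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*), is visible verbatim in the trace.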
00:20:10.426 10:08:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:20:10.426 Found net devices under 0000:84:00.1: cvl_0_1 00:20:10.426 10:08:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:10.426 10:08:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:10.426 10:08:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # is_hw=yes 00:20:10.426 10:08:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:10.426 10:08:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:10.426 10:08:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:10.426 10:08:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:10.426 10:08:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:10.426 10:08:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:10.426 10:08:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:10.426 10:08:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:10.426 10:08:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:10.426 10:08:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:10.426 10:08:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:10.426 10:08:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:10.426 10:08:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:10.426 10:08:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:10.426 10:08:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:10.426 10:08:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:10.426 10:08:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:10.426 10:08:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:10.426 10:08:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:10.426 10:08:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:10.426 10:08:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:10.426 10:08:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:10.426 10:08:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:10.426 PING 10.0.0.2 
(10.0.0.2) 56(84) bytes of data. 00:20:10.426 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.178 ms 00:20:10.426 00:20:10.426 --- 10.0.0.2 ping statistics --- 00:20:10.426 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:10.426 rtt min/avg/max/mdev = 0.178/0.178/0.178/0.000 ms 00:20:10.426 10:08:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:10.426 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:10.426 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.128 ms 00:20:10.426 00:20:10.426 --- 10.0.0.1 ping statistics --- 00:20:10.426 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:10.426 rtt min/avg/max/mdev = 0.128/0.128/0.128/0.000 ms 00:20:10.426 10:08:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:10.426 10:08:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # return 0 00:20:10.426 10:08:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:10.426 10:08:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:10.426 10:08:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:10.426 10:08:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:10.426 10:08:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:10.426 10:08:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:10.426 10:08:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:10.426 10:08:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:20:10.426 10:08:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:10.426 10:08:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:10.426 10:08:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:10.426 10:08:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=458297 00:20:10.426 10:08:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:20:10.426 10:08:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 458297 00:20:10.426 10:08:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@831 -- # '[' -z 458297 ']' 00:20:10.426 10:08:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:10.426 10:08:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:10.426 10:08:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:10.426 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
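The sequence just replayed builds the test topology: both ice ports are flushed, cvl_0_0 is moved into a fresh namespace (cvl_0_0_ns_spdk) as the target side at 10.0.0.2, cvl_0_1 stays in the root namespace as the initiator side at 10.0.0.1, an iptables rule admits NVMe/TCP traffic on port 4420, and the two pings verify reachability in both directions. Collected from the trace into one runnable sequence (root required; interface and namespace names as in the trace):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target port into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator address, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                 # root namespace -> target namespace
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target namespace -> root namespace

This is also why nvmf_tgt is launched below under ip netns exec cvl_0_0_ns_spdk: the target process must see cvl_0_0 inside its own namespace.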
00:20:10.427 10:08:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:10.427 10:08:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:10.427 [2024-07-25 10:08:55.361983] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:20:10.427 [2024-07-25 10:08:55.362172] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:20:10.427 [2024-07-25 10:08:55.465050] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:10.685 [2024-07-25 10:08:55.590897] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:10.685 [2024-07-25 10:08:55.590957] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:10.685 [2024-07-25 10:08:55.590974] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:10.685 [2024-07-25 10:08:55.590987] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:10.685 [2024-07-25 10:08:55.590998] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:10.685 [2024-07-25 10:08:55.591107] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:20:10.685 [2024-07-25 10:08:55.591210] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:10.685 [2024-07-25 10:08:55.591191] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:20:10.685 [2024-07-25 10:08:55.591169] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:20:10.685 10:08:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:10.685 10:08:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # return 0 00:20:10.685 10:08:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:10.685 10:08:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:10.685 10:08:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:10.685 10:08:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:10.685 10:08:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:10.685 10:08:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:10.685 10:08:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:10.685 [2024-07-25 10:08:55.724040] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:10.685 10:08:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:10.685 10:08:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:10.685 10:08:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:10.685 10:08:55 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:10.685 Malloc0 00:20:10.685 10:08:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:10.685 10:08:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:10.685 10:08:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:10.685 10:08:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:10.685 10:08:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:10.685 10:08:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:10.685 10:08:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:10.685 10:08:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:10.686 10:08:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:10.686 10:08:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:10.686 10:08:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:10.686 10:08:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:10.686 [2024-07-25 10:08:55.764553] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:10.686 10:08:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:10.686 10:08:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:20:10.686 10:08:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:20:10.686 10:08:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:20:10.686 10:08:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:20:10.686 10:08:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:20:10.686 10:08:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:20:10.686 { 00:20:10.686 "params": { 00:20:10.686 "name": "Nvme$subsystem", 00:20:10.686 "trtype": "$TEST_TRANSPORT", 00:20:10.686 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:10.686 "adrfam": "ipv4", 00:20:10.686 "trsvcid": "$NVMF_PORT", 00:20:10.686 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:10.686 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:10.686 "hdgst": ${hdgst:-false}, 00:20:10.686 "ddgst": ${ddgst:-false} 00:20:10.686 }, 00:20:10.686 "method": "bdev_nvme_attach_controller" 00:20:10.686 } 00:20:10.686 EOF 00:20:10.686 )") 00:20:10.686 10:08:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:20:10.686 10:08:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 
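The heredoc above is a per-subsystem template: gen_nvmf_target_json substitutes $TEST_TRANSPORT, $NVMF_FIRST_TARGET_IP and $NVMF_PORT from the test environment and defaults hdgst/ddgst to false, yielding the bdev_nvme_attach_controller parameters printed next. The harness reads the result over an anonymous file descriptor, which is why the invocation above shows --json /dev/fd/62; in bash that is what process substitution expands to. A hedged sketch of the shape of that invocation, not the literal script line:

  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio \
      --json <(gen_nvmf_target_json) --no-huge -s 1024   # <(...) expands to /dev/fd/NN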
00:20:10.686 10:08:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:20:10.686 10:08:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:20:10.686 "params": { 00:20:10.686 "name": "Nvme1", 00:20:10.686 "trtype": "tcp", 00:20:10.686 "traddr": "10.0.0.2", 00:20:10.686 "adrfam": "ipv4", 00:20:10.686 "trsvcid": "4420", 00:20:10.686 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:10.686 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:10.686 "hdgst": false, 00:20:10.686 "ddgst": false 00:20:10.686 }, 00:20:10.686 "method": "bdev_nvme_attach_controller" 00:20:10.686 }' 00:20:10.686 [2024-07-25 10:08:55.816551] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:20:10.686 [2024-07-25 10:08:55.816648] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid458441 ] 00:20:10.944 [2024-07-25 10:08:55.898278] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:10.944 [2024-07-25 10:08:56.021607] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:10.944 [2024-07-25 10:08:56.021658] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:10.944 [2024-07-25 10:08:56.021663] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:11.202 I/O targets: 00:20:11.202 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:20:11.202 00:20:11.202 00:20:11.202 CUnit - A unit testing framework for C - Version 2.1-3 00:20:11.202 http://cunit.sourceforge.net/ 00:20:11.202 00:20:11.202 00:20:11.202 Suite: bdevio tests on: Nvme1n1 00:20:11.202 Test: blockdev write read block ...passed 00:20:11.460 Test: blockdev write zeroes read block ...passed 00:20:11.460 Test: blockdev write zeroes read no split ...passed 00:20:11.460 Test: blockdev write zeroes read split ...passed 00:20:11.460 Test: blockdev write zeroes read split partial ...passed 00:20:11.460 Test: blockdev reset ...[2024-07-25 10:08:56.524228] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:20:11.460 [2024-07-25 10:08:56.524360] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2000670 (9): Bad file descriptor 00:20:11.460 [2024-07-25 10:08:56.537613] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:20:11.460 passed 00:20:11.460 Test: blockdev write read 8 blocks ...passed 00:20:11.460 Test: blockdev write read size > 128k ...passed 00:20:11.460 Test: blockdev write read invalid size ...passed 00:20:11.460 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:20:11.460 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:20:11.460 Test: blockdev write read max offset ...passed 00:20:11.717 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:20:11.717 Test: blockdev writev readv 8 blocks ...passed 00:20:11.717 Test: blockdev writev readv 30 x 1block ...passed 00:20:11.717 Test: blockdev writev readv block ...passed 00:20:11.717 Test: blockdev writev readv size > 128k ...passed 00:20:11.717 Test: blockdev writev readv size > 128k in two iovs ...passed 00:20:11.717 Test: blockdev comparev and writev ...[2024-07-25 10:08:56.797508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:11.717 [2024-07-25 10:08:56.797549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:11.717 [2024-07-25 10:08:56.797577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:11.717 [2024-07-25 10:08:56.797596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:11.717 [2024-07-25 10:08:56.798176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:11.717 [2024-07-25 10:08:56.798213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:20:11.717 [2024-07-25 10:08:56.798240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:11.717 [2024-07-25 10:08:56.798259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:20:11.717 [2024-07-25 10:08:56.798842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:11.717 [2024-07-25 10:08:56.798871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:20:11.717 [2024-07-25 10:08:56.798896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:11.717 [2024-07-25 10:08:56.798915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:20:11.717 [2024-07-25 10:08:56.799477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:11.717 [2024-07-25 10:08:56.799505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:20:11.717 [2024-07-25 10:08:56.799529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:11.717 [2024-07-25 10:08:56.799548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:20:11.717 passed 00:20:11.976 Test: blockdev nvme passthru rw ...passed 00:20:11.976 Test: blockdev nvme passthru vendor specific ...[2024-07-25 10:08:56.883946] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:11.976 [2024-07-25 10:08:56.883974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:20:11.976 [2024-07-25 10:08:56.884296] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:11.976 [2024-07-25 10:08:56.884330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:20:11.976 [2024-07-25 10:08:56.884625] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:11.976 [2024-07-25 10:08:56.884648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:20:11.976 [2024-07-25 10:08:56.884962] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:11.976 [2024-07-25 10:08:56.884986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:20:11.976 passed 00:20:11.976 Test: blockdev nvme admin passthru ...passed 00:20:11.976 Test: blockdev copy ...passed 00:20:11.976 00:20:11.976 Run Summary: Type Total Ran Passed Failed Inactive 00:20:11.976 suites 1 1 n/a 0 0 00:20:11.976 tests 23 23 23 0 0 00:20:11.976 asserts 152 152 152 0 n/a 00:20:11.976 00:20:11.976 Elapsed time = 1.257 seconds 00:20:12.234 10:08:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:12.234 10:08:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:12.234 10:08:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:12.234 10:08:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:12.234 10:08:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:20:12.234 10:08:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:20:12.234 10:08:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:12.234 10:08:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:20:12.234 10:08:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:12.234 10:08:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:20:12.234 10:08:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:12.234 10:08:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:12.234 rmmod nvme_tcp 00:20:12.234 rmmod nvme_fabrics 00:20:12.234 rmmod nvme_keyring 00:20:12.234 10:08:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:12.234 10:08:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@124 -- # set -e 00:20:12.234 10:08:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:20:12.234 10:08:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 458297 ']' 00:20:12.234 10:08:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@490 -- # killprocess 458297 00:20:12.234 10:08:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@950 -- # '[' -z 458297 ']' 00:20:12.234 10:08:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # kill -0 458297 00:20:12.234 10:08:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # uname 00:20:12.234 10:08:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:12.234 10:08:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 458297 00:20:12.493 10:08:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:20:12.493 10:08:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:20:12.493 10:08:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@968 -- # echo 'killing process with pid 458297' 00:20:12.493 killing process with pid 458297 00:20:12.493 10:08:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@969 -- # kill 458297 00:20:12.493 10:08:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@974 -- # wait 458297 00:20:12.752 10:08:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:12.752 10:08:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:12.752 10:08:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:12.752 10:08:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:12.752 10:08:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:12.752 10:08:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:12.752 10:08:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:12.752 10:08:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:15.285 10:08:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:15.285 00:20:15.285 real 0m7.347s 00:20:15.285 user 0m12.121s 00:20:15.285 sys 0m3.056s 00:20:15.285 10:08:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:15.285 10:08:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:15.285 ************************************ 00:20:15.285 END TEST nvmf_bdevio_no_huge 00:20:15.285 ************************************ 00:20:15.285 10:08:59 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:20:15.285 10:08:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # 
'[' 3 -le 1 ']' 00:20:15.285 10:08:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:15.285 10:08:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:15.285 ************************************ 00:20:15.285 START TEST nvmf_tls 00:20:15.285 ************************************ 00:20:15.285 10:08:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:20:15.285 * Looking for test storage... 00:20:15.285 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:15.285 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:15.285 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:20:15.285 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:15.285 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:15.285 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:15.285 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:15.285 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:15.285 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:15.285 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:15.285 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:15.285 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:15.285 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:15.285 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:20:15.285 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:20:15.285 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:15.285 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:15.285 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:15.285 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:15.285 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:15.285 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:15.285 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:15.285 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:15.285 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:15.285 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:15.285 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:15.285 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:20:15.285 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:15.285 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:20:15.285 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:15.285 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:15.285 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:15.285 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:15.286 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:15.286 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:15.286 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 
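As in the bdevio run, sourcing nvmf/common.sh re-derives the host identity with nvme-cli before any connection is attempted; the NQN seen in the trace uses the spec-defined UUID form, and NVME_HOSTID is its trailing UUID. A standalone equivalent (the exact extraction below is an assumed one-liner, not necessarily how common.sh does it; output values are host-specific):

  NVME_HOSTNQN=$(nvme gen-hostnqn)    # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
  NVME_HOSTID=${NVME_HOSTNQN##*:}     # strip through the last colon, leaving the UUID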
00:20:15.286 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:15.286 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:15.286 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:20:15.286 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:15.286 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:15.286 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:15.286 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:15.286 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:15.286 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:15.286 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:15.286 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:15.286 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:15.286 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:15.286 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@285 -- # xtrace_disable 00:20:15.286 10:09:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:17.818 10:09:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:17.818 10:09:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # pci_devs=() 00:20:17.818 10:09:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:17.818 10:09:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:17.818 10:09:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:17.818 10:09:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:17.818 10:09:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:17.818 10:09:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@295 -- # net_devs=() 00:20:17.818 10:09:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:17.818 10:09:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@296 -- # e810=() 00:20:17.818 10:09:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@296 -- # local -ga e810 00:20:17.818 10:09:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # x722=() 00:20:17.818 10:09:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # local -ga x722 00:20:17.818 10:09:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # mlx=() 00:20:17.818 10:09:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # local -ga mlx 00:20:17.818 10:09:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:17.818 10:09:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:17.818 10:09:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:17.818 10:09:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:17.818 10:09:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:17.818 10:09:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:17.818 10:09:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:17.818 10:09:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:17.818 10:09:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:17.818 10:09:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:17.818 10:09:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:17.818 10:09:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:17.818 10:09:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:17.818 10:09:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:17.818 10:09:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:17.818 10:09:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:17.818 10:09:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:17.818 10:09:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:17.819 10:09:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:20:17.819 Found 0000:84:00.0 (0x8086 - 0x159b) 00:20:17.819 10:09:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:17.819 10:09:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:17.819 10:09:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:17.819 10:09:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:17.819 10:09:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:17.819 10:09:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:17.819 10:09:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:20:17.819 Found 0000:84:00.1 (0x8086 - 0x159b) 00:20:17.819 10:09:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:17.819 10:09:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:17.819 10:09:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:17.819 10:09:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:17.819 10:09:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:17.819 10:09:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:17.819 10:09:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # 
[[ e810 == e810 ]] 00:20:17.819 10:09:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:17.819 10:09:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:17.819 10:09:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:17.819 10:09:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:17.819 10:09:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:17.819 10:09:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:17.819 10:09:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:17.819 10:09:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:17.819 10:09:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:20:17.819 Found net devices under 0000:84:00.0: cvl_0_0 00:20:17.819 10:09:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:17.819 10:09:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:17.819 10:09:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:17.819 10:09:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:17.819 10:09:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:17.819 10:09:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:17.819 10:09:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:17.819 10:09:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:17.819 10:09:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:20:17.819 Found net devices under 0000:84:00.1: cvl_0_1 00:20:17.819 10:09:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:17.819 10:09:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:17.819 10:09:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # is_hw=yes 00:20:17.819 10:09:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:17.819 10:09:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:17.819 10:09:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:17.819 10:09:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:17.819 10:09:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:17.819 10:09:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:17.819 10:09:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:17.819 10:09:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:17.819 10:09:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:17.819 10:09:02 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:17.819 10:09:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:17.819 10:09:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:17.819 10:09:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:17.819 10:09:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:17.819 10:09:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:17.819 10:09:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:17.819 10:09:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:17.819 10:09:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:17.819 10:09:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:17.819 10:09:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:17.819 10:09:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:17.819 10:09:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:17.819 10:09:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:17.819 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:17.819 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.189 ms 00:20:17.819 00:20:17.819 --- 10.0.0.2 ping statistics --- 00:20:17.819 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:17.819 rtt min/avg/max/mdev = 0.189/0.189/0.189/0.000 ms 00:20:17.819 10:09:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:17.819 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:17.819 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.101 ms 00:20:17.819 00:20:17.819 --- 10.0.0.1 ping statistics --- 00:20:17.819 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:17.819 rtt min/avg/max/mdev = 0.101/0.101/0.101/0.000 ms 00:20:17.819 10:09:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:17.819 10:09:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # return 0 00:20:17.819 10:09:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:17.819 10:09:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:17.819 10:09:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:17.819 10:09:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:17.819 10:09:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:17.819 10:09:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:17.819 10:09:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:17.819 10:09:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:20:17.819 10:09:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:17.819 10:09:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:17.819 10:09:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:17.819 10:09:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=460760 00:20:17.819 10:09:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:20:17.819 10:09:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 460760 00:20:17.819 10:09:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 460760 ']' 00:20:17.819 10:09:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:17.819 10:09:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:17.819 10:09:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:17.819 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:17.819 10:09:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:17.819 10:09:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:17.819 [2024-07-25 10:09:02.641466] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
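The nvmf_tcp_init sequence above wires the two ice ports into a self-contained loopback topology: cvl_0_0 is moved into a fresh network namespace and addressed as the target side (10.0.0.2), cvl_0_1 stays in the default namespace as the initiator (10.0.0.1), an iptables rule admits the NVMe/TCP port 4420, and a ping in each direction confirms reachability before the target application is started inside the namespace via ip netns exec (its startup banner continues below). A minimal sketch of the same technique, with hypothetical veth_tgt/veth_ini standing in for the cvl_* devices:

# Two-endpoint NVMe/TCP test link in one box; interface names are
# placeholders, the addressing and steps mirror the log above.
ip netns add tgt_ns
ip link set veth_tgt netns tgt_ns                 # target port gets its own namespace
ip addr add 10.0.0.1/24 dev veth_ini              # initiator side, default namespace
ip netns exec tgt_ns ip addr add 10.0.0.2/24 dev veth_tgt
ip link set veth_ini up
ip netns exec tgt_ns ip link set veth_tgt up
ip netns exec tgt_ns ip link set lo up
iptables -I INPUT 1 -i veth_ini -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                # initiator -> target
ip netns exec tgt_ns ping -c 1 10.0.0.1           # target -> initiator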
00:20:17.819 [2024-07-25 10:09:02.641573] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:17.819 EAL: No free 2048 kB hugepages reported on node 1 00:20:17.819 [2024-07-25 10:09:02.719337] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:17.819 [2024-07-25 10:09:02.844453] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:17.819 [2024-07-25 10:09:02.844518] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:17.819 [2024-07-25 10:09:02.844534] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:17.819 [2024-07-25 10:09:02.844547] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:17.819 [2024-07-25 10:09:02.844559] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:17.819 [2024-07-25 10:09:02.844589] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:17.819 10:09:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:17.819 10:09:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:20:17.819 10:09:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:17.819 10:09:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:17.819 10:09:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:17.820 10:09:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:17.820 10:09:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:20:17.820 10:09:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:20:18.385 true 00:20:18.385 10:09:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:18.385 10:09:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:20:18.950 10:09:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@73 -- # version=0 00:20:18.950 10:09:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:20:18.950 10:09:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:20:19.208 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:19.208 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:20:19.465 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # version=13 00:20:19.465 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:20:19.465 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 
7 00:20:19.723 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:19.723 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:20:19.981 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # version=7 00:20:19.981 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:20:19.981 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@96 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:19.982 10:09:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:20:20.240 10:09:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:20:20.240 10:09:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:20:20.240 10:09:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:20:20.805 10:09:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:20.805 10:09:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 00:20:21.071 10:09:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:20:21.071 10:09:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:20:21.071 10:09:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:20:21.392 10:09:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:21.392 10:09:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:20:21.650 10:09:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # ktls=false 00:20:21.650 10:09:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:20:21.650 10:09:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:20:21.650 10:09:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:20:21.650 10:09:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:20:21.650 10:09:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:20:21.650 10:09:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:20:21.650 10:09:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:20:21.650 10:09:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:20:21.650 10:09:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:20:21.650 10:09:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:20:21.650 10:09:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 
1 00:20:21.650 10:09:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:20:21.650 10:09:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:20:21.650 10:09:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:20:21.650 10:09:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:20:21.650 10:09:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:20:21.908 10:09:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:20:21.908 10:09:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:20:21.908 10:09:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.euw6gdkCoq 00:20:21.908 10:09:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:20:21.908 10:09:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.VDWGoAKSag 00:20:21.908 10:09:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:20:21.908 10:09:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:20:21.908 10:09:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.euw6gdkCoq 00:20:21.908 10:09:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.VDWGoAKSag 00:20:21.908 10:09:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@130 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:20:22.165 10:09:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:20:23.097 10:09:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.euw6gdkCoq 00:20:23.097 10:09:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.euw6gdkCoq 00:20:23.097 10:09:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:23.355 [2024-07-25 10:09:08.435745] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:23.355 10:09:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:23.614 10:09:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:23.871 [2024-07-25 10:09:09.017292] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:23.871 [2024-07-25 10:09:09.017581] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:23.871 10:09:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:24.436 malloc0 00:20:24.436 10:09:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:24.694 10:09:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.euw6gdkCoq 00:20:24.952 [2024-07-25 10:09:10.032044] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:20:24.952 10:09:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@137 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.euw6gdkCoq 00:20:24.952 EAL: No free 2048 kB hugepages reported on node 1 00:20:37.146 Initializing NVMe Controllers 00:20:37.146 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:37.146 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:20:37.146 Initialization complete. Launching workers. 00:20:37.146 ======================================================== 00:20:37.146 Latency(us) 00:20:37.146 Device Information : IOPS MiB/s Average min max 00:20:37.146 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 7357.68 28.74 8701.38 1330.18 13224.75 00:20:37.146 ======================================================== 00:20:37.146 Total : 7357.68 28.74 8701.38 1330.18 13224.75 00:20:37.146 00:20:37.146 10:09:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.euw6gdkCoq 00:20:37.146 10:09:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:37.146 10:09:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:37.146 10:09:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:37.146 10:09:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.euw6gdkCoq' 00:20:37.146 10:09:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:37.146 10:09:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=463290 00:20:37.146 10:09:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:37.146 10:09:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 463290 /var/tmp/bdevperf.sock 00:20:37.146 10:09:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 463290 ']' 00:20:37.146 10:09:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:37.146 10:09:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:37.146 10:09:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:37.146 10:09:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and 
listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:37.146 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:37.146 10:09:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:37.146 10:09:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:37.146 [2024-07-25 10:09:20.240382] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:20:37.146 [2024-07-25 10:09:20.240500] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid463290 ] 00:20:37.146 EAL: No free 2048 kB hugepages reported on node 1 00:20:37.146 [2024-07-25 10:09:20.317677] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:37.146 [2024-07-25 10:09:20.442242] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:37.146 10:09:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:37.146 10:09:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:20:37.146 10:09:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.euw6gdkCoq 00:20:37.147 [2024-07-25 10:09:21.066611] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:37.147 [2024-07-25 10:09:21.066753] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:37.147 TLSTESTn1 00:20:37.147 10:09:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:20:37.147 Running I/O for 10 seconds... 
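While the 10-second verify run above executes, note the control pattern in play: bdevperf was launched idle with -z against its own RPC socket, the TLS-wrapped controller was attached through that socket using the interchange-format key generated earlier, and bdevperf.py perform_tests triggers the workload whose results follow. Condensed from the commands above, with paths shortened to the repository root:

# Remote-controlled bdevperf over NVMe/TCP with TLS, as run by target/tls.sh.
./build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock \
    -q 128 -o 4096 -w verify -t 10 &
./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
    -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
    --psk /tmp/tmp.euw6gdkCoq
./examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests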
00:20:47.107 00:20:47.107 Latency(us) 00:20:47.107 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:47.107 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:20:47.107 Verification LBA range: start 0x0 length 0x2000 00:20:47.107 TLSTESTn1 : 10.05 2419.27 9.45 0.00 0.00 52782.15 10922.67 75342.13 00:20:47.107 =================================================================================================================== 00:20:47.107 Total : 2419.27 9.45 0.00 0.00 52782.15 10922.67 75342.13 00:20:47.107 0 00:20:47.107 10:09:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:47.107 10:09:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # killprocess 463290 00:20:47.107 10:09:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 463290 ']' 00:20:47.107 10:09:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 463290 00:20:47.107 10:09:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:20:47.107 10:09:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:47.107 10:09:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 463290 00:20:47.107 10:09:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:20:47.107 10:09:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:20:47.107 10:09:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 463290' 00:20:47.107 killing process with pid 463290 00:20:47.107 10:09:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 463290 00:20:47.107 Received shutdown signal, test time was about 10.000000 seconds 00:20:47.107 00:20:47.107 Latency(us) 00:20:47.107 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:47.107 =================================================================================================================== 00:20:47.107 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:47.107 [2024-07-25 10:09:31.450018] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:47.107 10:09:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 463290 00:20:47.107 10:09:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.VDWGoAKSag 00:20:47.107 10:09:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:20:47.107 10:09:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.VDWGoAKSag 00:20:47.107 10:09:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:20:47.107 10:09:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:47.107 10:09:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:20:47.107 10:09:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:47.107 
10:09:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.VDWGoAKSag 00:20:47.107 10:09:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:47.107 10:09:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:47.107 10:09:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:47.107 10:09:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.VDWGoAKSag' 00:20:47.107 10:09:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:47.107 10:09:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=464555 00:20:47.107 10:09:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:47.107 10:09:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:47.107 10:09:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 464555 /var/tmp/bdevperf.sock 00:20:47.107 10:09:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 464555 ']' 00:20:47.107 10:09:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:47.107 10:09:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:47.107 10:09:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:47.107 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:47.107 10:09:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:47.107 10:09:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:47.107 [2024-07-25 10:09:31.772640] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:20:47.107 [2024-07-25 10:09:31.772740] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid464555 ] 00:20:47.107 EAL: No free 2048 kB hugepages reported on node 1 00:20:47.107 [2024-07-25 10:09:31.836519] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:47.107 [2024-07-25 10:09:31.942009] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:47.107 10:09:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:47.107 10:09:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:20:47.107 10:09:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.VDWGoAKSag 00:20:47.365 [2024-07-25 10:09:32.326391] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:47.365 [2024-07-25 10:09:32.326525] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:47.365 [2024-07-25 10:09:32.331686] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:47.365 [2024-07-25 10:09:32.332261] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22846d0 (107): Transport endpoint is not connected 00:20:47.365 [2024-07-25 10:09:32.333250] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22846d0 (9): Bad file descriptor 00:20:47.365 [2024-07-25 10:09:32.334249] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:47.365 [2024-07-25 10:09:32.334269] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:47.365 [2024-07-25 10:09:32.334286] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
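This is the first deliberate failure case (target/tls.sh@146): the initiator offers the second key, /tmp/tmp.VDWGoAKSag, while host1 was registered on cnode1 with the first one, so the target rejects the TLS handshake and the initiator's socket dies before controller initialization completes (spdk_sock_recv errno 107, then 'Bad file descriptor'), producing the JSON-RPC error dumped below. A sketch of the expectation, assuming a plain shell check in place of the suite's NOT wrapper:

# Attaching with a key that does not match the registered PSK must fail.
if ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
    -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
    --psk /tmp/tmp.VDWGoAKSag; then
    echo "attach unexpectedly succeeded with the wrong PSK" >&2
    exit 1
fi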
00:20:47.365 request: 00:20:47.365 { 00:20:47.365 "name": "TLSTEST", 00:20:47.365 "trtype": "tcp", 00:20:47.365 "traddr": "10.0.0.2", 00:20:47.365 "adrfam": "ipv4", 00:20:47.365 "trsvcid": "4420", 00:20:47.365 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:47.365 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:47.365 "prchk_reftag": false, 00:20:47.365 "prchk_guard": false, 00:20:47.365 "hdgst": false, 00:20:47.365 "ddgst": false, 00:20:47.365 "psk": "/tmp/tmp.VDWGoAKSag", 00:20:47.365 "method": "bdev_nvme_attach_controller", 00:20:47.365 "req_id": 1 00:20:47.365 } 00:20:47.365 Got JSON-RPC error response 00:20:47.365 response: 00:20:47.365 { 00:20:47.365 "code": -5, 00:20:47.365 "message": "Input/output error" 00:20:47.365 } 00:20:47.365 10:09:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 464555 00:20:47.365 10:09:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 464555 ']' 00:20:47.365 10:09:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 464555 00:20:47.365 10:09:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:20:47.365 10:09:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:47.365 10:09:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 464555 00:20:47.365 10:09:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:20:47.365 10:09:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:20:47.365 10:09:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 464555' 00:20:47.365 killing process with pid 464555 00:20:47.365 10:09:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 464555 00:20:47.365 Received shutdown signal, test time was about 10.000000 seconds 00:20:47.365 00:20:47.365 Latency(us) 00:20:47.365 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:47.365 =================================================================================================================== 00:20:47.365 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:47.365 [2024-07-25 10:09:32.384397] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:47.365 10:09:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 464555 00:20:47.623 10:09:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:20:47.623 10:09:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:20:47.623 10:09:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:47.623 10:09:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:47.623 10:09:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:47.623 10:09:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.euw6gdkCoq 00:20:47.623 10:09:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:20:47.623 10:09:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg 
run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.euw6gdkCoq 00:20:47.623 10:09:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:20:47.623 10:09:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:47.623 10:09:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:20:47.623 10:09:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:47.623 10:09:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.euw6gdkCoq 00:20:47.623 10:09:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:47.623 10:09:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:47.624 10:09:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:20:47.624 10:09:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.euw6gdkCoq' 00:20:47.624 10:09:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:47.624 10:09:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=464629 00:20:47.624 10:09:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:47.624 10:09:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:47.624 10:09:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 464629 /var/tmp/bdevperf.sock 00:20:47.624 10:09:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 464629 ']' 00:20:47.624 10:09:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:47.624 10:09:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:47.624 10:09:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:47.624 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:47.624 10:09:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:47.624 10:09:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:47.624 [2024-07-25 10:09:32.685129] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:20:47.624 [2024-07-25 10:09:32.685228] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid464629 ] 00:20:47.624 EAL: No free 2048 kB hugepages reported on node 1 00:20:47.624 [2024-07-25 10:09:32.757237] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:47.881 [2024-07-25 10:09:32.870288] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:47.881 10:09:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:47.881 10:09:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:20:47.881 10:09:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.euw6gdkCoq 00:20:48.445 [2024-07-25 10:09:33.533607] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:48.445 [2024-07-25 10:09:33.533769] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:48.445 [2024-07-25 10:09:33.544252] tcp.c: 894:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:20:48.445 [2024-07-25 10:09:33.544281] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:20:48.445 [2024-07-25 10:09:33.544319] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:48.445 [2024-07-25 10:09:33.544585] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xeb76d0 (107): Transport endpoint is not connected 00:20:48.445 [2024-07-25 10:09:33.545575] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xeb76d0 (9): Bad file descriptor 00:20:48.445 [2024-07-25 10:09:33.546574] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:48.445 [2024-07-25 10:09:33.546594] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:48.445 [2024-07-25 10:09:33.546611] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
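Here (target/tls.sh@149) the key is correct but the host NQN is not: during the handshake the target builds a PSK identity of the form 'NVMe0R01 <hostnqn> <subnqn>' and looks it up among the hosts registered on the subsystem, and since only host1 was added to cnode1, the lookup for host2 fails ('Could not find PSK for identity' above) with the same errno-107 symptom on the initiator, followed by the error response below. The breakdown of the identity prefix in this sketch is an assumption based on the NVMe TLS interchange rules (TP 8006), not something stated in the log:

# PSK identity as seen in the lookup errors; prefix fields assumed to be
# "NVMe" + identity version 0 + 'R' (retained PSK) + 01 (SHA-256 hash).
hostnqn=nqn.2016-06.io.spdk:host2
subnqn=nqn.2016-06.io.spdk:cnode1
psk_identity="NVMe0R01 ${hostnqn} ${subnqn}"
echo "$psk_identity"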
00:20:48.445 request: 00:20:48.445 { 00:20:48.445 "name": "TLSTEST", 00:20:48.445 "trtype": "tcp", 00:20:48.445 "traddr": "10.0.0.2", 00:20:48.445 "adrfam": "ipv4", 00:20:48.445 "trsvcid": "4420", 00:20:48.445 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:48.445 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:48.445 "prchk_reftag": false, 00:20:48.445 "prchk_guard": false, 00:20:48.445 "hdgst": false, 00:20:48.445 "ddgst": false, 00:20:48.445 "psk": "/tmp/tmp.euw6gdkCoq", 00:20:48.445 "method": "bdev_nvme_attach_controller", 00:20:48.445 "req_id": 1 00:20:48.445 } 00:20:48.445 Got JSON-RPC error response 00:20:48.445 response: 00:20:48.445 { 00:20:48.445 "code": -5, 00:20:48.445 "message": "Input/output error" 00:20:48.445 } 00:20:48.445 10:09:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 464629 00:20:48.445 10:09:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 464629 ']' 00:20:48.445 10:09:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 464629 00:20:48.445 10:09:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:20:48.445 10:09:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:48.445 10:09:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 464629 00:20:48.445 10:09:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:20:48.445 10:09:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:20:48.445 10:09:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 464629' 00:20:48.445 killing process with pid 464629 00:20:48.445 10:09:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 464629 00:20:48.445 Received shutdown signal, test time was about 10.000000 seconds 00:20:48.445 00:20:48.445 Latency(us) 00:20:48.445 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:48.445 =================================================================================================================== 00:20:48.445 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:48.445 [2024-07-25 10:09:33.594578] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:48.445 10:09:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 464629 00:20:48.702 10:09:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:20:48.702 10:09:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:20:48.702 10:09:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:48.702 10:09:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:48.702 10:09:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:48.702 10:09:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.euw6gdkCoq 00:20:48.702 10:09:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:20:48.702 10:09:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg 
run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.euw6gdkCoq 00:20:48.702 10:09:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:20:48.702 10:09:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:48.702 10:09:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:20:48.702 10:09:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:48.702 10:09:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.euw6gdkCoq 00:20:48.702 10:09:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:48.702 10:09:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:20:48.702 10:09:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:48.702 10:09:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.euw6gdkCoq' 00:20:48.702 10:09:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:48.702 10:09:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=464778 00:20:48.702 10:09:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:48.702 10:09:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:48.702 10:09:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 464778 /var/tmp/bdevperf.sock 00:20:48.702 10:09:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 464778 ']' 00:20:48.702 10:09:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:48.702 10:09:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:48.702 10:09:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:48.702 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:48.702 10:09:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:48.702 10:09:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:48.959 [2024-07-25 10:09:33.908410] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:20:48.959 [2024-07-25 10:09:33.908505] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid464778 ] 00:20:48.959 EAL: No free 2048 kB hugepages reported on node 1 00:20:48.959 [2024-07-25 10:09:33.973135] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:48.959 [2024-07-25 10:09:34.082012] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:49.217 10:09:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:49.217 10:09:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:20:49.217 10:09:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.euw6gdkCoq 00:20:49.475 [2024-07-25 10:09:34.506541] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:49.475 [2024-07-25 10:09:34.506662] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:49.475 [2024-07-25 10:09:34.516836] tcp.c: 894:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:20:49.475 [2024-07-25 10:09:34.516866] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:20:49.475 [2024-07-25 10:09:34.516905] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:49.475 [2024-07-25 10:09:34.517651] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22886d0 (107): Transport endpoint is not connected 00:20:49.475 [2024-07-25 10:09:34.518641] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22886d0 (9): Bad file descriptor 00:20:49.475 [2024-07-25 10:09:34.519640] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:20:49.475 [2024-07-25 10:09:34.519659] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:49.475 [2024-07-25 10:09:34.519677] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
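The third case (target/tls.sh@152) is the mirror image: a registered host against a subsystem, cnode2, that was never set up for TLS in this run, so the server-side identity lookup fails again. Every host/subsystem pair that should connect over TLS needs its own registration; a hypothetical fix-up that the test intentionally omits (serial number invented for illustration) would look like:

# NOT run by the suite: creating cnode2 and registering host1 on it with the
# same key would make the identity "NVMe0R01 ...host1 ...cnode2" resolvable.
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 \
    -s SPDK00000000000002 -m 10
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 \
    -t tcp -a 10.0.0.2 -s 4420 -k
./scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode2 \
    nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.euw6gdkCoq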
00:20:49.475 request: 00:20:49.475 { 00:20:49.475 "name": "TLSTEST", 00:20:49.475 "trtype": "tcp", 00:20:49.475 "traddr": "10.0.0.2", 00:20:49.475 "adrfam": "ipv4", 00:20:49.475 "trsvcid": "4420", 00:20:49.475 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:49.475 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:49.475 "prchk_reftag": false, 00:20:49.475 "prchk_guard": false, 00:20:49.475 "hdgst": false, 00:20:49.475 "ddgst": false, 00:20:49.475 "psk": "/tmp/tmp.euw6gdkCoq", 00:20:49.475 "method": "bdev_nvme_attach_controller", 00:20:49.475 "req_id": 1 00:20:49.475 } 00:20:49.475 Got JSON-RPC error response 00:20:49.475 response: 00:20:49.475 { 00:20:49.475 "code": -5, 00:20:49.475 "message": "Input/output error" 00:20:49.475 } 00:20:49.475 10:09:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 464778 00:20:49.475 10:09:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 464778 ']' 00:20:49.475 10:09:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 464778 00:20:49.475 10:09:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:20:49.475 10:09:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:49.475 10:09:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 464778 00:20:49.475 10:09:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:20:49.475 10:09:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:20:49.475 10:09:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 464778' 00:20:49.475 killing process with pid 464778 00:20:49.475 10:09:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 464778 00:20:49.475 Received shutdown signal, test time was about 10.000000 seconds 00:20:49.475 00:20:49.475 Latency(us) 00:20:49.475 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:49.475 =================================================================================================================== 00:20:49.475 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:49.475 [2024-07-25 10:09:34.574624] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:49.475 10:09:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 464778 00:20:49.733 10:09:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:20:49.733 10:09:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:20:49.733 10:09:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:49.733 10:09:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:49.733 10:09:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:49.733 10:09:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:20:49.733 10:09:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:20:49.733 10:09:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf 
nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:20:49.733 10:09:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:20:49.733 10:09:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:49.733 10:09:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:20:49.733 10:09:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:49.733 10:09:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:20:49.733 10:09:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:49.733 10:09:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:49.733 10:09:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:49.733 10:09:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:20:49.733 10:09:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:49.733 10:09:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=464914 00:20:49.733 10:09:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:49.733 10:09:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:49.733 10:09:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 464914 /var/tmp/bdevperf.sock 00:20:49.733 10:09:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 464914 ']' 00:20:49.733 10:09:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:49.733 10:09:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:49.733 10:09:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:49.733 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:49.733 10:09:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:49.733 10:09:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:49.733 [2024-07-25 10:09:34.896212] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:20:49.733 [2024-07-25 10:09:34.896311] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid464914 ] 00:20:49.991 EAL: No free 2048 kB hugepages reported on node 1 00:20:49.991 [2024-07-25 10:09:34.970265] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:49.991 [2024-07-25 10:09:35.090393] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:50.249 10:09:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:50.249 10:09:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:20:50.249 10:09:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:20:50.506 [2024-07-25 10:09:35.539625] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:20:50.506 [2024-07-25 10:09:35.541195] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2501e10 (9): Bad file descriptor 00:20:50.506 [2024-07-25 10:09:35.542189] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:50.506 [2024-07-25 10:09:35.542213] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:20:50.506 [2024-07-25 10:09:35.542234] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
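(For orientation: target/tls.sh@155 deliberately runs the bdevperf attach with an empty PSK. Because the listener set up for these cases requires TLS, the plaintext connect is torn down, which surfaces as the errno 107 / failed-state errors above and the -5 Input/output error dumped just after this. Stripped of the xtrace noise, the negative check amounts to the following sketch, with the socket, address, and NQNs exactly as in this run:)

    # attach with no --psk against a TLS-required listener; failure is the expected result
    if scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1; then
        echo "plaintext attach unexpectedly succeeded" >&2
        exit 1
    fi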
00:20:50.506 request: 00:20:50.506 { 00:20:50.506 "name": "TLSTEST", 00:20:50.506 "trtype": "tcp", 00:20:50.506 "traddr": "10.0.0.2", 00:20:50.506 "adrfam": "ipv4", 00:20:50.506 "trsvcid": "4420", 00:20:50.506 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:50.506 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:50.507 "prchk_reftag": false, 00:20:50.507 "prchk_guard": false, 00:20:50.507 "hdgst": false, 00:20:50.507 "ddgst": false, 00:20:50.507 "method": "bdev_nvme_attach_controller", 00:20:50.507 "req_id": 1 00:20:50.507 } 00:20:50.507 Got JSON-RPC error response 00:20:50.507 response: 00:20:50.507 { 00:20:50.507 "code": -5, 00:20:50.507 "message": "Input/output error" 00:20:50.507 } 00:20:50.507 10:09:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 464914 00:20:50.507 10:09:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 464914 ']' 00:20:50.507 10:09:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 464914 00:20:50.507 10:09:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:20:50.507 10:09:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:50.507 10:09:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 464914 00:20:50.507 10:09:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:20:50.507 10:09:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:20:50.507 10:09:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 464914' 00:20:50.507 killing process with pid 464914 00:20:50.507 10:09:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 464914 00:20:50.507 Received shutdown signal, test time was about 10.000000 seconds 00:20:50.507 00:20:50.507 Latency(us) 00:20:50.507 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:50.507 =================================================================================================================== 00:20:50.507 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:50.507 10:09:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 464914 00:20:50.764 10:09:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:20:50.764 10:09:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:20:50.764 10:09:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:50.764 10:09:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:50.764 10:09:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:50.764 10:09:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@158 -- # killprocess 460760 00:20:50.764 10:09:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 460760 ']' 00:20:50.764 10:09:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 460760 00:20:50.764 10:09:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:20:50.764 10:09:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:50.764 10:09:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 460760 00:20:50.764 10:09:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:20:50.764 10:09:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:20:50.764 10:09:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 460760' 00:20:50.764 killing process with pid 460760 00:20:50.764 10:09:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 460760 00:20:50.764 [2024-07-25 10:09:35.864384] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:20:50.764 10:09:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 460760 00:20:51.022 10:09:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:20:51.022 10:09:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:20:51.022 10:09:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:20:51.022 10:09:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:20:51.022 10:09:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:20:51.022 10:09:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:20:51.022 10:09:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:20:51.306 10:09:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:20:51.306 10:09:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:20:51.306 10:09:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.4XVnji5A9K 00:20:51.306 10:09:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:20:51.306 10:09:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.4XVnji5A9K 00:20:51.306 10:09:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:20:51.306 10:09:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:51.306 10:09:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:51.306 10:09:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:51.306 10:09:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=465073 00:20:51.306 10:09:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:51.306 10:09:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 465073 00:20:51.306 10:09:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 465073 ']' 00:20:51.306 10:09:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:51.306 10:09:36 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:51.306 10:09:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:51.306 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:51.306 10:09:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:51.306 10:09:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:51.307 [2024-07-25 10:09:36.291556] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:20:51.307 [2024-07-25 10:09:36.291649] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:51.307 EAL: No free 2048 kB hugepages reported on node 1 00:20:51.307 [2024-07-25 10:09:36.366590] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:51.566 [2024-07-25 10:09:36.491732] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:51.566 [2024-07-25 10:09:36.491797] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:51.566 [2024-07-25 10:09:36.491813] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:51.566 [2024-07-25 10:09:36.491827] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:51.566 [2024-07-25 10:09:36.491838] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
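(The key_long string assembled above follows the NVMe/TCP PSK interchange format: the literal prefix NVMeTLSkey-1, a two-digit hash identifier (01 for HMAC-SHA-256, 02 for HMAC-SHA-384, matching the "2" passed to format_interchange_psk), then base64 of the configured key characters followed by a CRC32, and a trailing colon. A standalone sketch of that assembly; the little-endian CRC byte order is an assumption inferred from the format_key trace, and if it holds this prints the same key_long value seen above:)

    key=00112233445566778899aabbccddeeff0011223344556677
    # hash id 02 = HMAC-SHA-384; CRC32 appended little-endian (assumption)
    python3 -c 'import base64,sys,zlib; k=sys.argv[1].encode(); crc=zlib.crc32(k).to_bytes(4,"little"); print("NVMeTLSkey-1:02:"+base64.b64encode(k+crc).decode()+":")' "$key"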
00:20:51.566 [2024-07-25 10:09:36.491878] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:51.566 10:09:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:51.566 10:09:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:20:51.566 10:09:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:51.566 10:09:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:51.566 10:09:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:51.566 10:09:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:51.566 10:09:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.4XVnji5A9K 00:20:51.566 10:09:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.4XVnji5A9K 00:20:51.566 10:09:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:52.131 [2024-07-25 10:09:37.182600] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:52.131 10:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:52.696 10:09:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:53.261 [2024-07-25 10:09:38.125151] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:53.261 [2024-07-25 10:09:38.125438] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:53.261 10:09:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:53.519 malloc0 00:20:53.519 10:09:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:54.085 10:09:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.4XVnji5A9K 00:20:54.343 [2024-07-25 10:09:39.260401] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:20:54.343 10:09:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.4XVnji5A9K 00:20:54.343 10:09:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:20:54.343 10:09:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:20:54.343 10:09:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:20:54.343 10:09:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.4XVnji5A9K' 00:20:54.343 10:09:39 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:54.343 10:09:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=465479 00:20:54.343 10:09:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:54.343 10:09:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:20:54.343 10:09:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 465479 /var/tmp/bdevperf.sock 00:20:54.343 10:09:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 465479 ']' 00:20:54.343 10:09:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:54.343 10:09:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:54.343 10:09:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:54.343 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:54.343 10:09:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:54.343 10:09:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:54.343 [2024-07-25 10:09:39.327363] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:20:54.343 [2024-07-25 10:09:39.327461] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid465479 ] 00:20:54.343 EAL: No free 2048 kB hugepages reported on node 1 00:20:54.343 [2024-07-25 10:09:39.395501] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:54.601 [2024-07-25 10:09:39.518858] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:54.601 10:09:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:54.601 10:09:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:20:54.601 10:09:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.4XVnji5A9K 00:20:54.860 [2024-07-25 10:09:39.961817] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:54.860 [2024-07-25 10:09:39.961919] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:55.117 TLSTESTn1 00:20:55.117 10:09:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:20:55.117 Running I/O for 10 seconds... 
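(The run now in flight condenses to three steps, each visible verbatim in the trace above: start bdevperf idle with -z on a private RPC socket, attach the TLS controller through that socket with --psk, then kick off traffic with bdevperf.py perform_tests. Paths are relative to the spdk checkout used in this workspace:)

    build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
    # once the RPC socket is up, attach over TLS
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
        --psk /tmp/tmp.4XVnji5A9K
    examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests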
00:21:07.325 00:21:07.325 Latency(us) 00:21:07.325 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:07.325 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:07.325 Verification LBA range: start 0x0 length 0x2000 00:21:07.325 TLSTESTn1 : 10.05 2328.58 9.10 0.00 0.00 54828.79 5606.97 80390.83 00:21:07.325 =================================================================================================================== 00:21:07.325 Total : 2328.58 9.10 0.00 0.00 54828.79 5606.97 80390.83 00:21:07.325 0 00:21:07.325 10:09:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:07.325 10:09:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # killprocess 465479 00:21:07.325 10:09:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 465479 ']' 00:21:07.325 10:09:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 465479 00:21:07.325 10:09:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:21:07.325 10:09:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:07.325 10:09:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 465479 00:21:07.325 10:09:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:21:07.325 10:09:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:21:07.325 10:09:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 465479' 00:21:07.325 killing process with pid 465479 00:21:07.325 10:09:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 465479 00:21:07.325 Received shutdown signal, test time was about 10.000000 seconds 00:21:07.325 00:21:07.325 Latency(us) 00:21:07.325 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:07.325 =================================================================================================================== 00:21:07.325 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:07.325 [2024-07-25 10:09:50.314610] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:21:07.325 10:09:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 465479 00:21:07.325 10:09:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.4XVnji5A9K 00:21:07.325 10:09:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.4XVnji5A9K 00:21:07.325 10:09:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:21:07.325 10:09:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.4XVnji5A9K 00:21:07.325 10:09:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:21:07.325 10:09:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:07.325 10:09:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:21:07.325 10:09:50 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:07.325 10:09:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.4XVnji5A9K 00:21:07.325 10:09:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:07.325 10:09:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:07.325 10:09:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:07.325 10:09:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.4XVnji5A9K' 00:21:07.325 10:09:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:07.325 10:09:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=466793 00:21:07.325 10:09:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:07.325 10:09:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:07.325 10:09:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 466793 /var/tmp/bdevperf.sock 00:21:07.325 10:09:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 466793 ']' 00:21:07.325 10:09:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:07.325 10:09:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:07.325 10:09:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:07.325 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:07.325 10:09:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:07.325 10:09:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:07.325 [2024-07-25 10:09:50.632927] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:21:07.325 [2024-07-25 10:09:50.633017] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid466793 ] 00:21:07.325 EAL: No free 2048 kB hugepages reported on node 1 00:21:07.325 [2024-07-25 10:09:50.696171] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:07.325 [2024-07-25 10:09:50.800860] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:07.325 10:09:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:07.325 10:09:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:21:07.325 10:09:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.4XVnji5A9K 00:21:07.325 [2024-07-25 10:09:51.193596] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:07.325 [2024-07-25 10:09:51.193658] bdev_nvme.c:6153:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:21:07.325 [2024-07-25 10:09:51.193674] bdev_nvme.c:6258:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.4XVnji5A9K 00:21:07.325 request: 00:21:07.325 { 00:21:07.325 "name": "TLSTEST", 00:21:07.325 "trtype": "tcp", 00:21:07.325 "traddr": "10.0.0.2", 00:21:07.325 "adrfam": "ipv4", 00:21:07.325 "trsvcid": "4420", 00:21:07.325 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:07.325 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:07.325 "prchk_reftag": false, 00:21:07.325 "prchk_guard": false, 00:21:07.325 "hdgst": false, 00:21:07.325 "ddgst": false, 00:21:07.325 "psk": "/tmp/tmp.4XVnji5A9K", 00:21:07.325 "method": "bdev_nvme_attach_controller", 00:21:07.325 "req_id": 1 00:21:07.325 } 00:21:07.325 Got JSON-RPC error response 00:21:07.325 response: 00:21:07.325 { 00:21:07.325 "code": -1, 00:21:07.325 "message": "Operation not permitted" 00:21:07.325 } 00:21:07.325 10:09:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 466793 00:21:07.325 10:09:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 466793 ']' 00:21:07.325 10:09:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 466793 00:21:07.326 10:09:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:21:07.326 10:09:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:07.326 10:09:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 466793 00:21:07.326 10:09:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:21:07.326 10:09:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:21:07.326 10:09:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 466793' 00:21:07.326 killing process with pid 466793 00:21:07.326 10:09:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 466793 00:21:07.326 Received shutdown signal, test time was about 10.000000 seconds 00:21:07.326 
00:21:07.326 Latency(us) 00:21:07.326 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:07.326 =================================================================================================================== 00:21:07.326 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:07.326 10:09:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 466793 00:21:07.326 10:09:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:21:07.326 10:09:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:21:07.326 10:09:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:07.326 10:09:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:07.326 10:09:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:07.326 10:09:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@174 -- # killprocess 465073 00:21:07.326 10:09:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 465073 ']' 00:21:07.326 10:09:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 465073 00:21:07.326 10:09:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:21:07.326 10:09:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:07.326 10:09:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 465073 00:21:07.326 10:09:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:21:07.326 10:09:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:21:07.326 10:09:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 465073' 00:21:07.326 killing process with pid 465073 00:21:07.326 10:09:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 465073 00:21:07.326 [2024-07-25 10:09:51.526320] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:21:07.326 10:09:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 465073 00:21:07.326 10:09:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:21:07.326 10:09:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:07.326 10:09:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:07.326 10:09:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:07.326 10:09:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=466936 00:21:07.326 10:09:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:07.326 10:09:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 466936 00:21:07.326 10:09:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 466936 ']' 00:21:07.326 10:09:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:07.326 10:09:51 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:07.326 10:09:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:07.326 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:07.326 10:09:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:07.326 10:09:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:07.326 [2024-07-25 10:09:51.893831] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:21:07.326 [2024-07-25 10:09:51.893926] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:07.326 EAL: No free 2048 kB hugepages reported on node 1 00:21:07.326 [2024-07-25 10:09:51.969664] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:07.326 [2024-07-25 10:09:52.094721] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:07.326 [2024-07-25 10:09:52.094791] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:07.326 [2024-07-25 10:09:52.094808] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:07.326 [2024-07-25 10:09:52.094821] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:07.326 [2024-07-25 10:09:52.094832] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
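(The -1 "Operation not permitted" failure above is the point of the @170/@171 case: bdev_nvme refuses to load a PSK file whose mode grants group or other access, which is why the harness chmods the key to 0666 first and treats a successful attach as the error. Reduced to its essentials:)

    chmod 0666 /tmp/tmp.4XVnji5A9K           # deliberately too permissive
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
        --psk /tmp/tmp.4XVnji5A9K && exit 1  # must fail: "Incorrect permissions for PSK file"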
00:21:07.326 [2024-07-25 10:09:52.094877] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:07.326 10:09:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:07.326 10:09:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:21:07.326 10:09:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:07.326 10:09:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:07.326 10:09:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:07.326 10:09:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:07.326 10:09:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.4XVnji5A9K 00:21:07.326 10:09:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:21:07.326 10:09:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.4XVnji5A9K 00:21:07.326 10:09:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=setup_nvmf_tgt 00:21:07.326 10:09:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:07.326 10:09:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t setup_nvmf_tgt 00:21:07.326 10:09:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:07.326 10:09:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # setup_nvmf_tgt /tmp/tmp.4XVnji5A9K 00:21:07.326 10:09:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.4XVnji5A9K 00:21:07.326 10:09:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:07.584 [2024-07-25 10:09:52.677627] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:07.584 10:09:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:08.150 10:09:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:08.713 [2024-07-25 10:09:53.756484] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:08.713 [2024-07-25 10:09:53.756777] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:08.713 10:09:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:09.278 malloc0 00:21:09.278 10:09:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:09.842 10:09:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 
nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.4XVnji5A9K 00:21:10.100 [2024-07-25 10:09:55.076809] tcp.c:3635:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:21:10.100 [2024-07-25 10:09:55.076853] tcp.c:3721:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:21:10.100 [2024-07-25 10:09:55.076891] subsystem.c:1052:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:21:10.100 request: 00:21:10.100 { 00:21:10.100 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:10.100 "host": "nqn.2016-06.io.spdk:host1", 00:21:10.100 "psk": "/tmp/tmp.4XVnji5A9K", 00:21:10.100 "method": "nvmf_subsystem_add_host", 00:21:10.100 "req_id": 1 00:21:10.100 } 00:21:10.100 Got JSON-RPC error response 00:21:10.100 response: 00:21:10.100 { 00:21:10.100 "code": -32603, 00:21:10.100 "message": "Internal error" 00:21:10.100 } 00:21:10.100 10:09:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:21:10.100 10:09:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:10.100 10:09:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:10.100 10:09:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:10.100 10:09:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@180 -- # killprocess 466936 00:21:10.100 10:09:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 466936 ']' 00:21:10.100 10:09:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 466936 00:21:10.100 10:09:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:21:10.100 10:09:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:10.100 10:09:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 466936 00:21:10.100 10:09:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:21:10.100 10:09:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:21:10.100 10:09:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 466936' 00:21:10.100 killing process with pid 466936 00:21:10.100 10:09:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 466936 00:21:10.100 10:09:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 466936 00:21:10.358 10:09:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.4XVnji5A9K 00:21:10.358 10:09:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:21:10.358 10:09:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:10.358 10:09:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:10.358 10:09:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:10.358 10:09:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=467366 00:21:10.358 10:09:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:10.358 10:09:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 
467366 00:21:10.358 10:09:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 467366 ']' 00:21:10.358 10:09:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:10.358 10:09:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:10.358 10:09:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:10.358 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:10.358 10:09:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:10.358 10:09:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:10.358 [2024-07-25 10:09:55.523439] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:21:10.358 [2024-07-25 10:09:55.523549] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:10.615 EAL: No free 2048 kB hugepages reported on node 1 00:21:10.615 [2024-07-25 10:09:55.600021] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:10.615 [2024-07-25 10:09:55.717606] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:10.615 [2024-07-25 10:09:55.717676] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:10.615 [2024-07-25 10:09:55.717693] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:10.615 [2024-07-25 10:09:55.717706] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:10.615 [2024-07-25 10:09:55.717718] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
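(nvmf_subsystem_add_host enforces the same permission rule on the target side, which is the -32603 "Internal error" seen earlier while the key was still mode 0666; @181 restores 0600, and the @185 setup_nvmf_tgt that follows succeeds. That setup, distilled from the trace into its bare RPC sequence:)

    chmod 0600 /tmp/tmp.4XVnji5A9K
    scripts/rpc.py nvmf_create_transport -t tcp -o
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
    scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.4XVnji5A9K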
00:21:10.615 [2024-07-25 10:09:55.717751] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:10.873 10:09:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:10.873 10:09:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:21:10.873 10:09:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:10.873 10:09:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:10.873 10:09:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:10.873 10:09:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:10.873 10:09:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.4XVnji5A9K 00:21:10.873 10:09:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.4XVnji5A9K 00:21:10.873 10:09:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:11.130 [2024-07-25 10:09:56.194074] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:11.130 10:09:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:11.388 10:09:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:11.954 [2024-07-25 10:09:56.835756] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:11.954 [2024-07-25 10:09:56.836030] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:11.954 10:09:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:12.211 malloc0 00:21:12.211 10:09:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:12.469 10:09:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.4XVnji5A9K 00:21:13.035 [2024-07-25 10:09:58.135882] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:21:13.035 10:09:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=467662 00:21:13.035 10:09:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@187 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:13.035 10:09:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:13.035 10:09:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 467662 /var/tmp/bdevperf.sock 00:21:13.035 10:09:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # 
'[' -z 467662 ']' 00:21:13.035 10:09:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:13.035 10:09:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:13.035 10:09:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:13.035 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:13.035 10:09:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:13.035 10:09:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:13.292 [2024-07-25 10:09:58.205285] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:21:13.293 [2024-07-25 10:09:58.205375] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid467662 ] 00:21:13.293 EAL: No free 2048 kB hugepages reported on node 1 00:21:13.293 [2024-07-25 10:09:58.272248] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:13.293 [2024-07-25 10:09:58.394838] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:13.550 10:09:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:13.550 10:09:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:21:13.551 10:09:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.4XVnji5A9K 00:21:13.808 [2024-07-25 10:09:58.838195] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:13.808 [2024-07-25 10:09:58.838326] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:13.808 TLSTESTn1 00:21:13.808 10:09:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:21:14.394 10:09:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:21:14.394 "subsystems": [ 00:21:14.394 { 00:21:14.394 "subsystem": "keyring", 00:21:14.394 "config": [] 00:21:14.394 }, 00:21:14.394 { 00:21:14.394 "subsystem": "iobuf", 00:21:14.394 "config": [ 00:21:14.394 { 00:21:14.394 "method": "iobuf_set_options", 00:21:14.394 "params": { 00:21:14.394 "small_pool_count": 8192, 00:21:14.394 "large_pool_count": 1024, 00:21:14.394 "small_bufsize": 8192, 00:21:14.394 "large_bufsize": 135168 00:21:14.394 } 00:21:14.394 } 00:21:14.394 ] 00:21:14.394 }, 00:21:14.394 { 00:21:14.394 "subsystem": "sock", 00:21:14.394 "config": [ 00:21:14.394 { 00:21:14.394 "method": "sock_set_default_impl", 00:21:14.394 "params": { 00:21:14.394 "impl_name": "posix" 00:21:14.394 } 00:21:14.394 }, 00:21:14.394 { 00:21:14.394 "method": "sock_impl_set_options", 00:21:14.394 "params": { 00:21:14.394 "impl_name": "ssl", 00:21:14.394 "recv_buf_size": 4096, 00:21:14.394 "send_buf_size": 4096, 
00:21:14.394 "enable_recv_pipe": true, 00:21:14.394 "enable_quickack": false, 00:21:14.394 "enable_placement_id": 0, 00:21:14.394 "enable_zerocopy_send_server": true, 00:21:14.394 "enable_zerocopy_send_client": false, 00:21:14.394 "zerocopy_threshold": 0, 00:21:14.394 "tls_version": 0, 00:21:14.394 "enable_ktls": false 00:21:14.395 } 00:21:14.395 }, 00:21:14.395 { 00:21:14.395 "method": "sock_impl_set_options", 00:21:14.395 "params": { 00:21:14.395 "impl_name": "posix", 00:21:14.395 "recv_buf_size": 2097152, 00:21:14.395 "send_buf_size": 2097152, 00:21:14.395 "enable_recv_pipe": true, 00:21:14.395 "enable_quickack": false, 00:21:14.395 "enable_placement_id": 0, 00:21:14.395 "enable_zerocopy_send_server": true, 00:21:14.395 "enable_zerocopy_send_client": false, 00:21:14.395 "zerocopy_threshold": 0, 00:21:14.395 "tls_version": 0, 00:21:14.395 "enable_ktls": false 00:21:14.395 } 00:21:14.395 } 00:21:14.395 ] 00:21:14.395 }, 00:21:14.395 { 00:21:14.395 "subsystem": "vmd", 00:21:14.395 "config": [] 00:21:14.395 }, 00:21:14.395 { 00:21:14.395 "subsystem": "accel", 00:21:14.395 "config": [ 00:21:14.395 { 00:21:14.395 "method": "accel_set_options", 00:21:14.395 "params": { 00:21:14.395 "small_cache_size": 128, 00:21:14.395 "large_cache_size": 16, 00:21:14.395 "task_count": 2048, 00:21:14.395 "sequence_count": 2048, 00:21:14.395 "buf_count": 2048 00:21:14.395 } 00:21:14.395 } 00:21:14.395 ] 00:21:14.395 }, 00:21:14.395 { 00:21:14.395 "subsystem": "bdev", 00:21:14.395 "config": [ 00:21:14.395 { 00:21:14.395 "method": "bdev_set_options", 00:21:14.395 "params": { 00:21:14.395 "bdev_io_pool_size": 65535, 00:21:14.395 "bdev_io_cache_size": 256, 00:21:14.395 "bdev_auto_examine": true, 00:21:14.395 "iobuf_small_cache_size": 128, 00:21:14.395 "iobuf_large_cache_size": 16 00:21:14.395 } 00:21:14.395 }, 00:21:14.395 { 00:21:14.395 "method": "bdev_raid_set_options", 00:21:14.395 "params": { 00:21:14.395 "process_window_size_kb": 1024, 00:21:14.395 "process_max_bandwidth_mb_sec": 0 00:21:14.395 } 00:21:14.395 }, 00:21:14.395 { 00:21:14.395 "method": "bdev_iscsi_set_options", 00:21:14.395 "params": { 00:21:14.395 "timeout_sec": 30 00:21:14.395 } 00:21:14.395 }, 00:21:14.395 { 00:21:14.395 "method": "bdev_nvme_set_options", 00:21:14.395 "params": { 00:21:14.395 "action_on_timeout": "none", 00:21:14.395 "timeout_us": 0, 00:21:14.395 "timeout_admin_us": 0, 00:21:14.395 "keep_alive_timeout_ms": 10000, 00:21:14.395 "arbitration_burst": 0, 00:21:14.395 "low_priority_weight": 0, 00:21:14.395 "medium_priority_weight": 0, 00:21:14.395 "high_priority_weight": 0, 00:21:14.395 "nvme_adminq_poll_period_us": 10000, 00:21:14.395 "nvme_ioq_poll_period_us": 0, 00:21:14.395 "io_queue_requests": 0, 00:21:14.395 "delay_cmd_submit": true, 00:21:14.395 "transport_retry_count": 4, 00:21:14.395 "bdev_retry_count": 3, 00:21:14.395 "transport_ack_timeout": 0, 00:21:14.395 "ctrlr_loss_timeout_sec": 0, 00:21:14.395 "reconnect_delay_sec": 0, 00:21:14.395 "fast_io_fail_timeout_sec": 0, 00:21:14.395 "disable_auto_failback": false, 00:21:14.395 "generate_uuids": false, 00:21:14.395 "transport_tos": 0, 00:21:14.395 "nvme_error_stat": false, 00:21:14.395 "rdma_srq_size": 0, 00:21:14.395 "io_path_stat": false, 00:21:14.395 "allow_accel_sequence": false, 00:21:14.395 "rdma_max_cq_size": 0, 00:21:14.395 "rdma_cm_event_timeout_ms": 0, 00:21:14.395 "dhchap_digests": [ 00:21:14.395 "sha256", 00:21:14.395 "sha384", 00:21:14.395 "sha512" 00:21:14.395 ], 00:21:14.395 "dhchap_dhgroups": [ 00:21:14.395 "null", 00:21:14.395 "ffdhe2048", 00:21:14.395 
"ffdhe3072", 00:21:14.395 "ffdhe4096", 00:21:14.395 "ffdhe6144", 00:21:14.395 "ffdhe8192" 00:21:14.395 ] 00:21:14.395 } 00:21:14.395 }, 00:21:14.395 { 00:21:14.395 "method": "bdev_nvme_set_hotplug", 00:21:14.395 "params": { 00:21:14.395 "period_us": 100000, 00:21:14.395 "enable": false 00:21:14.395 } 00:21:14.395 }, 00:21:14.395 { 00:21:14.395 "method": "bdev_malloc_create", 00:21:14.395 "params": { 00:21:14.395 "name": "malloc0", 00:21:14.395 "num_blocks": 8192, 00:21:14.395 "block_size": 4096, 00:21:14.395 "physical_block_size": 4096, 00:21:14.395 "uuid": "ae5d9050-27cc-4002-ba68-beaf39a6807d", 00:21:14.395 "optimal_io_boundary": 0, 00:21:14.395 "md_size": 0, 00:21:14.395 "dif_type": 0, 00:21:14.395 "dif_is_head_of_md": false, 00:21:14.395 "dif_pi_format": 0 00:21:14.395 } 00:21:14.395 }, 00:21:14.395 { 00:21:14.395 "method": "bdev_wait_for_examine" 00:21:14.395 } 00:21:14.395 ] 00:21:14.395 }, 00:21:14.395 { 00:21:14.395 "subsystem": "nbd", 00:21:14.395 "config": [] 00:21:14.395 }, 00:21:14.395 { 00:21:14.395 "subsystem": "scheduler", 00:21:14.395 "config": [ 00:21:14.395 { 00:21:14.395 "method": "framework_set_scheduler", 00:21:14.395 "params": { 00:21:14.395 "name": "static" 00:21:14.395 } 00:21:14.395 } 00:21:14.395 ] 00:21:14.395 }, 00:21:14.395 { 00:21:14.395 "subsystem": "nvmf", 00:21:14.395 "config": [ 00:21:14.395 { 00:21:14.395 "method": "nvmf_set_config", 00:21:14.395 "params": { 00:21:14.395 "discovery_filter": "match_any", 00:21:14.395 "admin_cmd_passthru": { 00:21:14.395 "identify_ctrlr": false 00:21:14.395 } 00:21:14.395 } 00:21:14.395 }, 00:21:14.395 { 00:21:14.395 "method": "nvmf_set_max_subsystems", 00:21:14.395 "params": { 00:21:14.395 "max_subsystems": 1024 00:21:14.395 } 00:21:14.395 }, 00:21:14.395 { 00:21:14.395 "method": "nvmf_set_crdt", 00:21:14.395 "params": { 00:21:14.395 "crdt1": 0, 00:21:14.395 "crdt2": 0, 00:21:14.395 "crdt3": 0 00:21:14.395 } 00:21:14.395 }, 00:21:14.395 { 00:21:14.395 "method": "nvmf_create_transport", 00:21:14.395 "params": { 00:21:14.395 "trtype": "TCP", 00:21:14.395 "max_queue_depth": 128, 00:21:14.395 "max_io_qpairs_per_ctrlr": 127, 00:21:14.395 "in_capsule_data_size": 4096, 00:21:14.395 "max_io_size": 131072, 00:21:14.395 "io_unit_size": 131072, 00:21:14.395 "max_aq_depth": 128, 00:21:14.395 "num_shared_buffers": 511, 00:21:14.395 "buf_cache_size": 4294967295, 00:21:14.395 "dif_insert_or_strip": false, 00:21:14.395 "zcopy": false, 00:21:14.395 "c2h_success": false, 00:21:14.395 "sock_priority": 0, 00:21:14.395 "abort_timeout_sec": 1, 00:21:14.395 "ack_timeout": 0, 00:21:14.395 "data_wr_pool_size": 0 00:21:14.395 } 00:21:14.395 }, 00:21:14.395 { 00:21:14.395 "method": "nvmf_create_subsystem", 00:21:14.395 "params": { 00:21:14.395 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:14.395 "allow_any_host": false, 00:21:14.395 "serial_number": "SPDK00000000000001", 00:21:14.395 "model_number": "SPDK bdev Controller", 00:21:14.395 "max_namespaces": 10, 00:21:14.395 "min_cntlid": 1, 00:21:14.395 "max_cntlid": 65519, 00:21:14.395 "ana_reporting": false 00:21:14.395 } 00:21:14.395 }, 00:21:14.395 { 00:21:14.395 "method": "nvmf_subsystem_add_host", 00:21:14.395 "params": { 00:21:14.395 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:14.395 "host": "nqn.2016-06.io.spdk:host1", 00:21:14.395 "psk": "/tmp/tmp.4XVnji5A9K" 00:21:14.395 } 00:21:14.395 }, 00:21:14.395 { 00:21:14.395 "method": "nvmf_subsystem_add_ns", 00:21:14.395 "params": { 00:21:14.395 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:14.395 "namespace": { 00:21:14.395 "nsid": 1, 00:21:14.395 
"bdev_name": "malloc0", 00:21:14.395 "nguid": "AE5D905027CC4002BA68BEAF39A6807D", 00:21:14.395 "uuid": "ae5d9050-27cc-4002-ba68-beaf39a6807d", 00:21:14.395 "no_auto_visible": false 00:21:14.395 } 00:21:14.395 } 00:21:14.395 }, 00:21:14.395 { 00:21:14.395 "method": "nvmf_subsystem_add_listener", 00:21:14.395 "params": { 00:21:14.395 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:14.396 "listen_address": { 00:21:14.396 "trtype": "TCP", 00:21:14.396 "adrfam": "IPv4", 00:21:14.396 "traddr": "10.0.0.2", 00:21:14.396 "trsvcid": "4420" 00:21:14.396 }, 00:21:14.396 "secure_channel": true 00:21:14.396 } 00:21:14.396 } 00:21:14.396 ] 00:21:14.396 } 00:21:14.396 ] 00:21:14.396 }' 00:21:14.396 10:09:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@197 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:21:14.663 10:09:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:21:14.663 "subsystems": [ 00:21:14.663 { 00:21:14.663 "subsystem": "keyring", 00:21:14.663 "config": [] 00:21:14.663 }, 00:21:14.663 { 00:21:14.663 "subsystem": "iobuf", 00:21:14.663 "config": [ 00:21:14.663 { 00:21:14.663 "method": "iobuf_set_options", 00:21:14.663 "params": { 00:21:14.663 "small_pool_count": 8192, 00:21:14.663 "large_pool_count": 1024, 00:21:14.663 "small_bufsize": 8192, 00:21:14.663 "large_bufsize": 135168 00:21:14.663 } 00:21:14.663 } 00:21:14.663 ] 00:21:14.663 }, 00:21:14.663 { 00:21:14.663 "subsystem": "sock", 00:21:14.663 "config": [ 00:21:14.663 { 00:21:14.663 "method": "sock_set_default_impl", 00:21:14.663 "params": { 00:21:14.663 "impl_name": "posix" 00:21:14.663 } 00:21:14.663 }, 00:21:14.663 { 00:21:14.663 "method": "sock_impl_set_options", 00:21:14.663 "params": { 00:21:14.663 "impl_name": "ssl", 00:21:14.663 "recv_buf_size": 4096, 00:21:14.663 "send_buf_size": 4096, 00:21:14.663 "enable_recv_pipe": true, 00:21:14.663 "enable_quickack": false, 00:21:14.663 "enable_placement_id": 0, 00:21:14.663 "enable_zerocopy_send_server": true, 00:21:14.663 "enable_zerocopy_send_client": false, 00:21:14.663 "zerocopy_threshold": 0, 00:21:14.663 "tls_version": 0, 00:21:14.663 "enable_ktls": false 00:21:14.663 } 00:21:14.663 }, 00:21:14.663 { 00:21:14.663 "method": "sock_impl_set_options", 00:21:14.663 "params": { 00:21:14.663 "impl_name": "posix", 00:21:14.663 "recv_buf_size": 2097152, 00:21:14.663 "send_buf_size": 2097152, 00:21:14.663 "enable_recv_pipe": true, 00:21:14.663 "enable_quickack": false, 00:21:14.663 "enable_placement_id": 0, 00:21:14.663 "enable_zerocopy_send_server": true, 00:21:14.663 "enable_zerocopy_send_client": false, 00:21:14.663 "zerocopy_threshold": 0, 00:21:14.663 "tls_version": 0, 00:21:14.663 "enable_ktls": false 00:21:14.663 } 00:21:14.663 } 00:21:14.663 ] 00:21:14.663 }, 00:21:14.663 { 00:21:14.663 "subsystem": "vmd", 00:21:14.663 "config": [] 00:21:14.663 }, 00:21:14.663 { 00:21:14.663 "subsystem": "accel", 00:21:14.663 "config": [ 00:21:14.663 { 00:21:14.663 "method": "accel_set_options", 00:21:14.663 "params": { 00:21:14.663 "small_cache_size": 128, 00:21:14.663 "large_cache_size": 16, 00:21:14.663 "task_count": 2048, 00:21:14.663 "sequence_count": 2048, 00:21:14.663 "buf_count": 2048 00:21:14.663 } 00:21:14.663 } 00:21:14.663 ] 00:21:14.663 }, 00:21:14.663 { 00:21:14.663 "subsystem": "bdev", 00:21:14.663 "config": [ 00:21:14.663 { 00:21:14.663 "method": "bdev_set_options", 00:21:14.663 "params": { 00:21:14.663 "bdev_io_pool_size": 65535, 00:21:14.663 "bdev_io_cache_size": 256, 00:21:14.663 
"bdev_auto_examine": true, 00:21:14.663 "iobuf_small_cache_size": 128, 00:21:14.663 "iobuf_large_cache_size": 16 00:21:14.663 } 00:21:14.663 }, 00:21:14.663 { 00:21:14.663 "method": "bdev_raid_set_options", 00:21:14.663 "params": { 00:21:14.663 "process_window_size_kb": 1024, 00:21:14.663 "process_max_bandwidth_mb_sec": 0 00:21:14.663 } 00:21:14.663 }, 00:21:14.663 { 00:21:14.663 "method": "bdev_iscsi_set_options", 00:21:14.663 "params": { 00:21:14.663 "timeout_sec": 30 00:21:14.663 } 00:21:14.663 }, 00:21:14.663 { 00:21:14.663 "method": "bdev_nvme_set_options", 00:21:14.663 "params": { 00:21:14.663 "action_on_timeout": "none", 00:21:14.663 "timeout_us": 0, 00:21:14.664 "timeout_admin_us": 0, 00:21:14.664 "keep_alive_timeout_ms": 10000, 00:21:14.664 "arbitration_burst": 0, 00:21:14.664 "low_priority_weight": 0, 00:21:14.664 "medium_priority_weight": 0, 00:21:14.664 "high_priority_weight": 0, 00:21:14.664 "nvme_adminq_poll_period_us": 10000, 00:21:14.664 "nvme_ioq_poll_period_us": 0, 00:21:14.664 "io_queue_requests": 512, 00:21:14.664 "delay_cmd_submit": true, 00:21:14.664 "transport_retry_count": 4, 00:21:14.664 "bdev_retry_count": 3, 00:21:14.664 "transport_ack_timeout": 0, 00:21:14.664 "ctrlr_loss_timeout_sec": 0, 00:21:14.664 "reconnect_delay_sec": 0, 00:21:14.664 "fast_io_fail_timeout_sec": 0, 00:21:14.664 "disable_auto_failback": false, 00:21:14.664 "generate_uuids": false, 00:21:14.664 "transport_tos": 0, 00:21:14.664 "nvme_error_stat": false, 00:21:14.664 "rdma_srq_size": 0, 00:21:14.664 "io_path_stat": false, 00:21:14.664 "allow_accel_sequence": false, 00:21:14.664 "rdma_max_cq_size": 0, 00:21:14.664 "rdma_cm_event_timeout_ms": 0, 00:21:14.664 "dhchap_digests": [ 00:21:14.664 "sha256", 00:21:14.664 "sha384", 00:21:14.664 "sha512" 00:21:14.664 ], 00:21:14.664 "dhchap_dhgroups": [ 00:21:14.664 "null", 00:21:14.664 "ffdhe2048", 00:21:14.664 "ffdhe3072", 00:21:14.664 "ffdhe4096", 00:21:14.664 "ffdhe6144", 00:21:14.664 "ffdhe8192" 00:21:14.664 ] 00:21:14.664 } 00:21:14.664 }, 00:21:14.664 { 00:21:14.664 "method": "bdev_nvme_attach_controller", 00:21:14.664 "params": { 00:21:14.664 "name": "TLSTEST", 00:21:14.664 "trtype": "TCP", 00:21:14.664 "adrfam": "IPv4", 00:21:14.664 "traddr": "10.0.0.2", 00:21:14.664 "trsvcid": "4420", 00:21:14.664 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:14.664 "prchk_reftag": false, 00:21:14.664 "prchk_guard": false, 00:21:14.664 "ctrlr_loss_timeout_sec": 0, 00:21:14.664 "reconnect_delay_sec": 0, 00:21:14.664 "fast_io_fail_timeout_sec": 0, 00:21:14.664 "psk": "/tmp/tmp.4XVnji5A9K", 00:21:14.664 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:14.664 "hdgst": false, 00:21:14.664 "ddgst": false 00:21:14.664 } 00:21:14.664 }, 00:21:14.664 { 00:21:14.664 "method": "bdev_nvme_set_hotplug", 00:21:14.664 "params": { 00:21:14.664 "period_us": 100000, 00:21:14.664 "enable": false 00:21:14.664 } 00:21:14.664 }, 00:21:14.664 { 00:21:14.664 "method": "bdev_wait_for_examine" 00:21:14.664 } 00:21:14.664 ] 00:21:14.664 }, 00:21:14.664 { 00:21:14.664 "subsystem": "nbd", 00:21:14.664 "config": [] 00:21:14.664 } 00:21:14.664 ] 00:21:14.664 }' 00:21:14.664 10:09:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # killprocess 467662 00:21:14.664 10:09:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 467662 ']' 00:21:14.664 10:09:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 467662 00:21:14.664 10:09:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:21:14.664 
10:09:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:14.664 10:09:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 467662 00:21:14.664 10:09:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:21:14.664 10:09:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:21:14.664 10:09:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 467662' 00:21:14.664 killing process with pid 467662 00:21:14.664 10:09:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 467662 00:21:14.664 Received shutdown signal, test time was about 10.000000 seconds 00:21:14.664 00:21:14.664 Latency(us) 00:21:14.664 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:14.664 =================================================================================================================== 00:21:14.664 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:14.664 [2024-07-25 10:09:59.788043] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:21:14.664 10:09:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 467662 00:21:14.922 10:10:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@200 -- # killprocess 467366 00:21:14.922 10:10:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 467366 ']' 00:21:14.922 10:10:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 467366 00:21:14.922 10:10:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:21:14.922 10:10:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:14.922 10:10:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 467366 00:21:14.922 10:10:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:21:14.922 10:10:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:21:14.922 10:10:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 467366' 00:21:14.922 killing process with pid 467366 00:21:14.922 10:10:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 467366 00:21:14.922 [2024-07-25 10:10:00.083833] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:21:14.922 10:10:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 467366 00:21:15.487 10:10:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:21:15.487 10:10:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:15.487 10:10:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:21:15.487 "subsystems": [ 00:21:15.487 { 00:21:15.487 "subsystem": "keyring", 00:21:15.487 "config": [] 00:21:15.487 }, 00:21:15.487 { 00:21:15.487 "subsystem": "iobuf", 00:21:15.487 "config": [ 00:21:15.487 { 00:21:15.487 "method": "iobuf_set_options", 00:21:15.487 "params": { 
00:21:15.487 "small_pool_count": 8192, 00:21:15.487 "large_pool_count": 1024, 00:21:15.487 "small_bufsize": 8192, 00:21:15.487 "large_bufsize": 135168 00:21:15.487 } 00:21:15.487 } 00:21:15.487 ] 00:21:15.487 }, 00:21:15.487 { 00:21:15.487 "subsystem": "sock", 00:21:15.487 "config": [ 00:21:15.487 { 00:21:15.487 "method": "sock_set_default_impl", 00:21:15.487 "params": { 00:21:15.487 "impl_name": "posix" 00:21:15.487 } 00:21:15.487 }, 00:21:15.487 { 00:21:15.487 "method": "sock_impl_set_options", 00:21:15.487 "params": { 00:21:15.487 "impl_name": "ssl", 00:21:15.487 "recv_buf_size": 4096, 00:21:15.487 "send_buf_size": 4096, 00:21:15.487 "enable_recv_pipe": true, 00:21:15.487 "enable_quickack": false, 00:21:15.487 "enable_placement_id": 0, 00:21:15.487 "enable_zerocopy_send_server": true, 00:21:15.487 "enable_zerocopy_send_client": false, 00:21:15.487 "zerocopy_threshold": 0, 00:21:15.487 "tls_version": 0, 00:21:15.487 "enable_ktls": false 00:21:15.487 } 00:21:15.487 }, 00:21:15.487 { 00:21:15.487 "method": "sock_impl_set_options", 00:21:15.487 "params": { 00:21:15.487 "impl_name": "posix", 00:21:15.487 "recv_buf_size": 2097152, 00:21:15.487 "send_buf_size": 2097152, 00:21:15.487 "enable_recv_pipe": true, 00:21:15.487 "enable_quickack": false, 00:21:15.487 "enable_placement_id": 0, 00:21:15.487 "enable_zerocopy_send_server": true, 00:21:15.487 "enable_zerocopy_send_client": false, 00:21:15.487 "zerocopy_threshold": 0, 00:21:15.487 "tls_version": 0, 00:21:15.487 "enable_ktls": false 00:21:15.487 } 00:21:15.487 } 00:21:15.487 ] 00:21:15.487 }, 00:21:15.487 { 00:21:15.487 "subsystem": "vmd", 00:21:15.487 "config": [] 00:21:15.487 }, 00:21:15.487 { 00:21:15.487 "subsystem": "accel", 00:21:15.487 "config": [ 00:21:15.487 { 00:21:15.487 "method": "accel_set_options", 00:21:15.487 "params": { 00:21:15.487 "small_cache_size": 128, 00:21:15.487 "large_cache_size": 16, 00:21:15.487 "task_count": 2048, 00:21:15.487 "sequence_count": 2048, 00:21:15.487 "buf_count": 2048 00:21:15.487 } 00:21:15.487 } 00:21:15.487 ] 00:21:15.487 }, 00:21:15.487 { 00:21:15.487 "subsystem": "bdev", 00:21:15.487 "config": [ 00:21:15.487 { 00:21:15.487 "method": "bdev_set_options", 00:21:15.487 "params": { 00:21:15.487 "bdev_io_pool_size": 65535, 00:21:15.487 "bdev_io_cache_size": 256, 00:21:15.487 "bdev_auto_examine": true, 00:21:15.487 "iobuf_small_cache_size": 128, 00:21:15.487 "iobuf_large_cache_size": 16 00:21:15.487 } 00:21:15.487 }, 00:21:15.487 { 00:21:15.487 "method": "bdev_raid_set_options", 00:21:15.487 "params": { 00:21:15.487 "process_window_size_kb": 1024, 00:21:15.487 "process_max_bandwidth_mb_sec": 0 00:21:15.487 } 00:21:15.487 }, 00:21:15.487 { 00:21:15.487 "method": "bdev_iscsi_set_options", 00:21:15.487 "params": { 00:21:15.487 "timeout_sec": 30 00:21:15.487 } 00:21:15.487 }, 00:21:15.487 { 00:21:15.487 "method": "bdev_nvme_set_options", 00:21:15.487 "params": { 00:21:15.487 "action_on_timeout": "none", 00:21:15.487 "timeout_us": 0, 00:21:15.487 "timeout_admin_us": 0, 00:21:15.487 "keep_alive_timeout_ms": 10000, 00:21:15.487 "arbitration_burst": 0, 00:21:15.487 "low_priority_weight": 0, 00:21:15.487 "medium_priority_weight": 0, 00:21:15.487 "high_priority_weight": 0, 00:21:15.487 "nvme_adminq_poll_period_us": 10000, 00:21:15.487 "nvme_ioq_poll_period_us": 0, 00:21:15.487 "io_queue_requests": 0, 00:21:15.487 "delay_cmd_submit": true, 00:21:15.487 "transport_retry_count": 4, 00:21:15.487 "bdev_retry_count": 3, 00:21:15.487 "transport_ack_timeout": 0, 00:21:15.487 "ctrlr_loss_timeout_sec": 0, 00:21:15.487 
"reconnect_delay_sec": 0, 00:21:15.487 "fast_io_fail_timeout_sec": 0, 00:21:15.487 "disable_auto_failback": false, 00:21:15.487 "generate_uuids": false, 00:21:15.487 "transport_tos": 0, 00:21:15.487 "nvme_error_stat": false, 00:21:15.487 "rdma_srq_size": 0, 00:21:15.487 "io_path_stat": false, 00:21:15.487 "allow_accel_sequence": false, 00:21:15.487 "rdma_max_cq_size": 0, 00:21:15.487 "rdma_cm_event_timeout_ms": 0, 00:21:15.487 "dhchap_digests": [ 00:21:15.487 "sha256", 00:21:15.487 "sha384", 00:21:15.487 "sha512" 00:21:15.487 ], 00:21:15.487 "dhchap_dhgroups": [ 00:21:15.487 "null", 00:21:15.487 "ffdhe2048", 00:21:15.487 "ffdhe3072", 00:21:15.487 "ffdhe4096", 00:21:15.487 "ffdhe6144", 00:21:15.487 "ffdhe8192" 00:21:15.487 ] 00:21:15.487 } 00:21:15.487 }, 00:21:15.487 { 00:21:15.487 "method": "bdev_nvme_set_hotplug", 00:21:15.487 "params": { 00:21:15.487 "period_us": 100000, 00:21:15.487 "enable": false 00:21:15.487 } 00:21:15.487 }, 00:21:15.487 { 00:21:15.487 "method": "bdev_malloc_create", 00:21:15.488 "params": { 00:21:15.488 "name": "malloc0", 00:21:15.488 "num_blocks": 8192, 00:21:15.488 "block_size": 4096, 00:21:15.488 "physical_block_size": 4096, 00:21:15.488 "uuid": "ae5d9050-27cc-4002-ba68-beaf39a6807d", 00:21:15.488 "optimal_io_boundary": 0, 00:21:15.488 "md_size": 0, 00:21:15.488 "dif_type": 0, 00:21:15.488 "dif_is_head_of_md": false, 00:21:15.488 "dif_pi_format": 0 00:21:15.488 } 00:21:15.488 }, 00:21:15.488 { 00:21:15.488 "method": "bdev_wait_for_examine" 00:21:15.488 } 00:21:15.488 ] 00:21:15.488 }, 00:21:15.488 { 00:21:15.488 "subsystem": "nbd", 00:21:15.488 "config": [] 00:21:15.488 }, 00:21:15.488 { 00:21:15.488 "subsystem": "scheduler", 00:21:15.488 "config": [ 00:21:15.488 { 00:21:15.488 "method": "framework_set_scheduler", 00:21:15.488 "params": { 00:21:15.488 "name": "static" 00:21:15.488 } 00:21:15.488 } 00:21:15.488 ] 00:21:15.488 }, 00:21:15.488 { 00:21:15.488 "subsystem": "nvmf", 00:21:15.488 "config": [ 00:21:15.488 { 00:21:15.488 "method": "nvmf_set_config", 00:21:15.488 "params": { 00:21:15.488 "discovery_filter": "match_any", 00:21:15.488 "admin_cmd_passthru": { 00:21:15.488 "identify_ctrlr": false 00:21:15.488 } 00:21:15.488 } 00:21:15.488 }, 00:21:15.488 { 00:21:15.488 "method": "nvmf_set_max_subsystems", 00:21:15.488 "params": { 00:21:15.488 "max_subsystems": 1024 00:21:15.488 } 00:21:15.488 }, 00:21:15.488 { 00:21:15.488 "method": "nvmf_set_crdt", 00:21:15.488 "params": { 00:21:15.488 "crdt1": 0, 00:21:15.488 "crdt2": 0, 00:21:15.488 "crdt3": 0 00:21:15.488 } 00:21:15.488 }, 00:21:15.488 { 00:21:15.488 "method": "nvmf_create_transport", 00:21:15.488 "params": { 00:21:15.488 "trtype": "TCP", 00:21:15.488 "max_queue_depth": 128, 00:21:15.488 "max_io_qpairs_per_ctrlr": 127, 00:21:15.488 "in_capsule_data_size": 4096, 00:21:15.488 "max_io_size": 131072, 00:21:15.488 "io_unit_size": 131072, 00:21:15.488 "max_aq_depth": 128, 00:21:15.488 "num_shared_buffers": 511, 00:21:15.488 "buf_cache_size": 4294967295, 00:21:15.488 "dif_insert_or_strip": false, 00:21:15.488 "zcopy": false, 00:21:15.488 "c2h_success": false, 00:21:15.488 "sock_priority": 0, 00:21:15.488 "abort_timeout_sec": 1, 00:21:15.488 "ack_timeout": 0, 00:21:15.488 "data_wr_pool_size": 0 00:21:15.488 } 00:21:15.488 }, 00:21:15.488 { 00:21:15.488 "method": "nvmf_create_subsystem", 00:21:15.488 "params": { 00:21:15.488 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:15.488 "allow_any_host": false, 00:21:15.488 "serial_number": "SPDK00000000000001", 00:21:15.488 "model_number": "SPDK bdev Controller", 00:21:15.488 
"max_namespaces": 10, 00:21:15.488 "min_cntlid": 1, 00:21:15.488 "max_cntlid": 65519, 00:21:15.488 "ana_reporting": false 00:21:15.488 } 00:21:15.488 }, 00:21:15.488 { 00:21:15.488 "method": "nvmf_subsystem_add_host", 00:21:15.488 "params": { 00:21:15.488 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:15.488 "host": "nqn.2016-06.io.spdk:host1", 00:21:15.488 "psk": "/tmp/tmp.4XVnji5A9K" 00:21:15.488 } 00:21:15.488 }, 00:21:15.488 { 00:21:15.488 "method": "nvmf_subsystem_add_ns", 00:21:15.488 "params": { 00:21:15.488 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:15.488 "namespace": { 00:21:15.488 "nsid": 1, 00:21:15.488 "bdev_name": "malloc0", 00:21:15.488 "nguid": "AE5D905027CC4002BA68BEAF39A6807D", 00:21:15.488 "uuid": "ae5d9050-27cc-4002-ba68-beaf39a6807d", 00:21:15.488 "no_auto_visible": false 00:21:15.488 } 00:21:15.488 } 00:21:15.488 }, 00:21:15.488 { 00:21:15.488 "method": "nvmf_subsystem_add_listener", 00:21:15.488 "params": { 00:21:15.488 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:15.488 "listen_address": { 00:21:15.488 "trtype": "TCP", 00:21:15.488 "adrfam": "IPv4", 00:21:15.488 "traddr": "10.0.0.2", 00:21:15.488 "trsvcid": "4420" 00:21:15.488 }, 00:21:15.488 "secure_channel": true 00:21:15.488 } 00:21:15.488 } 00:21:15.488 ] 00:21:15.488 } 00:21:15.488 ] 00:21:15.488 }' 00:21:15.488 10:10:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:15.488 10:10:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:15.488 10:10:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=467940 00:21:15.488 10:10:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:21:15.488 10:10:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 467940 00:21:15.488 10:10:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 467940 ']' 00:21:15.488 10:10:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:15.488 10:10:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:15.488 10:10:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:15.488 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:15.488 10:10:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:15.488 10:10:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:15.488 [2024-07-25 10:10:00.439914] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:21:15.488 [2024-07-25 10:10:00.440009] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:15.488 EAL: No free 2048 kB hugepages reported on node 1 00:21:15.488 [2024-07-25 10:10:00.514505] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:15.488 [2024-07-25 10:10:00.637582] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:21:15.488 [2024-07-25 10:10:00.637641] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:15.488 [2024-07-25 10:10:00.637659] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:15.488 [2024-07-25 10:10:00.637672] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:15.488 [2024-07-25 10:10:00.637684] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:15.488 [2024-07-25 10:10:00.637763] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:15.746 [2024-07-25 10:10:00.882449] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:15.746 [2024-07-25 10:10:00.907190] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:21:16.003 [2024-07-25 10:10:00.923255] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:16.003 [2024-07-25 10:10:00.923540] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:16.567 10:10:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:16.567 10:10:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:21:16.567 10:10:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:16.567 10:10:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:16.567 10:10:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:16.567 10:10:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:16.567 10:10:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=468088 00:21:16.567 10:10:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 468088 /var/tmp/bdevperf.sock 00:21:16.567 10:10:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 468088 ']' 00:21:16.567 10:10:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:16.567 10:10:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@204 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:21:16.567 10:10:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:16.567 10:10:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:21:16.567 "subsystems": [ 00:21:16.567 { 00:21:16.567 "subsystem": "keyring", 00:21:16.567 "config": [] 00:21:16.567 }, 00:21:16.567 { 00:21:16.567 "subsystem": "iobuf", 00:21:16.567 "config": [ 00:21:16.567 { 00:21:16.567 "method": "iobuf_set_options", 00:21:16.567 "params": { 00:21:16.567 "small_pool_count": 8192, 00:21:16.567 "large_pool_count": 1024, 00:21:16.567 "small_bufsize": 8192, 00:21:16.567 "large_bufsize": 135168 00:21:16.567 } 00:21:16.567 } 00:21:16.567 ] 00:21:16.567 }, 00:21:16.567 { 00:21:16.567 "subsystem": "sock", 00:21:16.567 "config": [ 00:21:16.567 { 00:21:16.567 "method": "sock_set_default_impl", 00:21:16.567 "params": { 00:21:16.567 "impl_name": "posix" 00:21:16.567 } 00:21:16.567 }, 00:21:16.567 
{ 00:21:16.567 "method": "sock_impl_set_options", 00:21:16.567 "params": { 00:21:16.567 "impl_name": "ssl", 00:21:16.567 "recv_buf_size": 4096, 00:21:16.567 "send_buf_size": 4096, 00:21:16.567 "enable_recv_pipe": true, 00:21:16.567 "enable_quickack": false, 00:21:16.567 "enable_placement_id": 0, 00:21:16.567 "enable_zerocopy_send_server": true, 00:21:16.567 "enable_zerocopy_send_client": false, 00:21:16.567 "zerocopy_threshold": 0, 00:21:16.567 "tls_version": 0, 00:21:16.567 "enable_ktls": false 00:21:16.567 } 00:21:16.567 }, 00:21:16.567 { 00:21:16.567 "method": "sock_impl_set_options", 00:21:16.567 "params": { 00:21:16.567 "impl_name": "posix", 00:21:16.567 "recv_buf_size": 2097152, 00:21:16.567 "send_buf_size": 2097152, 00:21:16.567 "enable_recv_pipe": true, 00:21:16.567 "enable_quickack": false, 00:21:16.567 "enable_placement_id": 0, 00:21:16.567 "enable_zerocopy_send_server": true, 00:21:16.567 "enable_zerocopy_send_client": false, 00:21:16.567 "zerocopy_threshold": 0, 00:21:16.567 "tls_version": 0, 00:21:16.567 "enable_ktls": false 00:21:16.567 } 00:21:16.567 } 00:21:16.567 ] 00:21:16.567 }, 00:21:16.567 { 00:21:16.568 "subsystem": "vmd", 00:21:16.568 "config": [] 00:21:16.568 }, 00:21:16.568 { 00:21:16.568 "subsystem": "accel", 00:21:16.568 "config": [ 00:21:16.568 { 00:21:16.568 "method": "accel_set_options", 00:21:16.568 "params": { 00:21:16.568 "small_cache_size": 128, 00:21:16.568 "large_cache_size": 16, 00:21:16.568 "task_count": 2048, 00:21:16.568 "sequence_count": 2048, 00:21:16.568 "buf_count": 2048 00:21:16.568 } 00:21:16.568 } 00:21:16.568 ] 00:21:16.568 }, 00:21:16.568 { 00:21:16.568 "subsystem": "bdev", 00:21:16.568 "config": [ 00:21:16.568 { 00:21:16.568 "method": "bdev_set_options", 00:21:16.568 "params": { 00:21:16.568 "bdev_io_pool_size": 65535, 00:21:16.568 "bdev_io_cache_size": 256, 00:21:16.568 "bdev_auto_examine": true, 00:21:16.568 "iobuf_small_cache_size": 128, 00:21:16.568 "iobuf_large_cache_size": 16 00:21:16.568 } 00:21:16.568 }, 00:21:16.568 { 00:21:16.568 "method": "bdev_raid_set_options", 00:21:16.568 "params": { 00:21:16.568 "process_window_size_kb": 1024, 00:21:16.568 "process_max_bandwidth_mb_sec": 0 00:21:16.568 } 00:21:16.568 }, 00:21:16.568 { 00:21:16.568 "method": "bdev_iscsi_set_options", 00:21:16.568 "params": { 00:21:16.568 "timeout_sec": 30 00:21:16.568 } 00:21:16.568 }, 00:21:16.568 { 00:21:16.568 "method": "bdev_nvme_set_options", 00:21:16.568 "params": { 00:21:16.568 "action_on_timeout": "none", 00:21:16.568 "timeout_us": 0, 00:21:16.568 "timeout_admin_us": 0, 00:21:16.568 "keep_alive_timeout_ms": 10000, 00:21:16.568 "arbitration_burst": 0, 00:21:16.568 "low_priority_weight": 0, 00:21:16.568 "medium_priority_weight": 0, 00:21:16.568 "high_priority_weight": 0, 00:21:16.568 "nvme_adminq_poll_period_us": 10000, 00:21:16.568 "nvme_ioq_poll_period_us": 0, 00:21:16.568 "io_queue_requests": 512, 00:21:16.568 "delay_cmd_submit": true, 00:21:16.568 "transport_retry_count": 4, 00:21:16.568 "bdev_retry_count": 3, 00:21:16.568 "transport_ack_timeout": 0, 00:21:16.568 "ctrlr_loss_timeout_sec": 0, 00:21:16.568 "reconnect_delay_sec": 0, 00:21:16.568 "fast_io_fail_timeout_sec": 0, 00:21:16.568 "disable_auto_failback": false, 00:21:16.568 "generate_uuids": false, 00:21:16.568 "transport_tos": 0, 00:21:16.568 "nvme_error_stat": false, 00:21:16.568 "rdma_srq_size": 0, 00:21:16.568 "io_path_stat": false, 00:21:16.568 "allow_accel_sequence": false, 00:21:16.568 "rdma_max_cq_size": 0, 00:21:16.568 "rdma_cm_event_timeout_ms": 0, 00:21:16.568 "dhchap_digests": [ 
00:21:16.568 "sha256", 00:21:16.568 "sha384", 00:21:16.568 "sha512" 00:21:16.568 ], 00:21:16.568 "dhchap_dhgroups": [ 00:21:16.568 "null", 00:21:16.568 "ffdhe2048", 00:21:16.568 "ffdhe3072", 00:21:16.568 "ffdhe4096", 00:21:16.568 "ffdhe6144", 00:21:16.568 "ffdhe8192" 00:21:16.568 ] 00:21:16.568 } 00:21:16.568 }, 00:21:16.568 { 00:21:16.568 "method": "bdev_nvme_attach_controller", 00:21:16.568 "params": { 00:21:16.568 "name": "TLSTEST", 00:21:16.568 "trtype": "TCP", 00:21:16.568 "adrfam": "IPv4", 00:21:16.568 "traddr": "10.0.0.2", 00:21:16.568 "trsvcid": "4420", 00:21:16.568 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:16.568 "prchk_reftag": false, 00:21:16.568 "prchk_guard": false, 00:21:16.568 "ctrlr_loss_timeout_sec": 0, 00:21:16.568 "reconnect_delay_sec": 0, 00:21:16.568 "fast_io_fail_timeout_sec": 0, 00:21:16.568 "psk": "/tmp/tmp.4XVnji5A9K", 00:21:16.568 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:16.568 "hdgst": false, 00:21:16.568 "ddgst": false 00:21:16.568 } 00:21:16.568 }, 00:21:16.568 { 00:21:16.568 "method": "bdev_nvme_set_hotplug", 00:21:16.568 "params": { 00:21:16.568 "period_us": 100000, 00:21:16.568 "enable": false 00:21:16.568 } 00:21:16.568 }, 00:21:16.568 { 00:21:16.568 "method": "bdev_wait_for_examine" 00:21:16.568 } 00:21:16.568 ] 00:21:16.568 }, 00:21:16.568 { 00:21:16.568 "subsystem": "nbd", 00:21:16.568 "config": [] 00:21:16.568 } 00:21:16.568 ] 00:21:16.568 }' 00:21:16.568 10:10:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:16.568 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:16.568 10:10:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:16.568 10:10:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:16.568 [2024-07-25 10:10:01.552992] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:21:16.568 [2024-07-25 10:10:01.553097] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid468088 ] 00:21:16.568 EAL: No free 2048 kB hugepages reported on node 1 00:21:16.568 [2024-07-25 10:10:01.629746] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:16.825 [2024-07-25 10:10:01.757897] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:16.825 [2024-07-25 10:10:01.925105] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:16.825 [2024-07-25 10:10:01.925230] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:17.083 10:10:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:17.083 10:10:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:21:17.083 10:10:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@211 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:21:17.083 Running I/O for 10 seconds... 
00:21:29.275 00:21:29.275 Latency(us) 00:21:29.275 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:29.275 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:29.275 Verification LBA range: start 0x0 length 0x2000 00:21:29.275 TLSTESTn1 : 10.03 3376.28 13.19 0.00 0.00 37825.27 6213.78 65633.09 00:21:29.275 =================================================================================================================== 00:21:29.275 Total : 3376.28 13.19 0.00 0.00 37825.27 6213.78 65633.09 00:21:29.275 0 00:21:29.275 10:10:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:29.275 10:10:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@214 -- # killprocess 468088 00:21:29.275 10:10:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 468088 ']' 00:21:29.275 10:10:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 468088 00:21:29.275 10:10:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:21:29.275 10:10:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:29.275 10:10:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 468088 00:21:29.275 10:10:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:21:29.275 10:10:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:21:29.275 10:10:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 468088' 00:21:29.275 killing process with pid 468088 00:21:29.275 10:10:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 468088 00:21:29.275 Received shutdown signal, test time was about 10.000000 seconds 00:21:29.275 00:21:29.275 Latency(us) 00:21:29.275 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:29.275 =================================================================================================================== 00:21:29.275 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:29.275 [2024-07-25 10:10:12.283595] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:21:29.275 10:10:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 468088 00:21:29.275 10:10:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # killprocess 467940 00:21:29.275 10:10:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 467940 ']' 00:21:29.275 10:10:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 467940 00:21:29.275 10:10:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:21:29.275 10:10:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:29.275 10:10:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 467940 00:21:29.275 10:10:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:21:29.275 10:10:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:21:29.275 10:10:12 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- common/autotest_common.sh@968 -- # echo 'killing process with pid 467940' 00:21:29.275 killing process with pid 467940 00:21:29.275 10:10:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 467940 00:21:29.275 [2024-07-25 10:10:12.586092] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:21:29.275 10:10:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 467940 00:21:29.275 10:10:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:21:29.275 10:10:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:29.275 10:10:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:29.275 10:10:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:29.275 10:10:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=469412 00:21:29.275 10:10:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:21:29.275 10:10:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 469412 00:21:29.275 10:10:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 469412 ']' 00:21:29.275 10:10:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:29.275 10:10:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:29.275 10:10:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:29.275 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:29.275 10:10:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:29.275 10:10:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:29.275 [2024-07-25 10:10:12.957922] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:21:29.275 [2024-07-25 10:10:12.958016] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:29.275 EAL: No free 2048 kB hugepages reported on node 1 00:21:29.275 [2024-07-25 10:10:13.033129] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:29.275 [2024-07-25 10:10:13.154075] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:29.275 [2024-07-25 10:10:13.154137] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:29.275 [2024-07-25 10:10:13.154154] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:29.275 [2024-07-25 10:10:13.154168] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:29.275 [2024-07-25 10:10:13.154179] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:29.275 [2024-07-25 10:10:13.154211] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:29.275 10:10:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:29.275 10:10:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:21:29.275 10:10:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:29.275 10:10:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:29.275 10:10:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:29.275 10:10:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:29.275 10:10:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.4XVnji5A9K 00:21:29.275 10:10:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.4XVnji5A9K 00:21:29.275 10:10:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:29.275 [2024-07-25 10:10:13.578872] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:29.275 10:10:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:29.276 10:10:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:29.276 [2024-07-25 10:10:14.256680] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:29.276 [2024-07-25 10:10:14.256945] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:29.276 10:10:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:29.533 malloc0 00:21:29.791 10:10:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:30.365 10:10:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.4XVnji5A9K 00:21:30.931 [2024-07-25 10:10:15.921606] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:21:30.931 10:10:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=469823 00:21:30.931 10:10:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:21:30.931 10:10:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:30.931 10:10:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 469823 /var/tmp/bdevperf.sock 00:21:30.931 10:10:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' 
-z 469823 ']' 00:21:30.931 10:10:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:30.931 10:10:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:30.931 10:10:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:30.931 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:30.931 10:10:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:30.931 10:10:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:30.931 [2024-07-25 10:10:15.992265] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:21:30.931 [2024-07-25 10:10:15.992348] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid469823 ] 00:21:30.931 EAL: No free 2048 kB hugepages reported on node 1 00:21:30.931 [2024-07-25 10:10:16.060063] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:31.189 [2024-07-25 10:10:16.181610] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:31.189 10:10:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:31.189 10:10:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:21:31.189 10:10:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.4XVnji5A9K 00:21:31.755 10:10:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@228 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:21:32.013 [2024-07-25 10:10:17.116281] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:32.271 nvme0n1 00:21:32.271 10:10:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@232 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:32.271 Running I/O for 1 seconds... 
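Condensed from the trace above, the keyring-based client attach is two RPCs: register the PSK file as a named key, then reference that key by name when attaching the NVMe-oF controller over TCP.

  scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.4XVnji5A9K
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1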
00:21:33.644 00:21:33.644 Latency(us) 00:21:33.644 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:33.644 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:21:33.644 Verification LBA range: start 0x0 length 0x2000 00:21:33.644 nvme0n1 : 1.04 2392.82 9.35 0.00 0.00 52455.90 11456.66 73400.32 00:21:33.644 =================================================================================================================== 00:21:33.644 Total : 2392.82 9.35 0.00 0.00 52455.90 11456.66 73400.32 00:21:33.644 0 00:21:33.644 10:10:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # killprocess 469823 00:21:33.644 10:10:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 469823 ']' 00:21:33.644 10:10:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 469823 00:21:33.644 10:10:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:21:33.644 10:10:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:33.644 10:10:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 469823 00:21:33.644 10:10:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:21:33.644 10:10:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:21:33.644 10:10:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 469823' 00:21:33.644 killing process with pid 469823 00:21:33.644 10:10:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 469823 00:21:33.644 Received shutdown signal, test time was about 1.000000 seconds 00:21:33.644 00:21:33.644 Latency(us) 00:21:33.644 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:33.644 =================================================================================================================== 00:21:33.644 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:33.644 10:10:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 469823 00:21:33.644 10:10:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@235 -- # killprocess 469412 00:21:33.644 10:10:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 469412 ']' 00:21:33.644 10:10:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 469412 00:21:33.644 10:10:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:21:33.644 10:10:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:33.644 10:10:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 469412 00:21:33.644 10:10:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:33.644 10:10:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:33.644 10:10:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 469412' 00:21:33.644 killing process with pid 469412 00:21:33.644 10:10:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 469412 00:21:33.644 [2024-07-25 10:10:18.772946] app.c:1024:log_deprecation_hits: *WARNING*: 
nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:21:33.644 10:10:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 469412 00:21:34.210 10:10:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@240 -- # nvmfappstart 00:21:34.210 10:10:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:34.210 10:10:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:34.210 10:10:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:34.210 10:10:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=470224 00:21:34.210 10:10:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:21:34.210 10:10:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 470224 00:21:34.210 10:10:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 470224 ']' 00:21:34.210 10:10:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:34.210 10:10:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:34.210 10:10:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:34.210 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:34.210 10:10:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:34.210 10:10:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:34.210 [2024-07-25 10:10:19.141751] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:21:34.210 [2024-07-25 10:10:19.141868] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:34.210 EAL: No free 2048 kB hugepages reported on node 1 00:21:34.210 [2024-07-25 10:10:19.223960] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:34.210 [2024-07-25 10:10:19.344554] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:34.210 [2024-07-25 10:10:19.344624] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:34.210 [2024-07-25 10:10:19.344641] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:34.210 [2024-07-25 10:10:19.344655] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:34.210 [2024-07-25 10:10:19.344667] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
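For reference, the target-side TLS bring-up this test exercises (the setup_nvmf_tgt sequence at 10:10:13 above, repeated below via rpc_cmd) reduces to the following sketch: create the TCP transport, back a subsystem with a malloc bdev, pin the host to its PSK, and add the listener with -k so it negotiates TLS.

  scripts/rpc.py nvmf_create_transport -t tcp -o
  scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.4XVnji5A9K
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k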
00:21:34.210 [2024-07-25 10:10:19.344699] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:34.468 10:10:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:34.468 10:10:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:21:34.468 10:10:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:34.468 10:10:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:34.468 10:10:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:34.468 10:10:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:34.468 10:10:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@241 -- # rpc_cmd 00:21:34.468 10:10:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:34.468 10:10:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:34.468 [2024-07-25 10:10:19.505192] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:34.468 malloc0 00:21:34.468 [2024-07-25 10:10:19.538285] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:34.468 [2024-07-25 10:10:19.545673] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:34.468 10:10:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:34.468 10:10:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # bdevperf_pid=470256 00:21:34.468 10:10:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@252 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:21:34.468 10:10:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # waitforlisten 470256 /var/tmp/bdevperf.sock 00:21:34.468 10:10:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 470256 ']' 00:21:34.468 10:10:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:34.468 10:10:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:34.468 10:10:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:34.468 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:34.468 10:10:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:34.468 10:10:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:34.468 [2024-07-25 10:10:19.616570] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:21:34.468 [2024-07-25 10:10:19.616646] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid470256 ] 00:21:34.726 EAL: No free 2048 kB hugepages reported on node 1 00:21:34.726 [2024-07-25 10:10:19.684212] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:34.726 [2024-07-25 10:10:19.805652] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:34.984 10:10:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:34.984 10:10:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:21:34.984 10:10:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@257 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.4XVnji5A9K 00:21:35.241 10:10:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:21:35.498 [2024-07-25 10:10:20.573828] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:35.499 nvme0n1 00:21:35.499 10:10:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@262 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:35.756 Running I/O for 1 seconds... 00:21:36.689 00:21:36.689 Latency(us) 00:21:36.689 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:36.689 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:21:36.689 Verification LBA range: start 0x0 length 0x2000 00:21:36.689 nvme0n1 : 1.04 2849.25 11.13 0.00 0.00 44129.99 10971.21 64079.64 00:21:36.689 =================================================================================================================== 00:21:36.689 Total : 2849.25 11.13 0.00 0.00 44129.99 10971.21 64079.64 00:21:36.689 0 00:21:36.689 10:10:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@265 -- # rpc_cmd save_config 00:21:36.689 10:10:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:36.689 10:10:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:36.947 10:10:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:36.947 10:10:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@265 -- # tgtcfg='{ 00:21:36.947 "subsystems": [ 00:21:36.947 { 00:21:36.947 "subsystem": "keyring", 00:21:36.947 "config": [ 00:21:36.947 { 00:21:36.947 "method": "keyring_file_add_key", 00:21:36.947 "params": { 00:21:36.947 "name": "key0", 00:21:36.947 "path": "/tmp/tmp.4XVnji5A9K" 00:21:36.947 } 00:21:36.947 } 00:21:36.947 ] 00:21:36.947 }, 00:21:36.947 { 00:21:36.947 "subsystem": "iobuf", 00:21:36.947 "config": [ 00:21:36.947 { 00:21:36.947 "method": "iobuf_set_options", 00:21:36.947 "params": { 00:21:36.947 "small_pool_count": 8192, 00:21:36.947 "large_pool_count": 1024, 00:21:36.947 "small_bufsize": 8192, 00:21:36.947 "large_bufsize": 135168 00:21:36.947 } 00:21:36.947 } 00:21:36.947 ] 00:21:36.947 }, 00:21:36.947 { 00:21:36.947 
"subsystem": "sock", 00:21:36.947 "config": [ 00:21:36.947 { 00:21:36.947 "method": "sock_set_default_impl", 00:21:36.947 "params": { 00:21:36.947 "impl_name": "posix" 00:21:36.947 } 00:21:36.947 }, 00:21:36.947 { 00:21:36.947 "method": "sock_impl_set_options", 00:21:36.947 "params": { 00:21:36.947 "impl_name": "ssl", 00:21:36.947 "recv_buf_size": 4096, 00:21:36.947 "send_buf_size": 4096, 00:21:36.947 "enable_recv_pipe": true, 00:21:36.947 "enable_quickack": false, 00:21:36.947 "enable_placement_id": 0, 00:21:36.947 "enable_zerocopy_send_server": true, 00:21:36.947 "enable_zerocopy_send_client": false, 00:21:36.947 "zerocopy_threshold": 0, 00:21:36.947 "tls_version": 0, 00:21:36.947 "enable_ktls": false 00:21:36.947 } 00:21:36.947 }, 00:21:36.947 { 00:21:36.947 "method": "sock_impl_set_options", 00:21:36.947 "params": { 00:21:36.947 "impl_name": "posix", 00:21:36.947 "recv_buf_size": 2097152, 00:21:36.947 "send_buf_size": 2097152, 00:21:36.947 "enable_recv_pipe": true, 00:21:36.947 "enable_quickack": false, 00:21:36.947 "enable_placement_id": 0, 00:21:36.947 "enable_zerocopy_send_server": true, 00:21:36.947 "enable_zerocopy_send_client": false, 00:21:36.947 "zerocopy_threshold": 0, 00:21:36.947 "tls_version": 0, 00:21:36.947 "enable_ktls": false 00:21:36.947 } 00:21:36.947 } 00:21:36.947 ] 00:21:36.947 }, 00:21:36.947 { 00:21:36.947 "subsystem": "vmd", 00:21:36.947 "config": [] 00:21:36.947 }, 00:21:36.947 { 00:21:36.947 "subsystem": "accel", 00:21:36.947 "config": [ 00:21:36.947 { 00:21:36.947 "method": "accel_set_options", 00:21:36.947 "params": { 00:21:36.947 "small_cache_size": 128, 00:21:36.947 "large_cache_size": 16, 00:21:36.947 "task_count": 2048, 00:21:36.947 "sequence_count": 2048, 00:21:36.947 "buf_count": 2048 00:21:36.947 } 00:21:36.947 } 00:21:36.947 ] 00:21:36.947 }, 00:21:36.947 { 00:21:36.947 "subsystem": "bdev", 00:21:36.947 "config": [ 00:21:36.947 { 00:21:36.947 "method": "bdev_set_options", 00:21:36.947 "params": { 00:21:36.947 "bdev_io_pool_size": 65535, 00:21:36.947 "bdev_io_cache_size": 256, 00:21:36.947 "bdev_auto_examine": true, 00:21:36.947 "iobuf_small_cache_size": 128, 00:21:36.947 "iobuf_large_cache_size": 16 00:21:36.947 } 00:21:36.947 }, 00:21:36.947 { 00:21:36.947 "method": "bdev_raid_set_options", 00:21:36.947 "params": { 00:21:36.947 "process_window_size_kb": 1024, 00:21:36.947 "process_max_bandwidth_mb_sec": 0 00:21:36.947 } 00:21:36.947 }, 00:21:36.947 { 00:21:36.947 "method": "bdev_iscsi_set_options", 00:21:36.947 "params": { 00:21:36.947 "timeout_sec": 30 00:21:36.947 } 00:21:36.947 }, 00:21:36.947 { 00:21:36.947 "method": "bdev_nvme_set_options", 00:21:36.947 "params": { 00:21:36.947 "action_on_timeout": "none", 00:21:36.947 "timeout_us": 0, 00:21:36.947 "timeout_admin_us": 0, 00:21:36.947 "keep_alive_timeout_ms": 10000, 00:21:36.947 "arbitration_burst": 0, 00:21:36.947 "low_priority_weight": 0, 00:21:36.947 "medium_priority_weight": 0, 00:21:36.947 "high_priority_weight": 0, 00:21:36.947 "nvme_adminq_poll_period_us": 10000, 00:21:36.947 "nvme_ioq_poll_period_us": 0, 00:21:36.947 "io_queue_requests": 0, 00:21:36.947 "delay_cmd_submit": true, 00:21:36.947 "transport_retry_count": 4, 00:21:36.947 "bdev_retry_count": 3, 00:21:36.947 "transport_ack_timeout": 0, 00:21:36.947 "ctrlr_loss_timeout_sec": 0, 00:21:36.947 "reconnect_delay_sec": 0, 00:21:36.947 "fast_io_fail_timeout_sec": 0, 00:21:36.947 "disable_auto_failback": false, 00:21:36.947 "generate_uuids": false, 00:21:36.948 "transport_tos": 0, 00:21:36.948 "nvme_error_stat": false, 00:21:36.948 
"rdma_srq_size": 0, 00:21:36.948 "io_path_stat": false, 00:21:36.948 "allow_accel_sequence": false, 00:21:36.948 "rdma_max_cq_size": 0, 00:21:36.948 "rdma_cm_event_timeout_ms": 0, 00:21:36.948 "dhchap_digests": [ 00:21:36.948 "sha256", 00:21:36.948 "sha384", 00:21:36.948 "sha512" 00:21:36.948 ], 00:21:36.948 "dhchap_dhgroups": [ 00:21:36.948 "null", 00:21:36.948 "ffdhe2048", 00:21:36.948 "ffdhe3072", 00:21:36.948 "ffdhe4096", 00:21:36.948 "ffdhe6144", 00:21:36.948 "ffdhe8192" 00:21:36.948 ] 00:21:36.948 } 00:21:36.948 }, 00:21:36.948 { 00:21:36.948 "method": "bdev_nvme_set_hotplug", 00:21:36.948 "params": { 00:21:36.948 "period_us": 100000, 00:21:36.948 "enable": false 00:21:36.948 } 00:21:36.948 }, 00:21:36.948 { 00:21:36.948 "method": "bdev_malloc_create", 00:21:36.948 "params": { 00:21:36.948 "name": "malloc0", 00:21:36.948 "num_blocks": 8192, 00:21:36.948 "block_size": 4096, 00:21:36.948 "physical_block_size": 4096, 00:21:36.948 "uuid": "8179d275-6e5b-4a7f-a13f-f43583d636c8", 00:21:36.948 "optimal_io_boundary": 0, 00:21:36.948 "md_size": 0, 00:21:36.948 "dif_type": 0, 00:21:36.948 "dif_is_head_of_md": false, 00:21:36.948 "dif_pi_format": 0 00:21:36.948 } 00:21:36.948 }, 00:21:36.948 { 00:21:36.948 "method": "bdev_wait_for_examine" 00:21:36.948 } 00:21:36.948 ] 00:21:36.948 }, 00:21:36.948 { 00:21:36.948 "subsystem": "nbd", 00:21:36.948 "config": [] 00:21:36.948 }, 00:21:36.948 { 00:21:36.948 "subsystem": "scheduler", 00:21:36.948 "config": [ 00:21:36.948 { 00:21:36.948 "method": "framework_set_scheduler", 00:21:36.948 "params": { 00:21:36.948 "name": "static" 00:21:36.948 } 00:21:36.948 } 00:21:36.948 ] 00:21:36.948 }, 00:21:36.948 { 00:21:36.948 "subsystem": "nvmf", 00:21:36.948 "config": [ 00:21:36.948 { 00:21:36.948 "method": "nvmf_set_config", 00:21:36.948 "params": { 00:21:36.948 "discovery_filter": "match_any", 00:21:36.948 "admin_cmd_passthru": { 00:21:36.948 "identify_ctrlr": false 00:21:36.948 } 00:21:36.948 } 00:21:36.948 }, 00:21:36.948 { 00:21:36.948 "method": "nvmf_set_max_subsystems", 00:21:36.948 "params": { 00:21:36.948 "max_subsystems": 1024 00:21:36.948 } 00:21:36.948 }, 00:21:36.948 { 00:21:36.948 "method": "nvmf_set_crdt", 00:21:36.948 "params": { 00:21:36.948 "crdt1": 0, 00:21:36.948 "crdt2": 0, 00:21:36.948 "crdt3": 0 00:21:36.948 } 00:21:36.948 }, 00:21:36.948 { 00:21:36.948 "method": "nvmf_create_transport", 00:21:36.948 "params": { 00:21:36.948 "trtype": "TCP", 00:21:36.948 "max_queue_depth": 128, 00:21:36.948 "max_io_qpairs_per_ctrlr": 127, 00:21:36.948 "in_capsule_data_size": 4096, 00:21:36.948 "max_io_size": 131072, 00:21:36.948 "io_unit_size": 131072, 00:21:36.948 "max_aq_depth": 128, 00:21:36.948 "num_shared_buffers": 511, 00:21:36.948 "buf_cache_size": 4294967295, 00:21:36.948 "dif_insert_or_strip": false, 00:21:36.948 "zcopy": false, 00:21:36.948 "c2h_success": false, 00:21:36.948 "sock_priority": 0, 00:21:36.948 "abort_timeout_sec": 1, 00:21:36.948 "ack_timeout": 0, 00:21:36.948 "data_wr_pool_size": 0 00:21:36.948 } 00:21:36.948 }, 00:21:36.948 { 00:21:36.948 "method": "nvmf_create_subsystem", 00:21:36.948 "params": { 00:21:36.948 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:36.948 "allow_any_host": false, 00:21:36.948 "serial_number": "00000000000000000000", 00:21:36.948 "model_number": "SPDK bdev Controller", 00:21:36.948 "max_namespaces": 32, 00:21:36.948 "min_cntlid": 1, 00:21:36.948 "max_cntlid": 65519, 00:21:36.948 "ana_reporting": false 00:21:36.948 } 00:21:36.948 }, 00:21:36.948 { 00:21:36.948 "method": "nvmf_subsystem_add_host", 00:21:36.948 
"params": { 00:21:36.948 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:36.948 "host": "nqn.2016-06.io.spdk:host1", 00:21:36.948 "psk": "key0" 00:21:36.948 } 00:21:36.948 }, 00:21:36.948 { 00:21:36.948 "method": "nvmf_subsystem_add_ns", 00:21:36.948 "params": { 00:21:36.948 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:36.948 "namespace": { 00:21:36.948 "nsid": 1, 00:21:36.948 "bdev_name": "malloc0", 00:21:36.948 "nguid": "8179D2756E5B4A7FA13FF43583D636C8", 00:21:36.948 "uuid": "8179d275-6e5b-4a7f-a13f-f43583d636c8", 00:21:36.948 "no_auto_visible": false 00:21:36.948 } 00:21:36.948 } 00:21:36.948 }, 00:21:36.948 { 00:21:36.948 "method": "nvmf_subsystem_add_listener", 00:21:36.948 "params": { 00:21:36.948 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:36.948 "listen_address": { 00:21:36.948 "trtype": "TCP", 00:21:36.948 "adrfam": "IPv4", 00:21:36.948 "traddr": "10.0.0.2", 00:21:36.948 "trsvcid": "4420" 00:21:36.948 }, 00:21:36.948 "secure_channel": false, 00:21:36.948 "sock_impl": "ssl" 00:21:36.948 } 00:21:36.948 } 00:21:36.948 ] 00:21:36.948 } 00:21:36.948 ] 00:21:36.948 }' 00:21:36.948 10:10:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@266 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:21:37.206 10:10:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@266 -- # bperfcfg='{ 00:21:37.206 "subsystems": [ 00:21:37.207 { 00:21:37.207 "subsystem": "keyring", 00:21:37.207 "config": [ 00:21:37.207 { 00:21:37.207 "method": "keyring_file_add_key", 00:21:37.207 "params": { 00:21:37.207 "name": "key0", 00:21:37.207 "path": "/tmp/tmp.4XVnji5A9K" 00:21:37.207 } 00:21:37.207 } 00:21:37.207 ] 00:21:37.207 }, 00:21:37.207 { 00:21:37.207 "subsystem": "iobuf", 00:21:37.207 "config": [ 00:21:37.207 { 00:21:37.207 "method": "iobuf_set_options", 00:21:37.207 "params": { 00:21:37.207 "small_pool_count": 8192, 00:21:37.207 "large_pool_count": 1024, 00:21:37.207 "small_bufsize": 8192, 00:21:37.207 "large_bufsize": 135168 00:21:37.207 } 00:21:37.207 } 00:21:37.207 ] 00:21:37.207 }, 00:21:37.207 { 00:21:37.207 "subsystem": "sock", 00:21:37.207 "config": [ 00:21:37.207 { 00:21:37.207 "method": "sock_set_default_impl", 00:21:37.207 "params": { 00:21:37.207 "impl_name": "posix" 00:21:37.207 } 00:21:37.207 }, 00:21:37.207 { 00:21:37.207 "method": "sock_impl_set_options", 00:21:37.207 "params": { 00:21:37.207 "impl_name": "ssl", 00:21:37.207 "recv_buf_size": 4096, 00:21:37.207 "send_buf_size": 4096, 00:21:37.207 "enable_recv_pipe": true, 00:21:37.207 "enable_quickack": false, 00:21:37.207 "enable_placement_id": 0, 00:21:37.207 "enable_zerocopy_send_server": true, 00:21:37.207 "enable_zerocopy_send_client": false, 00:21:37.207 "zerocopy_threshold": 0, 00:21:37.207 "tls_version": 0, 00:21:37.207 "enable_ktls": false 00:21:37.207 } 00:21:37.207 }, 00:21:37.207 { 00:21:37.207 "method": "sock_impl_set_options", 00:21:37.207 "params": { 00:21:37.207 "impl_name": "posix", 00:21:37.207 "recv_buf_size": 2097152, 00:21:37.207 "send_buf_size": 2097152, 00:21:37.207 "enable_recv_pipe": true, 00:21:37.207 "enable_quickack": false, 00:21:37.207 "enable_placement_id": 0, 00:21:37.207 "enable_zerocopy_send_server": true, 00:21:37.207 "enable_zerocopy_send_client": false, 00:21:37.207 "zerocopy_threshold": 0, 00:21:37.207 "tls_version": 0, 00:21:37.207 "enable_ktls": false 00:21:37.207 } 00:21:37.207 } 00:21:37.207 ] 00:21:37.207 }, 00:21:37.207 { 00:21:37.207 "subsystem": "vmd", 00:21:37.207 "config": [] 00:21:37.207 }, 00:21:37.207 { 00:21:37.207 "subsystem": 
"accel", 00:21:37.207 "config": [ 00:21:37.207 { 00:21:37.207 "method": "accel_set_options", 00:21:37.207 "params": { 00:21:37.207 "small_cache_size": 128, 00:21:37.207 "large_cache_size": 16, 00:21:37.207 "task_count": 2048, 00:21:37.207 "sequence_count": 2048, 00:21:37.207 "buf_count": 2048 00:21:37.207 } 00:21:37.207 } 00:21:37.207 ] 00:21:37.207 }, 00:21:37.207 { 00:21:37.207 "subsystem": "bdev", 00:21:37.207 "config": [ 00:21:37.207 { 00:21:37.207 "method": "bdev_set_options", 00:21:37.207 "params": { 00:21:37.207 "bdev_io_pool_size": 65535, 00:21:37.207 "bdev_io_cache_size": 256, 00:21:37.207 "bdev_auto_examine": true, 00:21:37.207 "iobuf_small_cache_size": 128, 00:21:37.207 "iobuf_large_cache_size": 16 00:21:37.207 } 00:21:37.207 }, 00:21:37.207 { 00:21:37.207 "method": "bdev_raid_set_options", 00:21:37.207 "params": { 00:21:37.207 "process_window_size_kb": 1024, 00:21:37.207 "process_max_bandwidth_mb_sec": 0 00:21:37.207 } 00:21:37.207 }, 00:21:37.207 { 00:21:37.207 "method": "bdev_iscsi_set_options", 00:21:37.207 "params": { 00:21:37.207 "timeout_sec": 30 00:21:37.207 } 00:21:37.207 }, 00:21:37.207 { 00:21:37.207 "method": "bdev_nvme_set_options", 00:21:37.207 "params": { 00:21:37.207 "action_on_timeout": "none", 00:21:37.207 "timeout_us": 0, 00:21:37.207 "timeout_admin_us": 0, 00:21:37.207 "keep_alive_timeout_ms": 10000, 00:21:37.207 "arbitration_burst": 0, 00:21:37.207 "low_priority_weight": 0, 00:21:37.207 "medium_priority_weight": 0, 00:21:37.207 "high_priority_weight": 0, 00:21:37.207 "nvme_adminq_poll_period_us": 10000, 00:21:37.207 "nvme_ioq_poll_period_us": 0, 00:21:37.207 "io_queue_requests": 512, 00:21:37.207 "delay_cmd_submit": true, 00:21:37.207 "transport_retry_count": 4, 00:21:37.207 "bdev_retry_count": 3, 00:21:37.207 "transport_ack_timeout": 0, 00:21:37.207 "ctrlr_loss_timeout_sec": 0, 00:21:37.207 "reconnect_delay_sec": 0, 00:21:37.207 "fast_io_fail_timeout_sec": 0, 00:21:37.207 "disable_auto_failback": false, 00:21:37.207 "generate_uuids": false, 00:21:37.207 "transport_tos": 0, 00:21:37.207 "nvme_error_stat": false, 00:21:37.207 "rdma_srq_size": 0, 00:21:37.207 "io_path_stat": false, 00:21:37.207 "allow_accel_sequence": false, 00:21:37.207 "rdma_max_cq_size": 0, 00:21:37.207 "rdma_cm_event_timeout_ms": 0, 00:21:37.207 "dhchap_digests": [ 00:21:37.207 "sha256", 00:21:37.207 "sha384", 00:21:37.207 "sha512" 00:21:37.207 ], 00:21:37.207 "dhchap_dhgroups": [ 00:21:37.207 "null", 00:21:37.207 "ffdhe2048", 00:21:37.207 "ffdhe3072", 00:21:37.207 "ffdhe4096", 00:21:37.207 "ffdhe6144", 00:21:37.207 "ffdhe8192" 00:21:37.207 ] 00:21:37.207 } 00:21:37.207 }, 00:21:37.207 { 00:21:37.207 "method": "bdev_nvme_attach_controller", 00:21:37.207 "params": { 00:21:37.207 "name": "nvme0", 00:21:37.207 "trtype": "TCP", 00:21:37.207 "adrfam": "IPv4", 00:21:37.207 "traddr": "10.0.0.2", 00:21:37.207 "trsvcid": "4420", 00:21:37.207 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:37.207 "prchk_reftag": false, 00:21:37.207 "prchk_guard": false, 00:21:37.207 "ctrlr_loss_timeout_sec": 0, 00:21:37.207 "reconnect_delay_sec": 0, 00:21:37.207 "fast_io_fail_timeout_sec": 0, 00:21:37.207 "psk": "key0", 00:21:37.207 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:37.207 "hdgst": false, 00:21:37.207 "ddgst": false 00:21:37.207 } 00:21:37.207 }, 00:21:37.207 { 00:21:37.207 "method": "bdev_nvme_set_hotplug", 00:21:37.207 "params": { 00:21:37.207 "period_us": 100000, 00:21:37.207 "enable": false 00:21:37.207 } 00:21:37.207 }, 00:21:37.207 { 00:21:37.207 "method": "bdev_enable_histogram", 00:21:37.207 
"params": { 00:21:37.207 "name": "nvme0n1", 00:21:37.207 "enable": true 00:21:37.207 } 00:21:37.207 }, 00:21:37.207 { 00:21:37.207 "method": "bdev_wait_for_examine" 00:21:37.207 } 00:21:37.207 ] 00:21:37.207 }, 00:21:37.207 { 00:21:37.207 "subsystem": "nbd", 00:21:37.207 "config": [] 00:21:37.207 } 00:21:37.207 ] 00:21:37.207 }' 00:21:37.207 10:10:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # killprocess 470256 00:21:37.207 10:10:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 470256 ']' 00:21:37.207 10:10:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 470256 00:21:37.207 10:10:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:21:37.207 10:10:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:37.207 10:10:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 470256 00:21:37.207 10:10:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:21:37.207 10:10:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:21:37.207 10:10:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 470256' 00:21:37.207 killing process with pid 470256 00:21:37.208 10:10:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 470256 00:21:37.208 Received shutdown signal, test time was about 1.000000 seconds 00:21:37.208 00:21:37.208 Latency(us) 00:21:37.208 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:37.208 =================================================================================================================== 00:21:37.208 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:37.208 10:10:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 470256 00:21:37.801 10:10:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@269 -- # killprocess 470224 00:21:37.801 10:10:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 470224 ']' 00:21:37.801 10:10:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 470224 00:21:37.801 10:10:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:21:37.801 10:10:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:37.801 10:10:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 470224 00:21:37.801 10:10:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:37.801 10:10:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:37.801 10:10:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 470224' 00:21:37.801 killing process with pid 470224 00:21:37.801 10:10:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 470224 00:21:37.801 10:10:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 470224 00:21:38.060 10:10:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # nvmfappstart -c /dev/fd/62 00:21:38.060 10:10:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter 
start_nvmf_tgt 00:21:38.060 10:10:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # echo '{ 00:21:38.060 "subsystems": [ 00:21:38.060 { 00:21:38.060 "subsystem": "keyring", 00:21:38.060 "config": [ 00:21:38.060 { 00:21:38.060 "method": "keyring_file_add_key", 00:21:38.060 "params": { 00:21:38.060 "name": "key0", 00:21:38.060 "path": "/tmp/tmp.4XVnji5A9K" 00:21:38.060 } 00:21:38.060 } 00:21:38.060 ] 00:21:38.060 }, 00:21:38.060 { 00:21:38.060 "subsystem": "iobuf", 00:21:38.060 "config": [ 00:21:38.060 { 00:21:38.060 "method": "iobuf_set_options", 00:21:38.060 "params": { 00:21:38.060 "small_pool_count": 8192, 00:21:38.060 "large_pool_count": 1024, 00:21:38.060 "small_bufsize": 8192, 00:21:38.060 "large_bufsize": 135168 00:21:38.060 } 00:21:38.060 } 00:21:38.060 ] 00:21:38.060 }, 00:21:38.060 { 00:21:38.060 "subsystem": "sock", 00:21:38.060 "config": [ 00:21:38.060 { 00:21:38.060 "method": "sock_set_default_impl", 00:21:38.060 "params": { 00:21:38.060 "impl_name": "posix" 00:21:38.060 } 00:21:38.060 }, 00:21:38.060 { 00:21:38.060 "method": "sock_impl_set_options", 00:21:38.060 "params": { 00:21:38.060 "impl_name": "ssl", 00:21:38.060 "recv_buf_size": 4096, 00:21:38.060 "send_buf_size": 4096, 00:21:38.060 "enable_recv_pipe": true, 00:21:38.060 "enable_quickack": false, 00:21:38.060 "enable_placement_id": 0, 00:21:38.060 "enable_zerocopy_send_server": true, 00:21:38.060 "enable_zerocopy_send_client": false, 00:21:38.060 "zerocopy_threshold": 0, 00:21:38.060 "tls_version": 0, 00:21:38.060 "enable_ktls": false 00:21:38.060 } 00:21:38.060 }, 00:21:38.060 { 00:21:38.060 "method": "sock_impl_set_options", 00:21:38.060 "params": { 00:21:38.060 "impl_name": "posix", 00:21:38.060 "recv_buf_size": 2097152, 00:21:38.060 "send_buf_size": 2097152, 00:21:38.060 "enable_recv_pipe": true, 00:21:38.060 "enable_quickack": false, 00:21:38.060 "enable_placement_id": 0, 00:21:38.060 "enable_zerocopy_send_server": true, 00:21:38.060 "enable_zerocopy_send_client": false, 00:21:38.060 "zerocopy_threshold": 0, 00:21:38.060 "tls_version": 0, 00:21:38.060 "enable_ktls": false 00:21:38.060 } 00:21:38.060 } 00:21:38.060 ] 00:21:38.060 }, 00:21:38.060 { 00:21:38.060 "subsystem": "vmd", 00:21:38.060 "config": [] 00:21:38.060 }, 00:21:38.060 { 00:21:38.060 "subsystem": "accel", 00:21:38.060 "config": [ 00:21:38.060 { 00:21:38.060 "method": "accel_set_options", 00:21:38.060 "params": { 00:21:38.060 "small_cache_size": 128, 00:21:38.060 "large_cache_size": 16, 00:21:38.060 "task_count": 2048, 00:21:38.060 "sequence_count": 2048, 00:21:38.060 "buf_count": 2048 00:21:38.060 } 00:21:38.060 } 00:21:38.060 ] 00:21:38.060 }, 00:21:38.060 { 00:21:38.060 "subsystem": "bdev", 00:21:38.060 "config": [ 00:21:38.060 { 00:21:38.060 "method": "bdev_set_options", 00:21:38.060 "params": { 00:21:38.060 "bdev_io_pool_size": 65535, 00:21:38.060 "bdev_io_cache_size": 256, 00:21:38.060 "bdev_auto_examine": true, 00:21:38.060 "iobuf_small_cache_size": 128, 00:21:38.060 "iobuf_large_cache_size": 16 00:21:38.060 } 00:21:38.060 }, 00:21:38.060 { 00:21:38.060 "method": "bdev_raid_set_options", 00:21:38.060 "params": { 00:21:38.060 "process_window_size_kb": 1024, 00:21:38.060 "process_max_bandwidth_mb_sec": 0 00:21:38.060 } 00:21:38.060 }, 00:21:38.060 { 00:21:38.060 "method": "bdev_iscsi_set_options", 00:21:38.060 "params": { 00:21:38.060 "timeout_sec": 30 00:21:38.060 } 00:21:38.060 }, 00:21:38.060 { 00:21:38.060 "method": "bdev_nvme_set_options", 00:21:38.060 "params": { 00:21:38.060 "action_on_timeout": "none", 00:21:38.060 
"timeout_us": 0, 00:21:38.060 "timeout_admin_us": 0, 00:21:38.060 "keep_alive_timeout_ms": 10000, 00:21:38.060 "arbitration_burst": 0, 00:21:38.060 "low_priority_weight": 0, 00:21:38.060 "medium_priority_weight": 0, 00:21:38.060 "high_priority_weight": 0, 00:21:38.060 "nvme_adminq_poll_period_us": 10000, 00:21:38.060 "nvme_ioq_poll_period_us": 0, 00:21:38.060 "io_queue_requests": 0, 00:21:38.060 "delay_cmd_submit": true, 00:21:38.060 "transport_retry_count": 4, 00:21:38.060 "bdev_retry_count": 3, 00:21:38.060 "transport_ack_timeout": 0, 00:21:38.060 "ctrlr_loss_timeout_sec": 0, 00:21:38.060 "reconnect_delay_sec": 0, 00:21:38.060 "fast_io_fail_timeout_sec": 0, 00:21:38.060 "disable_auto_failback": false, 00:21:38.060 "generate_uuids": false, 00:21:38.060 "transport_tos": 0, 00:21:38.060 "nvme_error_stat": false, 00:21:38.060 "rdma_srq_size": 0, 00:21:38.060 "io_path_stat": false, 00:21:38.060 "allow_accel_sequence": false, 00:21:38.060 "rdma_max_cq_size": 0, 00:21:38.060 "rdma_cm_event_timeout_ms": 0, 00:21:38.060 "dhchap_digests": [ 00:21:38.060 "sha256", 00:21:38.060 "sha384", 00:21:38.061 "sha512" 00:21:38.061 ], 00:21:38.061 "dhchap_dhgroups": [ 00:21:38.061 "null", 00:21:38.061 "ffdhe2048", 00:21:38.061 "ffdhe3072", 00:21:38.061 "ffdhe4096", 00:21:38.061 "ffdhe6144", 00:21:38.061 "ffdhe8192" 00:21:38.061 ] 00:21:38.061 } 00:21:38.061 }, 00:21:38.061 { 00:21:38.061 "method": "bdev_nvme_set_hotplug", 00:21:38.061 "params": { 00:21:38.061 "period_us": 100000, 00:21:38.061 "enable": false 00:21:38.061 } 00:21:38.061 }, 00:21:38.061 { 00:21:38.061 "method": "bdev_malloc_create", 00:21:38.061 "params": { 00:21:38.061 "name": "malloc0", 00:21:38.061 "num_blocks": 8192, 00:21:38.061 "block_size": 4096, 00:21:38.061 "physical_block_size": 4096, 00:21:38.061 "uuid": "8179d275-6e5b-4a7f-a13f-f43583d636c8", 00:21:38.061 "optimal_io_boundary": 0, 00:21:38.061 "md_size": 0, 00:21:38.061 "dif_type": 0, 00:21:38.061 "dif_is_head_of_md": false, 00:21:38.061 "dif_pi_format": 0 00:21:38.061 } 00:21:38.061 }, 00:21:38.061 { 00:21:38.061 "method": "bdev_wait_for_examine" 00:21:38.061 } 00:21:38.061 ] 00:21:38.061 }, 00:21:38.061 { 00:21:38.061 "subsystem": "nbd", 00:21:38.061 "config": [] 00:21:38.061 }, 00:21:38.061 { 00:21:38.061 "subsystem": "scheduler", 00:21:38.061 "config": [ 00:21:38.061 { 00:21:38.061 "method": "framework_set_scheduler", 00:21:38.061 "params": { 00:21:38.061 "name": "static" 00:21:38.061 } 00:21:38.061 } 00:21:38.061 ] 00:21:38.061 }, 00:21:38.061 { 00:21:38.061 "subsystem": "nvmf", 00:21:38.061 "config": [ 00:21:38.061 { 00:21:38.061 "method": "nvmf_set_config", 00:21:38.061 "params": { 00:21:38.061 "discovery_filter": "match_any", 00:21:38.061 "admin_cmd_passthru": { 00:21:38.061 "identify_ctrlr": false 00:21:38.061 } 00:21:38.061 } 00:21:38.061 }, 00:21:38.061 { 00:21:38.061 "method": "nvmf_set_max_subsystems", 00:21:38.061 "params": { 00:21:38.061 "max_subsystems": 1024 00:21:38.061 } 00:21:38.061 }, 00:21:38.061 { 00:21:38.061 "method": "nvmf_set_crdt", 00:21:38.061 "params": { 00:21:38.061 "crdt1": 0, 00:21:38.061 "crdt2": 0, 00:21:38.061 "crdt3": 0 00:21:38.061 } 00:21:38.061 }, 00:21:38.061 { 00:21:38.061 "method": "nvmf_create_transport", 00:21:38.061 "params": { 00:21:38.061 "trtype": "TCP", 00:21:38.061 "max_queue_depth": 128, 00:21:38.061 "max_io_qpairs_per_ctrlr": 127, 00:21:38.061 "in_capsule_data_size": 4096, 00:21:38.061 "max_io_size": 131072, 00:21:38.061 "io_unit_size": 131072, 00:21:38.061 "max_aq_depth": 128, 00:21:38.061 "num_shared_buffers": 511, 00:21:38.061 
"buf_cache_size": 4294967295, 00:21:38.061 "dif_insert_or_strip": false, 00:21:38.061 "zcopy": false, 00:21:38.061 "c2h_success": false, 00:21:38.061 "sock_priority": 0, 00:21:38.061 "abort_timeout_sec": 1, 00:21:38.061 "ack_timeout": 0, 00:21:38.061 "data_wr_pool_size": 0 00:21:38.061 } 00:21:38.061 }, 00:21:38.061 { 00:21:38.061 "method": "nvmf_create_subsystem", 00:21:38.061 "params": { 00:21:38.061 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:38.061 "allow_any_host": false, 00:21:38.061 "serial_number": "00000000000000000000", 00:21:38.061 "model_number": "SPDK bdev Controller", 00:21:38.061 "max_namespaces": 32, 00:21:38.061 "min_cntlid": 1, 00:21:38.061 "max_cntlid": 65519, 00:21:38.061 "ana_reporting": false 00:21:38.061 } 00:21:38.061 }, 00:21:38.061 { 00:21:38.061 "method": "nvmf_subsystem_add_host", 00:21:38.061 "params": { 00:21:38.061 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:38.061 "host": "nqn.2016-06.io.spdk:host1", 00:21:38.061 "psk": "key0" 00:21:38.061 } 00:21:38.061 }, 00:21:38.061 { 00:21:38.061 "method": "nvmf_subsystem_add_ns", 00:21:38.061 "params": { 00:21:38.061 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:38.061 "namespace": { 00:21:38.061 "nsid": 1, 00:21:38.061 "bdev_name": "malloc0", 00:21:38.061 "nguid": "8179D2756E5B4A7FA13FF43583D636C8", 00:21:38.061 "uuid": "8179d275-6e5b-4a7f-a13f-f43583d636c8", 00:21:38.061 "no_auto_visible": false 00:21:38.061 } 00:21:38.061 } 00:21:38.061 }, 00:21:38.061 { 00:21:38.061 "method": "nvmf_subsystem_add_listener", 00:21:38.061 "params": { 00:21:38.061 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:38.061 "listen_address": { 00:21:38.061 "trtype": "TCP", 00:21:38.061 "adrfam": "IPv4", 00:21:38.061 "traddr": "10.0.0.2", 00:21:38.061 "trsvcid": "4420" 00:21:38.061 }, 00:21:38.061 "secure_channel": false, 00:21:38.061 "sock_impl": "ssl" 00:21:38.061 } 00:21:38.061 } 00:21:38.061 ] 00:21:38.061 } 00:21:38.061 ] 00:21:38.061 }' 00:21:38.061 10:10:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:38.061 10:10:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:38.061 10:10:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=470668 00:21:38.061 10:10:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:21:38.061 10:10:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 470668 00:21:38.061 10:10:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 470668 ']' 00:21:38.061 10:10:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:38.061 10:10:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:38.061 10:10:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:38.061 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:38.061 10:10:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:38.061 10:10:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:38.061 [2024-07-25 10:10:23.094676] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:21:38.061 [2024-07-25 10:10:23.094790] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:38.061 EAL: No free 2048 kB hugepages reported on node 1 00:21:38.061 [2024-07-25 10:10:23.179993] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:38.319 [2024-07-25 10:10:23.303814] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:38.319 [2024-07-25 10:10:23.303875] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:38.319 [2024-07-25 10:10:23.303891] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:38.319 [2024-07-25 10:10:23.303905] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:38.319 [2024-07-25 10:10:23.303916] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:38.319 [2024-07-25 10:10:23.304002] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:38.577 [2024-07-25 10:10:23.556030] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:38.577 [2024-07-25 10:10:23.601212] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:38.577 [2024-07-25 10:10:23.601510] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:39.143 10:10:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:39.143 10:10:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:21:39.143 10:10:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:39.143 10:10:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:39.143 10:10:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:39.143 10:10:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:39.143 10:10:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # bdevperf_pid=470836 00:21:39.143 10:10:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@275 -- # waitforlisten 470836 /var/tmp/bdevperf.sock 00:21:39.144 10:10:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 470836 ']' 00:21:39.144 10:10:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:39.144 10:10:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@272 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:21:39.144 10:10:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:39.144 10:10:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@272 -- # echo '{ 00:21:39.144 "subsystems": [ 00:21:39.144 { 00:21:39.144 "subsystem": "keyring", 00:21:39.144 "config": [ 00:21:39.144 { 00:21:39.144 "method": "keyring_file_add_key", 00:21:39.144 "params": { 00:21:39.144 "name": "key0", 00:21:39.144 "path": "/tmp/tmp.4XVnji5A9K" 00:21:39.144 } 00:21:39.144 } 00:21:39.144 ] 00:21:39.144 }, 00:21:39.144 { 
00:21:39.144 "subsystem": "iobuf", 00:21:39.144 "config": [ 00:21:39.144 { 00:21:39.144 "method": "iobuf_set_options", 00:21:39.144 "params": { 00:21:39.144 "small_pool_count": 8192, 00:21:39.144 "large_pool_count": 1024, 00:21:39.144 "small_bufsize": 8192, 00:21:39.144 "large_bufsize": 135168 00:21:39.144 } 00:21:39.144 } 00:21:39.144 ] 00:21:39.144 }, 00:21:39.144 { 00:21:39.144 "subsystem": "sock", 00:21:39.144 "config": [ 00:21:39.144 { 00:21:39.144 "method": "sock_set_default_impl", 00:21:39.144 "params": { 00:21:39.144 "impl_name": "posix" 00:21:39.144 } 00:21:39.144 }, 00:21:39.144 { 00:21:39.144 "method": "sock_impl_set_options", 00:21:39.144 "params": { 00:21:39.144 "impl_name": "ssl", 00:21:39.144 "recv_buf_size": 4096, 00:21:39.144 "send_buf_size": 4096, 00:21:39.144 "enable_recv_pipe": true, 00:21:39.144 "enable_quickack": false, 00:21:39.144 "enable_placement_id": 0, 00:21:39.144 "enable_zerocopy_send_server": true, 00:21:39.144 "enable_zerocopy_send_client": false, 00:21:39.144 "zerocopy_threshold": 0, 00:21:39.144 "tls_version": 0, 00:21:39.144 "enable_ktls": false 00:21:39.144 } 00:21:39.144 }, 00:21:39.144 { 00:21:39.144 "method": "sock_impl_set_options", 00:21:39.144 "params": { 00:21:39.144 "impl_name": "posix", 00:21:39.144 "recv_buf_size": 2097152, 00:21:39.144 "send_buf_size": 2097152, 00:21:39.144 "enable_recv_pipe": true, 00:21:39.144 "enable_quickack": false, 00:21:39.144 "enable_placement_id": 0, 00:21:39.144 "enable_zerocopy_send_server": true, 00:21:39.144 "enable_zerocopy_send_client": false, 00:21:39.144 "zerocopy_threshold": 0, 00:21:39.144 "tls_version": 0, 00:21:39.144 "enable_ktls": false 00:21:39.144 } 00:21:39.144 } 00:21:39.144 ] 00:21:39.144 }, 00:21:39.144 { 00:21:39.144 "subsystem": "vmd", 00:21:39.144 "config": [] 00:21:39.144 }, 00:21:39.144 { 00:21:39.144 "subsystem": "accel", 00:21:39.144 "config": [ 00:21:39.144 { 00:21:39.144 "method": "accel_set_options", 00:21:39.144 "params": { 00:21:39.144 "small_cache_size": 128, 00:21:39.144 "large_cache_size": 16, 00:21:39.144 "task_count": 2048, 00:21:39.144 "sequence_count": 2048, 00:21:39.144 "buf_count": 2048 00:21:39.144 } 00:21:39.144 } 00:21:39.144 ] 00:21:39.144 }, 00:21:39.144 { 00:21:39.144 "subsystem": "bdev", 00:21:39.144 "config": [ 00:21:39.144 { 00:21:39.144 "method": "bdev_set_options", 00:21:39.144 "params": { 00:21:39.144 "bdev_io_pool_size": 65535, 00:21:39.144 "bdev_io_cache_size": 256, 00:21:39.144 "bdev_auto_examine": true, 00:21:39.144 "iobuf_small_cache_size": 128, 00:21:39.144 "iobuf_large_cache_size": 16 00:21:39.144 } 00:21:39.144 }, 00:21:39.144 { 00:21:39.144 "method": "bdev_raid_set_options", 00:21:39.144 "params": { 00:21:39.144 "process_window_size_kb": 1024, 00:21:39.144 "process_max_bandwidth_mb_sec": 0 00:21:39.144 } 00:21:39.144 }, 00:21:39.144 { 00:21:39.144 "method": "bdev_iscsi_set_options", 00:21:39.144 "params": { 00:21:39.144 "timeout_sec": 30 00:21:39.144 } 00:21:39.144 }, 00:21:39.144 { 00:21:39.144 "method": "bdev_nvme_set_options", 00:21:39.144 "params": { 00:21:39.144 "action_on_timeout": "none", 00:21:39.144 "timeout_us": 0, 00:21:39.144 "timeout_admin_us": 0, 00:21:39.144 "keep_alive_timeout_ms": 10000, 00:21:39.144 "arbitration_burst": 0, 00:21:39.144 "low_priority_weight": 0, 00:21:39.144 "medium_priority_weight": 0, 00:21:39.144 "high_priority_weight": 0, 00:21:39.144 "nvme_adminq_poll_period_us": 10000, 00:21:39.144 "nvme_ioq_poll_period_us": 0, 00:21:39.144 "io_queue_requests": 512, 00:21:39.144 "delay_cmd_submit": true, 00:21:39.144 
"transport_retry_count": 4, 00:21:39.144 "bdev_retry_count": 3, 00:21:39.144 "transport_ack_timeout": 0, 00:21:39.144 "ctrlr_loss_timeout_sec": 0, 00:21:39.144 "reconnect_delay_sec": 0, 00:21:39.144 "fast_io_fail_timeout_sec": 0, 00:21:39.144 "disable_auto_failback": false, 00:21:39.144 "generate_uuids": false, 00:21:39.144 "transport_tos": 0, 00:21:39.144 "nvme_error_stat": false, 00:21:39.144 "rdma_srq_size": 0, 00:21:39.144 "io_path_stat": false, 00:21:39.144 "allow_accel_sequence": false, 00:21:39.144 "rdma_max_cq_size": 0, 00:21:39.144 "rdma_cm_event_timeout_ms": 0, 00:21:39.144 "dhchap_digests": [ 00:21:39.144 "sha256", 00:21:39.144 "sha384", 00:21:39.144 "sha512" 00:21:39.144 ], 00:21:39.144 "dhchap_dhgroups": [ 00:21:39.144 "null", 00:21:39.144 "ffdhe2048", 00:21:39.144 "ffdhe3072", 00:21:39.144 "ffdhe4096", 00:21:39.144 "ffdhe6144", 00:21:39.144 "ffdhe8192" 00:21:39.144 ] 00:21:39.144 } 00:21:39.144 }, 00:21:39.144 { 00:21:39.144 "method": "bdev_nvme_attach_controller", 00:21:39.144 "params": { 00:21:39.144 "name": "nvme0", 00:21:39.144 "trtype": "TCP", 00:21:39.144 "adrfam": "IPv4", 00:21:39.144 "traddr": "10.0.0.2", 00:21:39.144 "trsvcid": "4420", 00:21:39.144 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:39.144 "prchk_reftag": false, 00:21:39.144 "prchk_guard": false, 00:21:39.144 "ctrlr_loss_timeout_sec": 0, 00:21:39.144 "reconnect_delay_sec": 0, 00:21:39.144 "fast_io_fail_timeout_sec": 0, 00:21:39.144 "psk": "key0", 00:21:39.144 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:39.144 "hdgst": false, 00:21:39.145 "ddgst": false 00:21:39.145 } 00:21:39.145 }, 00:21:39.145 { 00:21:39.145 "method": "bdev_nvme_set_hotplug", 00:21:39.145 "params": { 00:21:39.145 "period_us": 100000, 00:21:39.145 "enable": false 00:21:39.145 } 00:21:39.145 }, 00:21:39.145 { 00:21:39.145 "method": "bdev_enable_histogram", 00:21:39.145 "params": { 00:21:39.145 "name": "nvme0n1", 00:21:39.145 "enable": true 00:21:39.145 } 00:21:39.145 }, 00:21:39.145 { 00:21:39.145 "method": "bdev_wait_for_examine" 00:21:39.145 } 00:21:39.145 ] 00:21:39.145 }, 00:21:39.145 { 00:21:39.145 "subsystem": "nbd", 00:21:39.145 "config": [] 00:21:39.145 } 00:21:39.145 ] 00:21:39.145 }' 00:21:39.145 10:10:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:39.145 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:39.145 10:10:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:39.145 10:10:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:39.145 [2024-07-25 10:10:24.281630] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:21:39.145 [2024-07-25 10:10:24.281726] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid470836 ] 00:21:39.403 EAL: No free 2048 kB hugepages reported on node 1 00:21:39.403 [2024-07-25 10:10:24.355581] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:39.403 [2024-07-25 10:10:24.478222] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:39.661 [2024-07-25 10:10:24.658421] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:40.595 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:40.595 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:21:40.595 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:40.595 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # jq -r '.[].name' 00:21:40.853 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:40.853 10:10:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@278 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:41.111 Running I/O for 1 seconds... 00:21:42.044 00:21:42.044 Latency(us) 00:21:42.044 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:42.044 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:21:42.044 Verification LBA range: start 0x0 length 0x2000 00:21:42.044 nvme0n1 : 1.04 2740.65 10.71 0.00 0.00 45814.68 8495.41 69128.34 00:21:42.044 =================================================================================================================== 00:21:42.044 Total : 2740.65 10.71 0.00 0.00 45814.68 8495.41 69128.34 00:21:42.044 0 00:21:42.044 10:10:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # trap - SIGINT SIGTERM EXIT 00:21:42.044 10:10:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@281 -- # cleanup 00:21:42.044 10:10:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:21:42.044 10:10:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@808 -- # type=--id 00:21:42.044 10:10:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@809 -- # id=0 00:21:42.044 10:10:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:21:42.044 10:10:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:21:42.044 10:10:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:21:42.044 10:10:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:21:42.044 10:10:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # for n in $shm_files 00:21:42.044 10:10:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:21:42.044 nvmf_trace.0 00:21:42.302 10:10:27 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@823 -- # return 0 00:21:42.302 10:10:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 470836 00:21:42.302 10:10:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 470836 ']' 00:21:42.302 10:10:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 470836 00:21:42.302 10:10:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:21:42.302 10:10:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:42.302 10:10:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 470836 00:21:42.302 10:10:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:21:42.302 10:10:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:21:42.302 10:10:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 470836' 00:21:42.302 killing process with pid 470836 00:21:42.302 10:10:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 470836 00:21:42.302 Received shutdown signal, test time was about 1.000000 seconds 00:21:42.302 00:21:42.302 Latency(us) 00:21:42.302 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:42.302 =================================================================================================================== 00:21:42.302 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:42.302 10:10:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 470836 00:21:42.559 10:10:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:21:42.559 10:10:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:42.559 10:10:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@117 -- # sync 00:21:42.559 10:10:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:42.559 10:10:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:21:42.559 10:10:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:42.559 10:10:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:42.559 rmmod nvme_tcp 00:21:42.559 rmmod nvme_fabrics 00:21:42.559 rmmod nvme_keyring 00:21:42.559 10:10:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:42.559 10:10:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:21:42.559 10:10:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:21:42.559 10:10:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 470668 ']' 00:21:42.559 10:10:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 470668 00:21:42.559 10:10:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 470668 ']' 00:21:42.559 10:10:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 470668 00:21:42.559 10:10:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:21:42.816 10:10:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:42.816 10:10:27 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 470668 00:21:42.816 10:10:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:42.816 10:10:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:42.817 10:10:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 470668' 00:21:42.817 killing process with pid 470668 00:21:42.817 10:10:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 470668 00:21:42.817 10:10:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 470668 00:21:43.074 10:10:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:43.074 10:10:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:43.074 10:10:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:43.074 10:10:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:43.075 10:10:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:43.075 10:10:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:43.075 10:10:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:43.075 10:10:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:44.973 10:10:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:44.973 10:10:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.euw6gdkCoq /tmp/tmp.VDWGoAKSag /tmp/tmp.4XVnji5A9K 00:21:44.973 00:21:44.973 real 1m30.150s 00:21:44.973 user 2m23.882s 00:21:44.973 sys 0m34.468s 00:21:44.973 10:10:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:44.973 10:10:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:44.973 ************************************ 00:21:44.973 END TEST nvmf_tls 00:21:44.973 ************************************ 00:21:45.232 10:10:30 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:21:45.232 10:10:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:21:45.232 10:10:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:45.232 10:10:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:45.232 ************************************ 00:21:45.232 START TEST nvmf_fips 00:21:45.232 ************************************ 00:21:45.232 10:10:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:21:45.232 * Looking for test storage... 
00:21:45.232 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:21:45.232 10:10:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:45.232 10:10:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:21:45.232 10:10:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:45.232 10:10:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:45.232 10:10:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:45.232 10:10:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:45.232 10:10:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:45.232 10:10:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:45.232 10:10:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:45.232 10:10:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:45.232 10:10:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:45.232 10:10:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:45.232 10:10:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:21:45.232 10:10:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:21:45.232 10:10:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:45.232 10:10:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:45.232 10:10:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:45.232 10:10:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:45.232 10:10:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:45.232 10:10:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:45.232 10:10:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:45.232 10:10:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:45.232 10:10:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:45.232 10:10:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:45.232 10:10:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:45.232 10:10:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:21:45.232 10:10:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:45.232 10:10:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:21:45.232 10:10:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:45.232 10:10:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:45.232 10:10:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:45.232 10:10:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:45.232 10:10:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:45.232 10:10:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:45.232 10:10:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:45.233 10:10:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:45.233 10:10:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:45.233 10:10:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:21:45.233 10:10:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:21:45.233 10:10:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@85 -- # openssl version 00:21:45.233 10:10:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@85 -- 
# awk '{print $2}' 00:21:45.233 10:10:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:21:45.233 10:10:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:21:45.233 10:10:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:21:45.233 10:10:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:21:45.233 10:10:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:21:45.233 10:10:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:21:45.233 10:10:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:21:45.233 10:10:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:21:45.233 10:10:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:21:45.233 10:10:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:21:45.233 10:10:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:21:45.233 10:10:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 v 00:21:45.233 10:10:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # case "$op" in 00:21:45.233 10:10:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:21:45.233 10:10:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:21:45.233 10:10:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:45.233 10:10:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:21:45.233 10:10:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:21:45.233 10:10:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:21:45.233 10:10:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:21:45.233 10:10:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:21:45.233 10:10:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:21:45.233 10:10:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:21:45.233 10:10:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:21:45.233 10:10:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:21:45.233 10:10:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:21:45.233 10:10:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:21:45.233 10:10:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:21:45.233 10:10:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:21:45.233 10:10:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:45.233 10:10:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:21:45.233 10:10:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:21:45.233 10:10:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:21:45.233 10:10:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:21:45.233 10:10:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:21:45.233 10:10:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:21:45.233 10:10:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:21:45.233 10:10:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:21:45.233 10:10:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:21:45.233 10:10:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:21:45.233 10:10:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:21:45.233 10:10:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:21:45.233 10:10:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:21:45.233 10:10:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:45.233 10:10:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:21:45.233 10:10:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:21:45.233 10:10:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:21:45.233 10:10:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:21:45.233 10:10:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:21:45.233 10:10:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:21:45.233 10:10:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:21:45.233 10:10:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:21:45.233 10:10:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:21:45.233 10:10:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:21:45.233 10:10:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:21:45.233 10:10:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:21:45.233 10:10:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:21:45.233 10:10:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@95 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:21:45.233 10:10:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:21:45.233 10:10:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:21:45.233 10:10:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:21:45.233 10:10:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:21:45.233 10:10:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:21:45.233 10:10:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:21:45.233 10:10:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@37 -- # cat 00:21:45.233 10:10:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@57 -- # [[ ! -t 0 ]] 00:21:45.233 10:10:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:21:45.233 10:10:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:21:45.233 10:10:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:21:45.233 10:10:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:21:45.233 10:10:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:21:45.233 10:10:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:21:45.233 10:10:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:21:45.233 10:10:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:21:45.233 10:10:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:21:45.233 10:10:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:21:45.233 10:10:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@127 -- # : 00:21:45.233 10:10:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@650 -- # local es=0 00:21:45.233 10:10:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # valid_exec_arg openssl md5 /dev/fd/62 00:21:45.233 10:10:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@638 -- # local arg=openssl 00:21:45.233 10:10:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:45.233 10:10:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # type -t openssl 00:21:45.233 10:10:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:45.233 10:10:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -P openssl 00:21:45.233 10:10:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:45.233 10:10:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # arg=/usr/bin/openssl 00:21:45.233 10:10:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@644 -- # [[ -x /usr/bin/openssl ]] 00:21:45.233 10:10:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # openssl md5 /dev/fd/62 00:21:45.491 Error setting digest 00:21:45.492 00224C26F87F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:21:45.492 00224C26F87F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:21:45.492 10:10:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # es=1 00:21:45.492 10:10:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:45.492 10:10:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:45.492 10:10:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:45.492 10:10:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:21:45.492 10:10:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:45.492 10:10:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:45.492 10:10:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:45.492 10:10:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:45.492 10:10:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:45.492 10:10:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:45.492 10:10:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:45.492 10:10:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:45.492 10:10:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:45.492 10:10:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:45.492 10:10:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@285 -- # xtrace_disable 00:21:45.492 10:10:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:48.021 10:10:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:48.021 10:10:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # pci_devs=() 00:21:48.021 10:10:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:48.021 10:10:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:48.021 10:10:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:48.021 10:10:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:48.021 10:10:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:48.021 10:10:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@295 -- # net_devs=() 00:21:48.021 10:10:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:48.021 10:10:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@296 -- # e810=() 00:21:48.021 10:10:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@296 -- # 
local -ga e810 00:21:48.021 10:10:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # x722=() 00:21:48.021 10:10:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # local -ga x722 00:21:48.022 10:10:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # mlx=() 00:21:48.022 10:10:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # local -ga mlx 00:21:48.022 10:10:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:48.022 10:10:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:48.022 10:10:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:48.022 10:10:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:48.022 10:10:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:48.022 10:10:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:48.022 10:10:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:48.022 10:10:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:48.022 10:10:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:48.022 10:10:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:48.022 10:10:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:48.022 10:10:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:48.022 10:10:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:48.022 10:10:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:48.022 10:10:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:48.022 10:10:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:48.022 10:10:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:48.022 10:10:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:48.022 10:10:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:21:48.022 Found 0000:84:00.0 (0x8086 - 0x159b) 00:21:48.022 10:10:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:48.022 10:10:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:48.022 10:10:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:48.022 10:10:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:48.022 10:10:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:48.022 10:10:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:48.022 10:10:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 
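Stepping back to the gate at the top of this FIPS suite: the `ge 3.0.9 3.0.0` walk traced earlier splits each version string on `.`, `-` and `:` and compares component by component until one side wins. A condensed sketch of that cmp_versions logic (illustrative shell, not the verbatim scripts/common.sh source):

    # Condensed sketch of the cmp_versions '>=' walk traced above.
    ge() {
        local -a ver1 ver2
        local v
        IFS='.-:' read -ra ver1 <<< "$1"
        IFS='.-:' read -ra ver2 <<< "$2"
        for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 0   # first higher component wins
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 1
        done
        return 0   # all components equal, so greater-or-equal holds
    }
    ge 3.0.9 3.0.0 && echo "OpenSSL is new enough for the FIPS provider checks"
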
00:21:48.022 Found 0000:84:00.1 (0x8086 - 0x159b) 00:21:48.022 10:10:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:48.022 10:10:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:48.022 10:10:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:48.022 10:10:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:48.022 10:10:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:48.022 10:10:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:48.022 10:10:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:48.022 10:10:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:48.022 10:10:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:48.022 10:10:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:48.022 10:10:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:48.022 10:10:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:48.022 10:10:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:48.022 10:10:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:48.022 10:10:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:48.022 10:10:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:21:48.022 Found net devices under 0000:84:00.0: cvl_0_0 00:21:48.022 10:10:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:48.022 10:10:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:48.022 10:10:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:48.022 10:10:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:48.022 10:10:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:48.022 10:10:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:48.022 10:10:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:48.022 10:10:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:48.022 10:10:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:21:48.022 Found net devices under 0000:84:00.1: cvl_0_1 00:21:48.022 10:10:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:48.022 10:10:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:48.022 10:10:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # is_hw=yes 00:21:48.022 10:10:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:48.022 10:10:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:48.022 
10:10:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:48.022 10:10:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:48.022 10:10:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:48.022 10:10:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:48.022 10:10:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:48.022 10:10:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:48.022 10:10:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:48.022 10:10:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:48.022 10:10:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:48.022 10:10:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:48.022 10:10:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:48.022 10:10:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:48.022 10:10:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:48.022 10:10:32 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:48.022 10:10:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:48.022 10:10:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:48.022 10:10:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:48.022 10:10:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:48.022 10:10:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:48.022 10:10:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:48.022 10:10:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:48.022 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:48.022 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.217 ms 00:21:48.022 00:21:48.022 --- 10.0.0.2 ping statistics --- 00:21:48.022 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:48.022 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms 00:21:48.022 10:10:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:48.022 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:48.022 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.061 ms 00:21:48.022 00:21:48.022 --- 10.0.0.1 ping statistics --- 00:21:48.022 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:48.022 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:21:48.022 10:10:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:48.022 10:10:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # return 0 00:21:48.022 10:10:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:48.022 10:10:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:48.022 10:10:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:48.022 10:10:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:48.022 10:10:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:48.022 10:10:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:48.022 10:10:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:48.022 10:10:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:21:48.022 10:10:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:48.022 10:10:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:48.022 10:10:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:48.022 10:10:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=473336 00:21:48.022 10:10:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:48.022 10:10:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 473336 00:21:48.022 10:10:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@831 -- # '[' -z 473336 ']' 00:21:48.022 10:10:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:48.022 10:10:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:48.023 10:10:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:48.023 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:48.023 10:10:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:48.023 10:10:33 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:48.280 [2024-07-25 10:10:33.196788] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
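The single-host topology configured just above keeps the initiator and target on a real link: one physical port stays in the default namespace as the initiator, its peer moves into a namespace for the target, and TCP port 4420 is opened between them. Condensed from the ip/iptables trace of this run (interface names and addresses are this rig's):

    # One-host NVMe/TCP topology, as set up in the trace above.
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                        # target-side port
    ip addr add 10.0.0.1/24 dev cvl_0_1                              # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT     # open the NVMe/TCP port
    ping -c 1 10.0.0.2                                               # reachability check, as traced
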
00:21:48.280 [2024-07-25 10:10:33.196888] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:48.280 EAL: No free 2048 kB hugepages reported on node 1 00:21:48.280 [2024-07-25 10:10:33.272012] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:48.280 [2024-07-25 10:10:33.396585] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:48.280 [2024-07-25 10:10:33.396645] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:48.280 [2024-07-25 10:10:33.396671] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:48.280 [2024-07-25 10:10:33.396703] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:48.280 [2024-07-25 10:10:33.396723] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:48.280 [2024-07-25 10:10:33.396775] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:49.211 10:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:49.211 10:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # return 0 00:21:49.211 10:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:49.211 10:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:49.211 10:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:49.211 10:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:49.211 10:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:21:49.211 10:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:21:49.211 10:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:21:49.211 10:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:21:49.212 10:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:21:49.212 10:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:21:49.212 10:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:21:49.212 10:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:49.470 [2024-07-25 10:10:34.593716] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:49.470 [2024-07-25 10:10:34.609688] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:49.470 [2024-07-25 10:10:34.609944] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:49.729 
[2024-07-25 10:10:34.641655] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:21:49.729 malloc0 00:21:49.729 10:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:49.729 10:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=473502 00:21:49.729 10:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:49.729 10:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 473502 /var/tmp/bdevperf.sock 00:21:49.729 10:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@831 -- # '[' -z 473502 ']' 00:21:49.729 10:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:49.729 10:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:49.729 10:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:49.729 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:49.729 10:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:49.729 10:10:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:21:49.729 [2024-07-25 10:10:34.754898] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:21:49.729 [2024-07-25 10:10:34.754994] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid473502 ] 00:21:49.729 EAL: No free 2048 kB hugepages reported on node 1 00:21:49.729 [2024-07-25 10:10:34.828648] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:49.987 [2024-07-25 10:10:34.954065] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:50.918 10:10:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:50.918 10:10:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # return 0 00:21:50.918 10:10:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:21:51.176 [2024-07-25 10:10:36.102292] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:51.176 [2024-07-25 10:10:36.102424] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:51.176 TLSTESTn1 00:21:51.176 10:10:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@154 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:51.176 Running I/O for 10 seconds... 
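The TLS path exercised above comes down to three steps: write the interchange-format PSK, lock its permissions, and hand the same file to the initiator when attaching. A condensed sketch of those RPCs (paths shortened here; the flags are the ones traced above, against the bdevperf RPC socket):

    # Condensed from the trace: create the TLS PSK and attach over TCP with it.
    echo -n 'NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:' > key.txt
    chmod 0600 key.txt                      # restrictive permissions, as in the trace
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
        --psk key.txt

Note the deprecation warnings in this very log (nvmf_tcp_psk_path and spdk_nvme_ctrlr_opts.psk both scheduled for removal in v24.09), so this exact flag set is specific to the SPDK revision under test.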
00:22:03.440 00:22:03.440 Latency(us) 00:22:03.440 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:03.440 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:03.440 Verification LBA range: start 0x0 length 0x2000 00:22:03.440 TLSTESTn1 : 10.04 2681.02 10.47 0.00 0.00 47625.06 10000.31 90876.59 00:22:03.440 =================================================================================================================== 00:22:03.440 Total : 2681.02 10.47 0.00 0.00 47625.06 10000.31 90876.59 00:22:03.440 0 00:22:03.440 10:10:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:22:03.440 10:10:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:22:03.440 10:10:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@808 -- # type=--id 00:22:03.440 10:10:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@809 -- # id=0 00:22:03.440 10:10:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:22:03.440 10:10:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:22:03.440 10:10:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:22:03.440 10:10:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:22:03.440 10:10:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # for n in $shm_files 00:22:03.440 10:10:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:22:03.440 nvmf_trace.0 00:22:03.440 10:10:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@823 -- # return 0 00:22:03.440 10:10:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 473502 00:22:03.441 10:10:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@950 -- # '[' -z 473502 ']' 00:22:03.441 10:10:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # kill -0 473502 00:22:03.441 10:10:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # uname 00:22:03.441 10:10:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:03.441 10:10:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 473502 00:22:03.441 10:10:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:22:03.441 10:10:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:22:03.441 10:10:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@968 -- # echo 'killing process with pid 473502' 00:22:03.441 killing process with pid 473502 00:22:03.441 10:10:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@969 -- # kill 473502 00:22:03.441 Received shutdown signal, test time was about 10.000000 seconds 00:22:03.441 00:22:03.441 Latency(us) 00:22:03.441 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:03.441 =================================================================================================================== 00:22:03.441 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:03.441 [2024-07-25 
10:10:46.579964] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:03.441 10:10:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@974 -- # wait 473502 00:22:03.441 10:10:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:22:03.441 10:10:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:03.441 10:10:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:22:03.441 10:10:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:03.441 10:10:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:22:03.441 10:10:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:03.441 10:10:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:03.441 rmmod nvme_tcp 00:22:03.441 rmmod nvme_fabrics 00:22:03.441 rmmod nvme_keyring 00:22:03.441 10:10:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:03.441 10:10:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:22:03.441 10:10:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:22:03.441 10:10:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 473336 ']' 00:22:03.441 10:10:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 473336 00:22:03.441 10:10:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@950 -- # '[' -z 473336 ']' 00:22:03.441 10:10:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # kill -0 473336 00:22:03.441 10:10:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # uname 00:22:03.441 10:10:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:03.441 10:10:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 473336 00:22:03.441 10:10:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:22:03.441 10:10:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:22:03.441 10:10:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@968 -- # echo 'killing process with pid 473336' 00:22:03.441 killing process with pid 473336 00:22:03.441 10:10:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@969 -- # kill 473336 00:22:03.441 [2024-07-25 10:10:46.954337] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:22:03.441 10:10:46 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@974 -- # wait 473336 00:22:03.441 10:10:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:03.441 10:10:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:03.441 10:10:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:03.441 10:10:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:03.441 10:10:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:03.441 10:10:47 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:03.441 10:10:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:03.441 10:10:47 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:04.377 10:10:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:04.377 10:10:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:22:04.377 00:22:04.377 real 0m19.139s 00:22:04.377 user 0m22.891s 00:22:04.377 sys 0m8.590s 00:22:04.377 10:10:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:04.377 10:10:49 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:22:04.377 ************************************ 00:22:04.377 END TEST nvmf_fips 00:22:04.377 ************************************ 00:22:04.377 10:10:49 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@45 -- # '[' 0 -eq 1 ']' 00:22:04.377 10:10:49 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@51 -- # [[ phy == phy ]] 00:22:04.377 10:10:49 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@52 -- # '[' tcp = tcp ']' 00:22:04.377 10:10:49 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # gather_supported_nvmf_pci_devs 00:22:04.377 10:10:49 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@285 -- # xtrace_disable 00:22:04.377 10:10:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:06.908 10:10:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:06.908 10:10:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@291 -- # pci_devs=() 00:22:06.908 10:10:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:06.908 10:10:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:06.908 10:10:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:06.908 10:10:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:06.908 10:10:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:06.908 10:10:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@295 -- # net_devs=() 00:22:06.908 10:10:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:06.908 10:10:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@296 -- # e810=() 00:22:06.908 10:10:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@296 -- # local -ga e810 00:22:06.908 10:10:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@297 -- # x722=() 00:22:06.908 10:10:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@297 -- # local -ga x722 00:22:06.908 10:10:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@298 -- # mlx=() 00:22:06.908 10:10:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@298 -- # local -ga mlx 00:22:06.908 10:10:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:06.908 10:10:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:06.908 10:10:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:06.908 10:10:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:06.908 
10:10:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:06.908 10:10:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:06.908 10:10:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:06.908 10:10:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:06.908 10:10:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:06.908 10:10:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:06.908 10:10:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:06.908 10:10:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:06.908 10:10:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:06.908 10:10:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:06.908 10:10:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:06.908 10:10:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:06.908 10:10:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:06.908 10:10:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:06.908 10:10:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:22:06.908 Found 0000:84:00.0 (0x8086 - 0x159b) 00:22:06.908 10:10:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:06.908 10:10:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:06.908 10:10:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:06.908 10:10:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:06.908 10:10:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:06.908 10:10:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:06.908 10:10:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:22:06.908 Found 0000:84:00.1 (0x8086 - 0x159b) 00:22:06.908 10:10:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:06.908 10:10:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:06.908 10:10:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:06.908 10:10:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:06.908 10:10:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:06.908 10:10:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:06.908 10:10:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:06.908 10:10:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:06.908 10:10:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:06.908 10:10:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:06.908 10:10:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 
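This device scan is driven entirely by PCI IDs and sysfs; the escaped patterns like `[[ 0x159b == \0\x\1\0\1\7 ]]` are literal string comparisons with every character backslashed so nothing globs. An illustrative sketch of the same walk, restricted to the Intel IDs accumulated above (not the harness's exact code):

    # Walk PCI functions, bucket by device ID, then list each function's netdevs.
    for pci in /sys/bus/pci/devices/*; do
        [[ $(< "$pci/vendor") == 0x8086 ]] || continue
        case $(< "$pci/device") in
            0x1592|0x159b) kind=e810 ;;   # ice-driven E810 ports, as found in this run
            0x37d2)        kind=x722 ;;
            *)             continue ;;
        esac
        for net in "$pci"/net/*; do       # netdev names live under .../net/
            [[ -e $net ]] && echo "$kind ${pci##*/} -> ${net##*/}"
        done
    done

On this rig that yields the two cvl_0_* ports reported in the Found lines.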
00:22:06.908 10:10:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:06.908 10:10:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:06.908 10:10:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:06.908 10:10:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:06.908 10:10:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:22:06.908 Found net devices under 0000:84:00.0: cvl_0_0 00:22:06.908 10:10:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:06.908 10:10:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:06.908 10:10:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:06.908 10:10:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:06.908 10:10:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:06.908 10:10:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:06.909 10:10:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:06.909 10:10:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:06.909 10:10:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:22:06.909 Found net devices under 0000:84:00.1: cvl_0_1 00:22:06.909 10:10:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:06.909 10:10:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:06.909 10:10:51 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:06.909 10:10:51 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # (( 2 > 0 )) 00:22:06.909 10:10:51 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:22:06.909 10:10:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:22:06.909 10:10:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:06.909 10:10:51 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:06.909 ************************************ 00:22:06.909 START TEST nvmf_perf_adq 00:22:06.909 ************************************ 00:22:06.909 10:10:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:22:06.909 * Looking for test storage... 
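Each suite here is launched through run_test, which prints the starred START/END TEST banners seen above and propagates the child script's exit status. A rough sketch of that wrapper's shape (a hypothetical simplification; the real helper in autotest_common.sh also checks its argument count, the `'[' 3 -le 1 ']'` test above, and records timing):

    # Rough sketch of the run_test pattern that produces the banners above.
    run_test() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        "$@"
        local rc=$?
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
        return "$rc"
    }
    run_test nvmf_perf_adq test/nvmf/target/perf_adq.sh --transport=tcp
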
00:22:06.909 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:06.909 10:10:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:06.909 10:10:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:22:06.909 10:10:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:06.909 10:10:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:06.909 10:10:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:06.909 10:10:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:06.909 10:10:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:06.909 10:10:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:06.909 10:10:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:06.909 10:10:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:06.909 10:10:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:06.909 10:10:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:06.909 10:10:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:22:06.909 10:10:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:22:06.909 10:10:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:06.909 10:10:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:06.909 10:10:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:06.909 10:10:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:06.909 10:10:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:06.909 10:10:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:06.909 10:10:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:06.909 10:10:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:06.909 10:10:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:06.909 10:10:51 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:06.909 10:10:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:06.909 10:10:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:22:06.909 10:10:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:06.909 10:10:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@47 -- # : 0 00:22:06.909 10:10:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:06.909 10:10:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:06.909 10:10:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:06.909 10:10:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:06.909 10:10:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:06.909 10:10:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:06.909 10:10:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:06.909 10:10:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:06.909 10:10:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:22:06.909 10:10:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:22:06.909 10:10:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:09.436 10:10:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@289 -- # 
local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:09.436 10:10:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:22:09.437 10:10:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:09.437 10:10:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:09.437 10:10:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:09.437 10:10:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:09.437 10:10:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:09.437 10:10:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:22:09.437 10:10:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:09.437 10:10:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:22:09.437 10:10:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:22:09.437 10:10:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:22:09.437 10:10:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:22:09.437 10:10:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:22:09.437 10:10:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:22:09.437 10:10:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:09.437 10:10:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:09.437 10:10:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:09.437 10:10:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:09.437 10:10:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:09.437 10:10:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:09.437 10:10:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:09.437 10:10:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:09.437 10:10:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:09.437 10:10:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:09.437 10:10:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:09.437 10:10:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:09.437 10:10:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:09.437 10:10:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:09.437 10:10:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:09.437 10:10:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:09.437 10:10:54 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:09.437 10:10:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:09.437 10:10:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:22:09.437 Found 0000:84:00.0 (0x8086 - 0x159b) 00:22:09.437 10:10:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:09.437 10:10:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:09.437 10:10:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:09.437 10:10:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:09.437 10:10:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:09.437 10:10:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:09.437 10:10:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:22:09.437 Found 0000:84:00.1 (0x8086 - 0x159b) 00:22:09.437 10:10:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:09.437 10:10:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:09.437 10:10:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:09.437 10:10:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:09.437 10:10:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:09.437 10:10:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:09.437 10:10:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:09.437 10:10:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:09.437 10:10:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:09.437 10:10:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:09.437 10:10:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:09.437 10:10:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:09.437 10:10:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:09.437 10:10:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:09.437 10:10:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:09.437 10:10:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:22:09.437 Found net devices under 0000:84:00.0: cvl_0_0 00:22:09.437 10:10:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:09.437 10:10:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:09.437 10:10:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
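Noisy helpers such as gather_supported_nvmf_pci_devs bracket themselves with xtrace_disable / `set +x`, which is why stretches of these scans appear in the log without per-command traces. A minimal sketch of that save-and-restore idiom (function names here are ours; the harness's versions also handle nesting):

    # Minimal sketch of the xtrace toggling used around chatty loops.
    xtrace_disable() {
        case $- in *x*) XTRACE_WAS_ON=1 ;; *) XTRACE_WAS_ON= ;; esac
        set +x
    }
    xtrace_restore() {
        [[ -n $XTRACE_WAS_ON ]] && set -x
    }
    xtrace_disable
    # ... device-scan loops run untraced here ...
    xtrace_restore
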
00:22:09.437 10:10:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:09.437 10:10:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:09.437 10:10:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:09.437 10:10:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:09.437 10:10:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:09.437 10:10:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:22:09.437 Found net devices under 0000:84:00.1: cvl_0_1 00:22:09.437 10:10:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:09.437 10:10:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:09.437 10:10:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:09.437 10:10:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:22:09.437 10:10:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:22:09.437 10:10:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@60 -- # adq_reload_driver 00:22:09.437 10:10:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:22:10.004 10:10:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:22:11.905 10:10:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:22:17.177 10:11:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # nvmftestinit 00:22:17.177 10:11:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:17.177 10:11:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:17.177 10:11:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:17.177 10:11:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:17.177 10:11:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:17.177 10:11:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:17.177 10:11:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:17.177 10:11:01 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:17.177 10:11:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:17.177 10:11:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:17.177 10:11:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:22:17.177 10:11:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:17.177 10:11:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:17.177 10:11:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 
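Between the two discovery passes, perf_adq.sh@60 called adq_reload_driver (lines 53-55 in the trace: rmmod ice, modprobe ice, sleep 5) so the E810 ports come back with clean ADQ state. Reconstructed from those entries, as a sketch rather than the script's exact body:

adq_reload_driver() {
  rmmod ice        # perf_adq.sh@53: drop any ADQ state held by the driver
  modprobe ice     # perf_adq.sh@54: reload the E810 ice driver
  sleep 5          # perf_adq.sh@55: give the ports time to re-register
}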
00:22:17.177 10:11:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:17.177 10:11:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:17.177 10:11:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:17.177 10:11:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:17.177 10:11:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:17.177 10:11:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:22:17.177 10:11:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:17.177 10:11:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:22:17.177 10:11:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:22:17.177 10:11:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:22:17.177 10:11:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:22:17.177 10:11:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:22:17.177 10:11:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:22:17.177 10:11:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:17.177 10:11:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:17.177 10:11:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:17.177 10:11:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:17.177 10:11:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:17.177 10:11:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:17.177 10:11:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:17.177 10:11:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:17.177 10:11:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:17.177 10:11:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:17.177 10:11:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:17.177 10:11:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:17.177 10:11:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:17.177 10:11:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:17.177 10:11:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:17.177 10:11:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:17.177 10:11:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:17.177 10:11:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:17.177 10:11:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:22:17.177 Found 0000:84:00.0 (0x8086 - 0x159b) 00:22:17.177 10:11:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:17.177 10:11:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:17.177 10:11:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:17.177 10:11:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:17.177 10:11:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:17.177 10:11:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:17.177 10:11:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:22:17.177 Found 0000:84:00.1 (0x8086 - 0x159b) 00:22:17.177 10:11:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:17.177 10:11:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:17.177 10:11:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:17.177 10:11:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:17.177 10:11:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:17.177 10:11:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:17.177 10:11:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:17.177 10:11:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:17.177 10:11:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:17.177 10:11:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:17.177 10:11:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:17.177 10:11:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:17.177 10:11:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:17.177 10:11:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:17.177 10:11:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:17.177 10:11:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:22:17.177 Found net devices under 0000:84:00.0: cvl_0_0 00:22:17.177 10:11:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:17.177 10:11:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:17.177 10:11:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:17.177 10:11:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:17.177 10:11:02 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:17.177 10:11:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:17.177 10:11:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:17.177 10:11:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:17.177 10:11:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:22:17.177 Found net devices under 0000:84:00.1: cvl_0_1 00:22:17.177 10:11:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:17.177 10:11:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:17.177 10:11:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:22:17.177 10:11:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:17.177 10:11:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:17.177 10:11:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:17.177 10:11:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:17.177 10:11:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:17.177 10:11:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:17.177 10:11:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:17.177 10:11:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:17.177 10:11:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:17.177 10:11:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:17.177 10:11:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:17.177 10:11:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:17.177 10:11:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:17.177 10:11:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:17.177 10:11:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:17.177 10:11:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:17.178 10:11:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:17.178 10:11:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:17.178 10:11:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:17.178 10:11:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:17.178 10:11:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 
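The nvmf_tcp_init entries above build a two-port, back-to-back topology: target port cvl_0_0 moves into the cvl_0_0_ns_spdk namespace with 10.0.0.2/24, initiator port cvl_0_1 stays in the root namespace with 10.0.0.1/24, and the iptables rule in the entries that follow opens TCP 4420 before both directions are ping-verified. Pulled out of the trace into one runnable sequence:

NS=cvl_0_0_ns_spdk
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"        # target side lives in the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1    # initiator side, root namespace
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP listener port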
00:22:17.178 10:11:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:17.178 10:11:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:17.178 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:17.178 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.700 ms 00:22:17.178 00:22:17.178 --- 10.0.0.2 ping statistics --- 00:22:17.178 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:17.178 rtt min/avg/max/mdev = 0.700/0.700/0.700/0.000 ms 00:22:17.178 10:11:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:17.178 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:17.178 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.167 ms 00:22:17.178 00:22:17.178 --- 10.0.0.1 ping statistics --- 00:22:17.178 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:17.178 rtt min/avg/max/mdev = 0.167/0.167/0.167/0.000 ms 00:22:17.178 10:11:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:17.178 10:11:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:22:17.178 10:11:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:17.178 10:11:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:17.178 10:11:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:17.178 10:11:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:17.178 10:11:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:17.178 10:11:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:17.178 10:11:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:17.178 10:11:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@69 -- # nvmfappstart -m 0xF --wait-for-rpc 00:22:17.178 10:11:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:17.178 10:11:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:17.178 10:11:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:17.178 10:11:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=479525 00:22:17.178 10:11:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 479525 00:22:17.178 10:11:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@831 -- # '[' -z 479525 ']' 00:22:17.178 10:11:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:17.178 10:11:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:22:17.178 10:11:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:17.178 10:11:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:22:17.178 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:17.178 10:11:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:17.178 10:11:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:17.178 [2024-07-25 10:11:02.246786] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:22:17.178 [2024-07-25 10:11:02.246887] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:17.178 EAL: No free 2048 kB hugepages reported on node 1 00:22:17.178 [2024-07-25 10:11:02.327224] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:17.436 [2024-07-25 10:11:02.451289] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:17.436 [2024-07-25 10:11:02.451356] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:17.436 [2024-07-25 10:11:02.451373] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:17.436 [2024-07-25 10:11:02.451387] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:17.436 [2024-07-25 10:11:02.451399] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:17.436 [2024-07-25 10:11:02.451482] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:17.436 [2024-07-25 10:11:02.451535] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:17.436 [2024-07-25 10:11:02.451587] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:22:17.436 [2024-07-25 10:11:02.451590] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:17.436 10:11:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:17.436 10:11:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # return 0 00:22:17.436 10:11:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:17.436 10:11:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:17.436 10:11:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:17.436 10:11:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:17.436 10:11:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@70 -- # adq_configure_nvmf_target 0 00:22:17.436 10:11:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:22:17.436 10:11:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:22:17.436 10:11:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:17.436 10:11:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:17.436 10:11:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:17.436 10:11:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 
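The rpc_cmd calls that follow are adq_configure_nvmf_target 0 (perf_adq.sh@42-49): query the default socket implementation, set its options, finish framework init, then create the TCP transport, a 64 MiB malloc bdev, and a subsystem listening on 10.0.0.2:4420. Assuming rpc_cmd is the usual autotest wrapper around scripts/rpc.py talking to /var/tmp/spdk.sock, the sequence condenses to the sketch below; the 0 passed to --enable-placement-id and --sock-priority is the function's argument for this baseline pass, and becomes 1 in the ADQ pass later:

rpc.py sock_get_default_impl          # returns posix on this run
rpc.py sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix
rpc.py framework_start_init
rpc.py nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0
rpc.py bdev_malloc_create 64 512 -b Malloc1   # 64 MiB backing bdev, 512 B blocks
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

After the perf run, perf_adq.sh@78 asserts via jq that all four poll groups carry exactly one io_qpair each (count=4 in the trace below), which is the expected spread without ADQ placement.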
00:22:17.436 10:11:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:22:17.436 10:11:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:17.436 10:11:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:17.436 10:11:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:17.437 10:11:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:22:17.437 10:11:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:17.437 10:11:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:17.695 10:11:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:17.695 10:11:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:22:17.695 10:11:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:17.695 10:11:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:17.695 [2024-07-25 10:11:02.678695] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:17.695 10:11:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:17.695 10:11:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:22:17.695 10:11:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:17.695 10:11:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:17.695 Malloc1 00:22:17.695 10:11:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:17.695 10:11:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:17.695 10:11:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:17.695 10:11:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:17.695 10:11:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:17.695 10:11:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:22:17.695 10:11:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:17.695 10:11:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:17.695 10:11:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:17.695 10:11:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:17.695 10:11:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:17.695 10:11:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:17.695 [2024-07-25 10:11:02.731195] 
tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:17.695 10:11:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:17.695 10:11:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@74 -- # perfpid=479556 00:22:17.695 10:11:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:22:17.695 10:11:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@75 -- # sleep 2 00:22:17.695 EAL: No free 2048 kB hugepages reported on node 1 00:22:19.592 10:11:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # rpc_cmd nvmf_get_stats 00:22:19.592 10:11:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:19.592 10:11:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:19.592 10:11:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:19.592 10:11:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmf_stats='{ 00:22:19.592 "tick_rate": 2700000000, 00:22:19.592 "poll_groups": [ 00:22:19.592 { 00:22:19.592 "name": "nvmf_tgt_poll_group_000", 00:22:19.592 "admin_qpairs": 1, 00:22:19.592 "io_qpairs": 1, 00:22:19.592 "current_admin_qpairs": 1, 00:22:19.592 "current_io_qpairs": 1, 00:22:19.592 "pending_bdev_io": 0, 00:22:19.592 "completed_nvme_io": 20869, 00:22:19.592 "transports": [ 00:22:19.592 { 00:22:19.592 "trtype": "TCP" 00:22:19.592 } 00:22:19.592 ] 00:22:19.592 }, 00:22:19.592 { 00:22:19.592 "name": "nvmf_tgt_poll_group_001", 00:22:19.592 "admin_qpairs": 0, 00:22:19.592 "io_qpairs": 1, 00:22:19.592 "current_admin_qpairs": 0, 00:22:19.592 "current_io_qpairs": 1, 00:22:19.592 "pending_bdev_io": 0, 00:22:19.592 "completed_nvme_io": 21087, 00:22:19.592 "transports": [ 00:22:19.592 { 00:22:19.592 "trtype": "TCP" 00:22:19.592 } 00:22:19.592 ] 00:22:19.592 }, 00:22:19.592 { 00:22:19.592 "name": "nvmf_tgt_poll_group_002", 00:22:19.592 "admin_qpairs": 0, 00:22:19.592 "io_qpairs": 1, 00:22:19.592 "current_admin_qpairs": 0, 00:22:19.592 "current_io_qpairs": 1, 00:22:19.592 "pending_bdev_io": 0, 00:22:19.592 "completed_nvme_io": 21312, 00:22:19.592 "transports": [ 00:22:19.592 { 00:22:19.592 "trtype": "TCP" 00:22:19.592 } 00:22:19.592 ] 00:22:19.592 }, 00:22:19.592 { 00:22:19.592 "name": "nvmf_tgt_poll_group_003", 00:22:19.592 "admin_qpairs": 0, 00:22:19.592 "io_qpairs": 1, 00:22:19.592 "current_admin_qpairs": 0, 00:22:19.592 "current_io_qpairs": 1, 00:22:19.592 "pending_bdev_io": 0, 00:22:19.592 "completed_nvme_io": 20677, 00:22:19.592 "transports": [ 00:22:19.592 { 00:22:19.592 "trtype": "TCP" 00:22:19.592 } 00:22:19.592 ] 00:22:19.592 } 00:22:19.592 ] 00:22:19.592 }' 00:22:19.850 10:11:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:22:19.850 10:11:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # wc -l 00:22:19.850 10:11:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # count=4 00:22:19.850 10:11:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # [[ 4 -ne 4 ]] 00:22:19.850 10:11:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
target/perf_adq.sh@83 -- # wait 479556
00:22:27.987 Initializing NVMe Controllers
00:22:27.987 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:22:27.987 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4
00:22:27.987 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5
00:22:27.987 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6
00:22:27.987 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7
00:22:27.987 Initialization complete. Launching workers.
00:22:27.987 ========================================================
00:22:27.987                                                                                                               Latency(us)
00:22:27.987 Device Information                                                       :       IOPS      MiB/s    Average        min        max
00:22:27.987 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4:   10911.54      42.62    5867.31    1131.69    7984.85
00:22:27.987 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5:   10784.65      42.13    5940.13    2177.32   43158.65
00:22:27.987 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6:   10590.36      41.37    6044.40    2598.10    8121.50
00:22:27.987 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7:   10698.45      41.79    5984.13    2377.87    8801.16
00:22:27.987 ========================================================
00:22:27.987 Total                                                                  :   42985.00     167.91    5958.29    1131.69   43158.65
00:22:27.987
00:22:27.987 10:11:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@84 -- # nvmftestfini
00:22:27.987 10:11:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup
00:22:27.987 10:11:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync
00:22:27.987 10:11:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:22:27.987 10:11:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e
00:22:27.987 10:11:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20}
00:22:27.987 10:11:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:22:27.987 rmmod nvme_tcp
00:22:27.987 rmmod nvme_fabrics
00:22:27.987 rmmod nvme_keyring
00:22:27.987 10:11:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:22:27.987 10:11:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e
00:22:27.987 10:11:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0
00:22:27.987 10:11:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 479525 ']'
00:22:27.987 10:11:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 479525
00:22:27.987 10:11:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@950 -- # '[' -z 479525 ']'
00:22:27.987 10:11:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # kill -0 479525
00:22:27.987 10:11:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # uname
00:22:27.987 10:11:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:22:27.987 10:11:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 479525
00:22:27.987 10:11:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:22:27.987 10:11:12 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:27.987 10:11:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@968 -- # echo 'killing process with pid 479525' 00:22:27.987 killing process with pid 479525 00:22:27.987 10:11:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@969 -- # kill 479525 00:22:27.987 10:11:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@974 -- # wait 479525 00:22:28.246 10:11:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:28.246 10:11:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:28.246 10:11:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:28.246 10:11:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:28.246 10:11:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:28.246 10:11:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:28.246 10:11:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:28.246 10:11:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:30.777 10:11:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:30.777 10:11:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # adq_reload_driver 00:22:30.777 10:11:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:22:31.035 10:11:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:22:33.015 10:11:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:22:38.279 10:11:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@89 -- # nvmftestinit 00:22:38.279 10:11:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:38.279 10:11:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:38.279 10:11:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:38.279 10:11:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:38.279 10:11:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:38.279 10:11:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:38.279 10:11:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:38.279 10:11:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:38.279 10:11:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:38.279 10:11:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:38.279 10:11:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:22:38.279 10:11:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:38.279 10:11:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:38.279 10:11:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:22:38.279 10:11:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:38.279 10:11:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:38.280 10:11:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:38.280 10:11:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:38.280 10:11:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:38.280 10:11:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:22:38.280 10:11:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:38.280 10:11:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:22:38.280 10:11:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:22:38.280 10:11:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:22:38.280 10:11:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:22:38.280 10:11:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:22:38.280 10:11:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:22:38.280 10:11:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:38.280 10:11:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:38.280 10:11:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:38.280 10:11:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:38.280 10:11:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:38.280 10:11:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:38.280 10:11:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:38.280 10:11:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:38.280 10:11:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:38.280 10:11:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:38.280 10:11:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:38.280 10:11:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:38.280 10:11:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:38.280 10:11:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:38.280 10:11:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:38.280 10:11:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # 
pci_devs=("${e810[@]}") 00:22:38.280 10:11:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:38.280 10:11:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:38.280 10:11:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:22:38.280 Found 0000:84:00.0 (0x8086 - 0x159b) 00:22:38.280 10:11:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:38.280 10:11:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:38.280 10:11:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:38.280 10:11:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:38.280 10:11:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:38.280 10:11:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:38.280 10:11:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:22:38.280 Found 0000:84:00.1 (0x8086 - 0x159b) 00:22:38.280 10:11:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:38.280 10:11:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:38.280 10:11:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:38.280 10:11:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:38.280 10:11:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:38.280 10:11:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:38.280 10:11:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:38.280 10:11:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:38.280 10:11:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:38.280 10:11:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:38.280 10:11:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:38.280 10:11:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:38.280 10:11:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:38.280 10:11:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:38.280 10:11:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:38.280 10:11:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:22:38.280 Found net devices under 0000:84:00.0: cvl_0_0 00:22:38.280 10:11:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:38.280 10:11:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:38.280 10:11:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:38.280 10:11:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:38.280 10:11:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:38.280 10:11:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:38.280 10:11:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:38.280 10:11:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:38.280 10:11:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:22:38.280 Found net devices under 0000:84:00.1: cvl_0_1 00:22:38.280 10:11:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:38.280 10:11:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:38.280 10:11:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:22:38.280 10:11:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:38.280 10:11:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:38.280 10:11:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:38.280 10:11:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:38.280 10:11:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:38.280 10:11:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:38.280 10:11:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:38.280 10:11:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:38.280 10:11:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:38.280 10:11:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:38.280 10:11:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:38.280 10:11:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:38.280 10:11:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:38.280 10:11:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:38.280 10:11:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:38.280 10:11:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:38.280 10:11:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:38.280 10:11:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:38.280 10:11:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:38.280 10:11:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:38.280 10:11:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:38.280 10:11:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:38.280 10:11:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:38.280 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:38.280 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.133 ms 00:22:38.280 00:22:38.280 --- 10.0.0.2 ping statistics --- 00:22:38.280 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:38.280 rtt min/avg/max/mdev = 0.133/0.133/0.133/0.000 ms 00:22:38.280 10:11:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:38.280 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:38.280 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.127 ms 00:22:38.280 00:22:38.280 --- 10.0.0.1 ping statistics --- 00:22:38.280 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:38.280 rtt min/avg/max/mdev = 0.127/0.127/0.127/0.000 ms 00:22:38.280 10:11:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:38.280 10:11:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:22:38.280 10:11:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:38.280 10:11:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:38.280 10:11:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:38.280 10:11:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:38.280 10:11:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:38.280 10:11:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:38.280 10:11:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:38.281 10:11:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@90 -- # adq_configure_driver 00:22:38.281 10:11:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:22:38.281 10:11:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:22:38.281 10:11:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:22:38.281 net.core.busy_poll = 1 00:22:38.281 10:11:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:22:38.281 net.core.busy_read = 1 00:22:38.281 10:11:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:22:38.281 10:11:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:22:38.281 10:11:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:22:38.281 
10:11:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:22:38.281 10:11:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:22:38.281 10:11:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # nvmfappstart -m 0xF --wait-for-rpc 00:22:38.281 10:11:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:38.281 10:11:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:38.281 10:11:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:38.281 10:11:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=482168 00:22:38.281 10:11:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:22:38.281 10:11:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 482168 00:22:38.281 10:11:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@831 -- # '[' -z 482168 ']' 00:22:38.281 10:11:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:38.281 10:11:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:38.281 10:11:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:38.281 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:38.281 10:11:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:38.281 10:11:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:38.281 [2024-07-25 10:11:23.438067] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:22:38.281 [2024-07-25 10:11:23.438169] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:38.538 EAL: No free 2048 kB hugepages reported on node 1 00:22:38.538 [2024-07-25 10:11:23.526590] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:38.538 [2024-07-25 10:11:23.653908] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:38.538 [2024-07-25 10:11:23.653971] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:38.538 [2024-07-25 10:11:23.653988] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:38.538 [2024-07-25 10:11:23.654001] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:38.538 [2024-07-25 10:11:23.654012] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
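Unlike the baseline pass, this run first went through adq_configure_driver (perf_adq.sh@22-38 above): hardware TC offload on, busy polling enabled, the port split into two traffic classes by mqprio, and a flower filter steering NVMe/TCP traffic for 10.0.0.2:4420 into TC 1 in hardware. Condensed from the logged commands (tc resolved to /usr/sbin/tc in the trace):

NS="ip netns exec cvl_0_0_ns_spdk"   # expanded unquoted below as a command prefix
$NS ethtool --offload cvl_0_0 hw-tc-offload on
$NS ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off
sysctl -w net.core.busy_poll=1
sysctl -w net.core.busy_read=1
# num_tc 2 map 0 1 queues 2@0 2@2: TC0 gets 2 queues at offset 0, TC1 gets 2 queues at offset 2
$NS /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
$NS /usr/sbin/tc qdisc add dev cvl_0_0 ingress
$NS /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1
$NS /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0   # symmetric TX/RX queue steering

The effect is visible in the nvmf_get_stats dump below: with placement-id based grouping the four I/O qpairs land on two poll groups instead of one per group.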
00:22:38.538 [2024-07-25 10:11:23.654073] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:22:38.538 [2024-07-25 10:11:23.654132] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:22:38.538 [2024-07-25 10:11:23.654198] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:22:38.538 [2024-07-25 10:11:23.654195] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3
00:22:39.909 10:11:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:22:39.909 10:11:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # return 0
00:22:39.909 10:11:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:22:39.909 10:11:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@730 -- # xtrace_disable
00:22:39.909 10:11:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:22:39.909 10:11:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:22:39.909 10:11:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # adq_configure_nvmf_target 1
00:22:39.909 10:11:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl
00:22:39.909 10:11:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name
00:22:39.909 10:11:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:39.909 10:11:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:22:39.909 10:11:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:39.909 10:11:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix
00:22:39.909 10:11:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix
00:22:39.909 10:11:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:39.909 10:11:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:22:39.909 10:11:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:39.909 10:11:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init
00:22:39.909 10:11:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:39.909 10:11:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:22:39.909 10:11:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:39.909 10:11:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1
00:22:39.909 10:11:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:39.909 10:11:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:22:39.909 [2024-07-25 10:11:24.949974] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:22:39.909 10:11:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:39.909 10:11:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1
00:22:39.909 10:11:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:39.909 10:11:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:22:39.909 Malloc1
00:22:39.909 10:11:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:39.909 10:11:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:22:39.909 10:11:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:39.909 10:11:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:22:39.909 10:11:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:39.909 10:11:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
00:22:39.909 10:11:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:39.909 10:11:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:22:39.909 10:11:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:39.909 10:11:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:22:39.909 10:11:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:39.909 10:11:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:22:39.909 [2024-07-25 10:11:25.003509] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:22:39.909 10:11:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:39.909 10:11:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@96 -- # perfpid=482445
00:22:39.909 10:11:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # sleep 2
00:22:39.909 10:11:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
00:22:39.909 EAL: No free 2048 kB hugepages reported on node 1
00:22:42.433 10:11:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # rpc_cmd nvmf_get_stats
00:22:42.433 10:11:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable
00:22:42.433 10:11:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:22:42.433 10:11:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:22:42.433 10:11:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmf_stats='{
00:22:42.433 "tick_rate": 2700000000,
00:22:42.433 "poll_groups": [
00:22:42.433 {
00:22:42.433 "name": "nvmf_tgt_poll_group_000",
00:22:42.433 "admin_qpairs": 1,
00:22:42.433 "io_qpairs": 1,
00:22:42.433 "current_admin_qpairs": 1,
00:22:42.433 "current_io_qpairs": 1,
00:22:42.433 "pending_bdev_io": 0,
00:22:42.433 "completed_nvme_io": 25453,
00:22:42.433 "transports": [
00:22:42.433 {
00:22:42.433 "trtype": "TCP"
00:22:42.433 }
00:22:42.433 ]
00:22:42.433 },
00:22:42.433 {
00:22:42.433 "name": "nvmf_tgt_poll_group_001",
00:22:42.433 "admin_qpairs": 0,
00:22:42.433 "io_qpairs": 3,
00:22:42.434 "current_admin_qpairs": 0,
00:22:42.434 "current_io_qpairs": 3,
00:22:42.434 "pending_bdev_io": 0,
00:22:42.434 "completed_nvme_io": 27657,
00:22:42.434 "transports": [
00:22:42.434 {
00:22:42.434 "trtype": "TCP"
00:22:42.434 }
00:22:42.434 ]
00:22:42.434 },
00:22:42.434 {
00:22:42.434 "name": "nvmf_tgt_poll_group_002",
00:22:42.434 "admin_qpairs": 0,
00:22:42.434 "io_qpairs": 0,
00:22:42.434 "current_admin_qpairs": 0,
00:22:42.434 "current_io_qpairs": 0,
00:22:42.434 "pending_bdev_io": 0,
00:22:42.434 "completed_nvme_io": 0,
00:22:42.434 "transports": [
00:22:42.434 {
00:22:42.434 "trtype": "TCP"
00:22:42.434 }
00:22:42.434 ]
00:22:42.434 },
00:22:42.434 {
00:22:42.434 "name": "nvmf_tgt_poll_group_003",
00:22:42.434 "admin_qpairs": 0,
00:22:42.434 "io_qpairs": 0,
00:22:42.434 "current_admin_qpairs": 0,
00:22:42.434 "current_io_qpairs": 0,
00:22:42.434 "pending_bdev_io": 0,
00:22:42.434 "completed_nvme_io": 0,
00:22:42.434 "transports": [
00:22:42.434 {
00:22:42.434 "trtype": "TCP"
00:22:42.434 }
00:22:42.434 ]
00:22:42.434 }
00:22:42.434 ]
00:22:42.434 }'
00:22:42.434 10:11:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length'
00:22:42.434 10:11:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # wc -l
00:22:42.434 10:11:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # count=2
00:22:42.434 10:11:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # [[ 2 -lt 2 ]]
00:22:42.434 10:11:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@106 -- # wait 482445
00:22:50.537 Initializing NVMe Controllers
00:22:50.537 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:22:50.537 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4
00:22:50.537 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5
00:22:50.537 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6
00:22:50.537 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7
00:22:50.537 Initialization complete. Launching workers.
00:22:50.537 ========================================================
00:22:50.537 Latency(us)
00:22:50.537 Device Information : IOPS MiB/s Average min max
00:22:50.537 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 13123.44 51.26 4876.91 1647.35 7112.23
00:22:50.537 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 4692.34 18.33 13643.03 1728.14 58948.53
00:22:50.537 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 5272.44 20.60 12179.54 2221.41 64313.73
00:22:50.537 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 4322.65 16.89 14808.63 1911.90 59334.43
00:22:50.537 ========================================================
00:22:50.537 Total : 27410.86 107.07 9348.41 1647.35 64313.73
00:22:50.537
00:22:50.537 10:11:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmftestfini
00:22:50.537 10:11:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup
00:22:50.537 10:11:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync
00:22:50.537 10:11:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:22:50.537 10:11:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e
00:22:50.537 10:11:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20}
00:22:50.537 10:11:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:22:50.537 rmmod nvme_tcp
00:22:50.537 rmmod nvme_fabrics
00:22:50.537 rmmod nvme_keyring
00:22:50.537 10:11:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:22:50.537 10:11:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e
00:22:50.537 10:11:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0
00:22:50.537 10:11:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 482168 ']'
00:22:50.537 10:11:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 482168
00:22:50.537 10:11:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@950 -- # '[' -z 482168 ']'
00:22:50.537 10:11:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # kill -0 482168
00:22:50.537 10:11:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # uname
00:22:50.537 10:11:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:22:50.537 10:11:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 482168
00:22:50.537 10:11:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:22:50.537 10:11:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:22:50.537 10:11:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@968 -- # echo 'killing process with pid 482168'
00:22:50.537 killing process with pid 482168
00:22:50.537 10:11:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@969 -- # kill 482168
00:22:50.537 10:11:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@974 -- # wait 482168
00:22:50.537 10:11:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:22:50.537 10:11:35
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:50.537 10:11:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:50.537 10:11:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:50.537 10:11:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:50.537 10:11:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:50.537 10:11:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:50.537 10:11:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:53.820 10:11:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:53.820 10:11:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:22:53.820 00:22:53.820 real 0m46.842s 00:22:53.820 user 2m44.907s 00:22:53.820 sys 0m10.335s 00:22:53.820 10:11:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:53.820 10:11:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:53.820 ************************************ 00:22:53.820 END TEST nvmf_perf_adq 00:22:53.820 ************************************ 00:22:53.820 10:11:38 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@63 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:22:53.820 10:11:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:22:53.820 10:11:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:53.820 10:11:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:53.820 ************************************ 00:22:53.820 START TEST nvmf_shutdown 00:22:53.820 ************************************ 00:22:53.820 10:11:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:22:53.820 * Looking for test storage... 
00:22:53.820 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:53.820 10:11:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:53.820 10:11:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:22:53.820 10:11:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:53.820 10:11:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:53.820 10:11:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:53.820 10:11:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:53.820 10:11:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:53.820 10:11:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:53.820 10:11:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:53.820 10:11:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:53.820 10:11:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:53.820 10:11:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:53.820 10:11:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:22:53.820 10:11:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:22:53.820 10:11:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:53.820 10:11:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:53.820 10:11:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:53.820 10:11:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:53.820 10:11:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:53.820 10:11:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:53.820 10:11:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:53.820 10:11:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:53.820 10:11:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:53.820 10:11:38 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:53.820 10:11:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:53.820 10:11:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:22:53.820 10:11:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:53.820 10:11:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@47 -- # : 0 00:22:53.820 10:11:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:53.820 10:11:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:53.820 10:11:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:53.820 10:11:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:53.820 10:11:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:53.820 10:11:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:53.820 10:11:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:53.820 10:11:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:53.820 10:11:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:53.820 10:11:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:53.820 10:11:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:22:53.820 10:11:38 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:22:53.821 10:11:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:53.821 10:11:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:22:53.821 ************************************ 00:22:53.821 START TEST nvmf_shutdown_tc1 00:22:53.821 ************************************ 00:22:53.821 10:11:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc1 00:22:53.821 10:11:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@74 -- # starttarget 00:22:53.821 10:11:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@15 -- # nvmftestinit 00:22:53.821 10:11:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:53.821 10:11:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:53.821 10:11:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:53.821 10:11:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:53.821 10:11:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:53.821 10:11:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:53.821 10:11:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:53.821 10:11:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:53.821 10:11:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:53.821 10:11:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:53.821 10:11:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@285 -- # xtrace_disable 00:22:53.821 10:11:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:56.388 10:11:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:56.388 10:11:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # pci_devs=() 00:22:56.388 10:11:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:56.388 10:11:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:56.388 10:11:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:56.388 10:11:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:56.388 10:11:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:56.388 10:11:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # net_devs=() 00:22:56.388 10:11:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@295 -- # local -ga net_devs 00:22:56.388 10:11:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # e810=() 00:22:56.388 10:11:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # local -ga e810 00:22:56.388 10:11:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # x722=() 00:22:56.388 10:11:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # local -ga x722 00:22:56.388 10:11:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # mlx=() 00:22:56.388 10:11:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # local -ga mlx 00:22:56.388 10:11:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:56.388 10:11:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:56.388 10:11:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:56.388 10:11:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:56.388 10:11:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:56.388 10:11:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:56.388 10:11:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:56.388 10:11:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:56.388 10:11:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:56.388 10:11:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:56.388 10:11:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:56.388 10:11:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:56.388 10:11:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:56.388 10:11:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:56.388 10:11:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:56.388 10:11:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:56.388 10:11:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:56.388 10:11:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:56.389 10:11:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:22:56.389 Found 0000:84:00.0 (0x8086 - 0x159b) 00:22:56.389 10:11:41 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:56.389 10:11:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:56.389 10:11:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:56.389 10:11:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:56.389 10:11:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:56.389 10:11:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:56.389 10:11:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:22:56.389 Found 0000:84:00.1 (0x8086 - 0x159b) 00:22:56.389 10:11:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:56.389 10:11:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:56.389 10:11:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:56.389 10:11:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:56.389 10:11:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:56.389 10:11:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:56.389 10:11:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:56.389 10:11:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:56.389 10:11:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:56.389 10:11:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:56.389 10:11:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:56.389 10:11:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:56.389 10:11:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:56.389 10:11:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:56.389 10:11:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:56.389 10:11:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:22:56.389 Found net devices under 0000:84:00.0: cvl_0_0 00:22:56.389 10:11:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:56.389 10:11:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:56.389 10:11:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:56.389 10:11:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:56.389 10:11:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:56.389 10:11:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:56.389 10:11:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:56.389 10:11:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:56.389 10:11:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:22:56.389 Found net devices under 0000:84:00.1: cvl_0_1 00:22:56.389 10:11:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:56.389 10:11:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:56.389 10:11:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # is_hw=yes 00:22:56.389 10:11:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:56.389 10:11:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:56.389 10:11:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:56.389 10:11:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:56.389 10:11:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:56.389 10:11:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:56.389 10:11:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:56.389 10:11:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:56.389 10:11:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:56.389 10:11:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:56.389 10:11:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:56.389 10:11:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:56.389 10:11:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:56.389 10:11:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:56.389 10:11:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:56.389 10:11:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:56.389 10:11:41 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:56.389 10:11:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:56.389 10:11:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:56.389 10:11:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:56.389 10:11:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:56.389 10:11:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:56.389 10:11:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:56.389 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:56.389 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.206 ms 00:22:56.389 00:22:56.389 --- 10.0.0.2 ping statistics --- 00:22:56.389 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:56.389 rtt min/avg/max/mdev = 0.206/0.206/0.206/0.000 ms 00:22:56.389 10:11:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:56.389 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:56.389 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.145 ms 00:22:56.389 00:22:56.389 --- 10.0.0.1 ping statistics --- 00:22:56.389 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:56.389 rtt min/avg/max/mdev = 0.145/0.145/0.145/0.000 ms 00:22:56.389 10:11:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:56.389 10:11:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # return 0 00:22:56.389 10:11:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:56.389 10:11:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:56.389 10:11:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:56.389 10:11:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:56.389 10:11:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:56.389 10:11:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:56.389 10:11:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:56.389 10:11:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:22:56.389 10:11:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:56.389 10:11:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:56.389 10:11:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 
-- # set +x 00:22:56.389 10:11:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # nvmfpid=485746 00:22:56.389 10:11:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:22:56.389 10:11:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # waitforlisten 485746 00:22:56.389 10:11:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # '[' -z 485746 ']' 00:22:56.389 10:11:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:56.389 10:11:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:56.389 10:11:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:56.389 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:56.390 10:11:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:56.390 10:11:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:56.390 [2024-07-25 10:11:41.390149] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:22:56.390 [2024-07-25 10:11:41.390254] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:56.390 EAL: No free 2048 kB hugepages reported on node 1 00:22:56.390 [2024-07-25 10:11:41.476776] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:56.647 [2024-07-25 10:11:41.601364] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:56.647 [2024-07-25 10:11:41.601441] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:56.647 [2024-07-25 10:11:41.601459] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:56.647 [2024-07-25 10:11:41.601474] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:56.647 [2024-07-25 10:11:41.601497] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
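Note: the nvmf_tcp_init sequence above is nvmf/common.sh's physical-NIC bring-up. Both e810 ports sit in the same host, so the target port (cvl_0_0) is moved into a private network namespace while the initiator port (cvl_0_1) stays in the root namespace, so that 10.0.0.1 -> 10.0.0.2 traffic crosses the physical link rather than the local loopback path. Condensed from the commands logged above:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

Every target-side command is then prefixed with "ip netns exec cvl_0_0_ns_spdk", which is why the target that just initialized above was launched as: ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E.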
00:22:56.647 [2024-07-25 10:11:41.601580] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:56.647 [2024-07-25 10:11:41.601638] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:22:56.647 [2024-07-25 10:11:41.601664] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:22:56.647 [2024-07-25 10:11:41.601668] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:56.647 10:11:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:56.647 10:11:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # return 0 00:22:56.647 10:11:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:56.647 10:11:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:56.647 10:11:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:56.647 10:11:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:56.647 10:11:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:56.647 10:11:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:56.647 10:11:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:56.647 [2024-07-25 10:11:41.776991] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:56.647 10:11:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:56.647 10:11:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:22:56.647 10:11:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:22:56.647 10:11:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:56.647 10:11:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:56.647 10:11:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:22:56.647 10:11:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:56.647 10:11:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:22:56.647 10:11:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:56.647 10:11:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:22:56.647 10:11:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:56.647 10:11:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:22:56.647 10:11:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in 
"${num_subsystems[@]}" 00:22:56.647 10:11:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:22:56.647 10:11:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:56.647 10:11:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:22:56.647 10:11:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:56.647 10:11:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:22:56.647 10:11:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:56.647 10:11:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:22:56.647 10:11:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:56.647 10:11:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:22:56.647 10:11:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:56.647 10:11:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:22:56.905 10:11:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:22:56.905 10:11:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:22:56.905 10:11:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@35 -- # rpc_cmd 00:22:56.905 10:11:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:56.905 10:11:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:56.905 Malloc1 00:22:56.905 [2024-07-25 10:11:41.867058] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:56.905 Malloc2 00:22:56.905 Malloc3 00:22:56.905 Malloc4 00:22:56.905 Malloc5 00:22:57.163 Malloc6 00:22:57.163 Malloc7 00:22:57.163 Malloc8 00:22:57.163 Malloc9 00:22:57.163 Malloc10 00:22:57.163 10:11:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:57.163 10:11:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:22:57.163 10:11:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:57.163 10:11:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:57.422 10:11:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # perfpid=485926 00:22:57.422 10:11:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # waitforlisten 485926 /var/tmp/bdevperf.sock 00:22:57.422 10:11:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # '[' -z 485926 ']' 00:22:57.422 10:11:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 
0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:22:57.422 10:11:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:57.422 10:11:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:22:57.422 10:11:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:57.422 10:11:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:22:57.422 10:11:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:57.422 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:57.422 10:11:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:22:57.422 10:11:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:57.422 10:11:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:57.422 10:11:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:57.422 10:11:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:57.422 { 00:22:57.422 "params": { 00:22:57.422 "name": "Nvme$subsystem", 00:22:57.422 "trtype": "$TEST_TRANSPORT", 00:22:57.422 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:57.422 "adrfam": "ipv4", 00:22:57.422 "trsvcid": "$NVMF_PORT", 00:22:57.422 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:57.422 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:57.422 "hdgst": ${hdgst:-false}, 00:22:57.422 "ddgst": ${ddgst:-false} 00:22:57.422 }, 00:22:57.422 "method": "bdev_nvme_attach_controller" 00:22:57.422 } 00:22:57.422 EOF 00:22:57.422 )") 00:22:57.422 10:11:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:57.422 10:11:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:57.422 10:11:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:57.422 { 00:22:57.422 "params": { 00:22:57.422 "name": "Nvme$subsystem", 00:22:57.422 "trtype": "$TEST_TRANSPORT", 00:22:57.422 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:57.422 "adrfam": "ipv4", 00:22:57.422 "trsvcid": "$NVMF_PORT", 00:22:57.422 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:57.422 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:57.422 "hdgst": ${hdgst:-false}, 00:22:57.422 "ddgst": ${ddgst:-false} 00:22:57.422 }, 00:22:57.422 "method": "bdev_nvme_attach_controller" 00:22:57.422 } 00:22:57.422 EOF 00:22:57.422 )") 00:22:57.422 10:11:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:57.422 10:11:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:57.422 10:11:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:57.422 { 00:22:57.422 "params": { 00:22:57.422 "name": "Nvme$subsystem", 
00:22:57.422 "trtype": "$TEST_TRANSPORT", 00:22:57.422 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:57.422 "adrfam": "ipv4", 00:22:57.422 "trsvcid": "$NVMF_PORT", 00:22:57.422 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:57.422 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:57.422 "hdgst": ${hdgst:-false}, 00:22:57.422 "ddgst": ${ddgst:-false} 00:22:57.422 }, 00:22:57.422 "method": "bdev_nvme_attach_controller" 00:22:57.422 } 00:22:57.422 EOF 00:22:57.422 )") 00:22:57.422 10:11:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:57.422 10:11:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:57.422 10:11:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:57.422 { 00:22:57.422 "params": { 00:22:57.422 "name": "Nvme$subsystem", 00:22:57.422 "trtype": "$TEST_TRANSPORT", 00:22:57.422 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:57.422 "adrfam": "ipv4", 00:22:57.422 "trsvcid": "$NVMF_PORT", 00:22:57.422 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:57.422 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:57.422 "hdgst": ${hdgst:-false}, 00:22:57.422 "ddgst": ${ddgst:-false} 00:22:57.422 }, 00:22:57.422 "method": "bdev_nvme_attach_controller" 00:22:57.422 } 00:22:57.422 EOF 00:22:57.422 )") 00:22:57.422 10:11:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:57.422 10:11:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:57.422 10:11:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:57.422 { 00:22:57.422 "params": { 00:22:57.422 "name": "Nvme$subsystem", 00:22:57.422 "trtype": "$TEST_TRANSPORT", 00:22:57.422 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:57.422 "adrfam": "ipv4", 00:22:57.422 "trsvcid": "$NVMF_PORT", 00:22:57.422 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:57.422 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:57.422 "hdgst": ${hdgst:-false}, 00:22:57.422 "ddgst": ${ddgst:-false} 00:22:57.422 }, 00:22:57.422 "method": "bdev_nvme_attach_controller" 00:22:57.422 } 00:22:57.422 EOF 00:22:57.422 )") 00:22:57.422 10:11:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:57.422 10:11:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:57.422 10:11:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:57.422 { 00:22:57.422 "params": { 00:22:57.422 "name": "Nvme$subsystem", 00:22:57.422 "trtype": "$TEST_TRANSPORT", 00:22:57.422 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:57.422 "adrfam": "ipv4", 00:22:57.422 "trsvcid": "$NVMF_PORT", 00:22:57.422 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:57.422 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:57.422 "hdgst": ${hdgst:-false}, 00:22:57.422 "ddgst": ${ddgst:-false} 00:22:57.422 }, 00:22:57.422 "method": "bdev_nvme_attach_controller" 00:22:57.422 } 00:22:57.422 EOF 00:22:57.422 )") 00:22:57.422 10:11:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:57.422 10:11:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:57.422 10:11:42 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:57.422 { 00:22:57.422 "params": { 00:22:57.422 "name": "Nvme$subsystem", 00:22:57.422 "trtype": "$TEST_TRANSPORT", 00:22:57.422 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:57.422 "adrfam": "ipv4", 00:22:57.422 "trsvcid": "$NVMF_PORT", 00:22:57.422 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:57.422 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:57.422 "hdgst": ${hdgst:-false}, 00:22:57.422 "ddgst": ${ddgst:-false} 00:22:57.422 }, 00:22:57.422 "method": "bdev_nvme_attach_controller" 00:22:57.422 } 00:22:57.422 EOF 00:22:57.422 )") 00:22:57.422 10:11:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:57.422 10:11:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:57.422 10:11:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:57.422 { 00:22:57.423 "params": { 00:22:57.423 "name": "Nvme$subsystem", 00:22:57.423 "trtype": "$TEST_TRANSPORT", 00:22:57.423 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:57.423 "adrfam": "ipv4", 00:22:57.423 "trsvcid": "$NVMF_PORT", 00:22:57.423 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:57.423 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:57.423 "hdgst": ${hdgst:-false}, 00:22:57.423 "ddgst": ${ddgst:-false} 00:22:57.423 }, 00:22:57.423 "method": "bdev_nvme_attach_controller" 00:22:57.423 } 00:22:57.423 EOF 00:22:57.423 )") 00:22:57.423 10:11:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:57.423 10:11:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:57.423 10:11:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:57.423 { 00:22:57.423 "params": { 00:22:57.423 "name": "Nvme$subsystem", 00:22:57.423 "trtype": "$TEST_TRANSPORT", 00:22:57.423 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:57.423 "adrfam": "ipv4", 00:22:57.423 "trsvcid": "$NVMF_PORT", 00:22:57.423 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:57.423 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:57.423 "hdgst": ${hdgst:-false}, 00:22:57.423 "ddgst": ${ddgst:-false} 00:22:57.423 }, 00:22:57.423 "method": "bdev_nvme_attach_controller" 00:22:57.423 } 00:22:57.423 EOF 00:22:57.423 )") 00:22:57.423 10:11:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:22:57.423 10:11:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:57.423 10:11:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:57.423 { 00:22:57.423 "params": { 00:22:57.423 "name": "Nvme$subsystem", 00:22:57.423 "trtype": "$TEST_TRANSPORT", 00:22:57.423 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:57.423 "adrfam": "ipv4", 00:22:57.423 "trsvcid": "$NVMF_PORT", 00:22:57.423 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:57.423 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:57.423 "hdgst": ${hdgst:-false}, 00:22:57.423 "ddgst": ${ddgst:-false} 00:22:57.423 }, 00:22:57.423 "method": "bdev_nvme_attach_controller" 00:22:57.423 } 00:22:57.423 EOF 00:22:57.423 )") 00:22:57.423 10:11:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@554 -- # cat 00:22:57.423 10:11:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 00:22:57.423 10:11:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:22:57.423 10:11:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:22:57.423 "params": { 00:22:57.423 "name": "Nvme1", 00:22:57.423 "trtype": "tcp", 00:22:57.423 "traddr": "10.0.0.2", 00:22:57.423 "adrfam": "ipv4", 00:22:57.423 "trsvcid": "4420", 00:22:57.423 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:57.423 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:57.423 "hdgst": false, 00:22:57.423 "ddgst": false 00:22:57.423 }, 00:22:57.423 "method": "bdev_nvme_attach_controller" 00:22:57.423 },{ 00:22:57.423 "params": { 00:22:57.423 "name": "Nvme2", 00:22:57.423 "trtype": "tcp", 00:22:57.423 "traddr": "10.0.0.2", 00:22:57.423 "adrfam": "ipv4", 00:22:57.423 "trsvcid": "4420", 00:22:57.423 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:22:57.423 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:22:57.423 "hdgst": false, 00:22:57.423 "ddgst": false 00:22:57.423 }, 00:22:57.423 "method": "bdev_nvme_attach_controller" 00:22:57.423 },{ 00:22:57.423 "params": { 00:22:57.423 "name": "Nvme3", 00:22:57.423 "trtype": "tcp", 00:22:57.423 "traddr": "10.0.0.2", 00:22:57.423 "adrfam": "ipv4", 00:22:57.423 "trsvcid": "4420", 00:22:57.423 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:22:57.423 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:22:57.423 "hdgst": false, 00:22:57.423 "ddgst": false 00:22:57.423 }, 00:22:57.423 "method": "bdev_nvme_attach_controller" 00:22:57.423 },{ 00:22:57.423 "params": { 00:22:57.423 "name": "Nvme4", 00:22:57.423 "trtype": "tcp", 00:22:57.423 "traddr": "10.0.0.2", 00:22:57.423 "adrfam": "ipv4", 00:22:57.423 "trsvcid": "4420", 00:22:57.423 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:22:57.423 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:22:57.423 "hdgst": false, 00:22:57.423 "ddgst": false 00:22:57.423 }, 00:22:57.423 "method": "bdev_nvme_attach_controller" 00:22:57.423 },{ 00:22:57.423 "params": { 00:22:57.423 "name": "Nvme5", 00:22:57.423 "trtype": "tcp", 00:22:57.423 "traddr": "10.0.0.2", 00:22:57.423 "adrfam": "ipv4", 00:22:57.423 "trsvcid": "4420", 00:22:57.423 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:22:57.423 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:22:57.423 "hdgst": false, 00:22:57.423 "ddgst": false 00:22:57.423 }, 00:22:57.423 "method": "bdev_nvme_attach_controller" 00:22:57.423 },{ 00:22:57.423 "params": { 00:22:57.423 "name": "Nvme6", 00:22:57.423 "trtype": "tcp", 00:22:57.423 "traddr": "10.0.0.2", 00:22:57.423 "adrfam": "ipv4", 00:22:57.423 "trsvcid": "4420", 00:22:57.423 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:22:57.423 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:22:57.423 "hdgst": false, 00:22:57.423 "ddgst": false 00:22:57.423 }, 00:22:57.423 "method": "bdev_nvme_attach_controller" 00:22:57.423 },{ 00:22:57.423 "params": { 00:22:57.423 "name": "Nvme7", 00:22:57.423 "trtype": "tcp", 00:22:57.423 "traddr": "10.0.0.2", 00:22:57.423 "adrfam": "ipv4", 00:22:57.423 "trsvcid": "4420", 00:22:57.423 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:22:57.423 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:22:57.423 "hdgst": false, 00:22:57.423 "ddgst": false 00:22:57.423 }, 00:22:57.423 "method": "bdev_nvme_attach_controller" 00:22:57.423 },{ 00:22:57.423 "params": { 00:22:57.423 "name": "Nvme8", 00:22:57.423 "trtype": "tcp", 00:22:57.423 "traddr": "10.0.0.2", 00:22:57.423 "adrfam": "ipv4", 
00:22:57.423 "trsvcid": "4420", 00:22:57.423 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:22:57.423 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:22:57.423 "hdgst": false, 00:22:57.423 "ddgst": false 00:22:57.423 }, 00:22:57.423 "method": "bdev_nvme_attach_controller" 00:22:57.423 },{ 00:22:57.423 "params": { 00:22:57.423 "name": "Nvme9", 00:22:57.423 "trtype": "tcp", 00:22:57.423 "traddr": "10.0.0.2", 00:22:57.423 "adrfam": "ipv4", 00:22:57.423 "trsvcid": "4420", 00:22:57.423 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:22:57.423 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:22:57.423 "hdgst": false, 00:22:57.423 "ddgst": false 00:22:57.423 }, 00:22:57.423 "method": "bdev_nvme_attach_controller" 00:22:57.423 },{ 00:22:57.423 "params": { 00:22:57.423 "name": "Nvme10", 00:22:57.423 "trtype": "tcp", 00:22:57.423 "traddr": "10.0.0.2", 00:22:57.423 "adrfam": "ipv4", 00:22:57.423 "trsvcid": "4420", 00:22:57.423 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:22:57.423 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:22:57.423 "hdgst": false, 00:22:57.423 "ddgst": false 00:22:57.423 }, 00:22:57.423 "method": "bdev_nvme_attach_controller" 00:22:57.423 }' 00:22:57.424 [2024-07-25 10:11:42.389091] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:22:57.424 [2024-07-25 10:11:42.389184] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:22:57.424 EAL: No free 2048 kB hugepages reported on node 1 00:22:57.424 [2024-07-25 10:11:42.452317] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:57.424 [2024-07-25 10:11:42.561986] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:59.320 10:11:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:59.320 10:11:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # return 0 00:22:59.320 10:11:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:22:59.320 10:11:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:59.320 10:11:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:22:59.320 10:11:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:59.320 10:11:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@83 -- # kill -9 485926 00:22:59.320 10:11:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:22:59.320 10:11:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@87 -- # sleep 1 00:23:00.252 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 485926 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:23:00.252 10:11:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # kill -0 485746 00:23:00.252 10:11:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:23:00.252 10:11:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:23:00.252 10:11:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:23:00.252 10:11:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:23:00.252 10:11:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:00.252 10:11:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:00.252 { 00:23:00.252 "params": { 00:23:00.252 "name": "Nvme$subsystem", 00:23:00.252 "trtype": "$TEST_TRANSPORT", 00:23:00.252 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:00.252 "adrfam": "ipv4", 00:23:00.252 "trsvcid": "$NVMF_PORT", 00:23:00.252 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:00.252 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:00.252 "hdgst": ${hdgst:-false}, 00:23:00.252 "ddgst": ${ddgst:-false} 00:23:00.252 }, 00:23:00.252 "method": "bdev_nvme_attach_controller" 00:23:00.252 } 00:23:00.252 EOF 00:23:00.252 )") 00:23:00.252 10:11:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:00.252 10:11:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:00.252 10:11:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:00.252 { 00:23:00.252 "params": { 00:23:00.252 "name": "Nvme$subsystem", 00:23:00.252 "trtype": "$TEST_TRANSPORT", 00:23:00.252 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:00.252 "adrfam": "ipv4", 00:23:00.252 "trsvcid": "$NVMF_PORT", 00:23:00.252 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:00.252 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:00.252 "hdgst": ${hdgst:-false}, 00:23:00.252 "ddgst": ${ddgst:-false} 00:23:00.252 }, 00:23:00.252 "method": "bdev_nvme_attach_controller" 00:23:00.252 } 00:23:00.252 EOF 00:23:00.252 )") 00:23:00.252 10:11:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:00.511 10:11:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:00.511 10:11:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:00.511 { 00:23:00.511 "params": { 00:23:00.511 "name": "Nvme$subsystem", 00:23:00.511 "trtype": "$TEST_TRANSPORT", 00:23:00.511 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:00.511 "adrfam": "ipv4", 00:23:00.511 "trsvcid": "$NVMF_PORT", 00:23:00.511 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:00.511 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:00.511 "hdgst": ${hdgst:-false}, 00:23:00.511 "ddgst": ${ddgst:-false} 00:23:00.511 }, 00:23:00.511 "method": "bdev_nvme_attach_controller" 00:23:00.511 } 00:23:00.511 EOF 00:23:00.511 )") 00:23:00.511 10:11:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:00.511 10:11:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:00.511 10:11:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:00.511 { 00:23:00.511 "params": { 00:23:00.511 "name": "Nvme$subsystem", 00:23:00.511 "trtype": "$TEST_TRANSPORT", 00:23:00.511 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:00.511 "adrfam": "ipv4", 00:23:00.511 "trsvcid": "$NVMF_PORT", 00:23:00.511 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:00.511 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:00.511 "hdgst": ${hdgst:-false}, 00:23:00.511 "ddgst": ${ddgst:-false} 00:23:00.511 }, 00:23:00.511 "method": "bdev_nvme_attach_controller" 00:23:00.511 } 00:23:00.511 EOF 00:23:00.511 )") 00:23:00.511 10:11:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:00.511 10:11:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:00.511 10:11:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:00.511 { 00:23:00.511 "params": { 00:23:00.511 "name": "Nvme$subsystem", 00:23:00.511 "trtype": "$TEST_TRANSPORT", 00:23:00.511 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:00.511 "adrfam": "ipv4", 00:23:00.511 "trsvcid": "$NVMF_PORT", 00:23:00.511 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:00.511 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:00.511 "hdgst": ${hdgst:-false}, 00:23:00.511 "ddgst": ${ddgst:-false} 00:23:00.511 }, 00:23:00.511 "method": "bdev_nvme_attach_controller" 00:23:00.511 } 00:23:00.511 EOF 00:23:00.511 )") 00:23:00.511 10:11:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:00.511 10:11:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:00.511 10:11:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:00.511 { 00:23:00.511 "params": { 00:23:00.511 "name": "Nvme$subsystem", 00:23:00.511 "trtype": "$TEST_TRANSPORT", 00:23:00.511 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:00.511 "adrfam": "ipv4", 00:23:00.511 "trsvcid": "$NVMF_PORT", 00:23:00.511 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:00.511 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:00.511 "hdgst": ${hdgst:-false}, 00:23:00.511 "ddgst": ${ddgst:-false} 00:23:00.511 }, 00:23:00.511 "method": "bdev_nvme_attach_controller" 00:23:00.511 } 00:23:00.511 EOF 00:23:00.511 )") 00:23:00.511 10:11:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:00.511 10:11:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:00.511 10:11:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:00.511 { 00:23:00.511 "params": { 00:23:00.511 "name": "Nvme$subsystem", 00:23:00.511 "trtype": "$TEST_TRANSPORT", 00:23:00.511 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:00.511 "adrfam": "ipv4", 00:23:00.511 "trsvcid": "$NVMF_PORT", 00:23:00.511 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:00.511 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:00.511 "hdgst": ${hdgst:-false}, 00:23:00.511 "ddgst": ${ddgst:-false} 00:23:00.511 }, 00:23:00.511 "method": "bdev_nvme_attach_controller" 00:23:00.511 } 00:23:00.511 EOF 00:23:00.511 )") 00:23:00.511 10:11:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:00.511 10:11:45 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:00.511 10:11:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:00.511 { 00:23:00.511 "params": { 00:23:00.511 "name": "Nvme$subsystem", 00:23:00.511 "trtype": "$TEST_TRANSPORT", 00:23:00.511 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:00.511 "adrfam": "ipv4", 00:23:00.511 "trsvcid": "$NVMF_PORT", 00:23:00.511 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:00.511 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:00.511 "hdgst": ${hdgst:-false}, 00:23:00.511 "ddgst": ${ddgst:-false} 00:23:00.511 }, 00:23:00.511 "method": "bdev_nvme_attach_controller" 00:23:00.511 } 00:23:00.511 EOF 00:23:00.511 )") 00:23:00.511 10:11:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:00.511 10:11:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:00.511 10:11:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:00.511 { 00:23:00.511 "params": { 00:23:00.511 "name": "Nvme$subsystem", 00:23:00.511 "trtype": "$TEST_TRANSPORT", 00:23:00.511 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:00.511 "adrfam": "ipv4", 00:23:00.511 "trsvcid": "$NVMF_PORT", 00:23:00.511 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:00.511 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:00.511 "hdgst": ${hdgst:-false}, 00:23:00.511 "ddgst": ${ddgst:-false} 00:23:00.511 }, 00:23:00.511 "method": "bdev_nvme_attach_controller" 00:23:00.511 } 00:23:00.511 EOF 00:23:00.511 )") 00:23:00.511 10:11:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:00.511 10:11:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:00.511 10:11:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:00.511 { 00:23:00.511 "params": { 00:23:00.511 "name": "Nvme$subsystem", 00:23:00.511 "trtype": "$TEST_TRANSPORT", 00:23:00.511 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:00.511 "adrfam": "ipv4", 00:23:00.511 "trsvcid": "$NVMF_PORT", 00:23:00.511 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:00.511 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:00.511 "hdgst": ${hdgst:-false}, 00:23:00.511 "ddgst": ${ddgst:-false} 00:23:00.511 }, 00:23:00.511 "method": "bdev_nvme_attach_controller" 00:23:00.511 } 00:23:00.511 EOF 00:23:00.511 )") 00:23:00.511 10:11:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:00.511 10:11:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 
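The block above is the xtrace of nvmf/common.sh's gen_nvmf_target_json helper expanding once per subsystem id (1 through 10): each @534/@554 pair appends one heredoc JSON fragment to the config array, and the trace continues below with IFS=, and the printf that shows the comma-joined result. A condensed sketch of the pattern, reconstructed from this trace rather than copied from the source (the outer "subsystems"/"bdev" wrapper is not visible here and is an assumption):

gen_nvmf_target_json() {
    local subsystem config=()
    for subsystem in "${@:-1}"; do
        # One bdev_nvme_attach_controller fragment per subsystem, matching
        # the repeated heredoc expansions traced above.
        config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
        )")
    done
    # IFS=, makes ${config[*]} expand comma-separated -- the '{...},{...}'
    # blob that printf shows in the trace -- and jq validates/pretty-prints it.
    local IFS=,
    jq . <<EOF
{ "subsystems": [ { "subsystem": "bdev", "config": [ ${config[*]} ] } ] }
EOF
}

target/shutdown.sh feeds the result to bdevperf through process substitution, which is why the bdevperf invocation above reads --json /dev/fd/62.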
00:23:00.511 10:11:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:23:00.511 10:11:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:23:00.511 "params": { 00:23:00.511 "name": "Nvme1", 00:23:00.511 "trtype": "tcp", 00:23:00.511 "traddr": "10.0.0.2", 00:23:00.511 "adrfam": "ipv4", 00:23:00.511 "trsvcid": "4420", 00:23:00.511 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:00.511 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:00.511 "hdgst": false, 00:23:00.511 "ddgst": false 00:23:00.511 }, 00:23:00.511 "method": "bdev_nvme_attach_controller" 00:23:00.511 },{ 00:23:00.511 "params": { 00:23:00.511 "name": "Nvme2", 00:23:00.511 "trtype": "tcp", 00:23:00.511 "traddr": "10.0.0.2", 00:23:00.511 "adrfam": "ipv4", 00:23:00.511 "trsvcid": "4420", 00:23:00.511 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:00.511 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:00.511 "hdgst": false, 00:23:00.511 "ddgst": false 00:23:00.511 }, 00:23:00.511 "method": "bdev_nvme_attach_controller" 00:23:00.511 },{ 00:23:00.511 "params": { 00:23:00.511 "name": "Nvme3", 00:23:00.511 "trtype": "tcp", 00:23:00.511 "traddr": "10.0.0.2", 00:23:00.511 "adrfam": "ipv4", 00:23:00.511 "trsvcid": "4420", 00:23:00.512 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:23:00.512 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:23:00.512 "hdgst": false, 00:23:00.512 "ddgst": false 00:23:00.512 }, 00:23:00.512 "method": "bdev_nvme_attach_controller" 00:23:00.512 },{ 00:23:00.512 "params": { 00:23:00.512 "name": "Nvme4", 00:23:00.512 "trtype": "tcp", 00:23:00.512 "traddr": "10.0.0.2", 00:23:00.512 "adrfam": "ipv4", 00:23:00.512 "trsvcid": "4420", 00:23:00.512 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:23:00.512 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:23:00.512 "hdgst": false, 00:23:00.512 "ddgst": false 00:23:00.512 }, 00:23:00.512 "method": "bdev_nvme_attach_controller" 00:23:00.512 },{ 00:23:00.512 "params": { 00:23:00.512 "name": "Nvme5", 00:23:00.512 "trtype": "tcp", 00:23:00.512 "traddr": "10.0.0.2", 00:23:00.512 "adrfam": "ipv4", 00:23:00.512 "trsvcid": "4420", 00:23:00.512 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:23:00.512 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:23:00.512 "hdgst": false, 00:23:00.512 "ddgst": false 00:23:00.512 }, 00:23:00.512 "method": "bdev_nvme_attach_controller" 00:23:00.512 },{ 00:23:00.512 "params": { 00:23:00.512 "name": "Nvme6", 00:23:00.512 "trtype": "tcp", 00:23:00.512 "traddr": "10.0.0.2", 00:23:00.512 "adrfam": "ipv4", 00:23:00.512 "trsvcid": "4420", 00:23:00.512 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:23:00.512 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:23:00.512 "hdgst": false, 00:23:00.512 "ddgst": false 00:23:00.512 }, 00:23:00.512 "method": "bdev_nvme_attach_controller" 00:23:00.512 },{ 00:23:00.512 "params": { 00:23:00.512 "name": "Nvme7", 00:23:00.512 "trtype": "tcp", 00:23:00.512 "traddr": "10.0.0.2", 00:23:00.512 "adrfam": "ipv4", 00:23:00.512 "trsvcid": "4420", 00:23:00.512 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:23:00.512 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:23:00.512 "hdgst": false, 00:23:00.512 "ddgst": false 00:23:00.512 }, 00:23:00.512 "method": "bdev_nvme_attach_controller" 00:23:00.512 },{ 00:23:00.512 "params": { 00:23:00.512 "name": "Nvme8", 00:23:00.512 "trtype": "tcp", 00:23:00.512 "traddr": "10.0.0.2", 00:23:00.512 "adrfam": "ipv4", 00:23:00.512 "trsvcid": "4420", 00:23:00.512 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:23:00.512 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:23:00.512 "hdgst": false, 00:23:00.512 "ddgst": false 00:23:00.512 }, 00:23:00.512 "method": "bdev_nvme_attach_controller" 00:23:00.512 },{ 00:23:00.512 "params": { 00:23:00.512 "name": "Nvme9", 00:23:00.512 "trtype": "tcp", 00:23:00.512 "traddr": "10.0.0.2", 00:23:00.512 "adrfam": "ipv4", 00:23:00.512 "trsvcid": "4420", 00:23:00.512 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:23:00.512 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:23:00.512 "hdgst": false, 00:23:00.512 "ddgst": false 00:23:00.512 }, 00:23:00.512 "method": "bdev_nvme_attach_controller" 00:23:00.512 },{ 00:23:00.512 "params": { 00:23:00.512 "name": "Nvme10", 00:23:00.512 "trtype": "tcp", 00:23:00.512 "traddr": "10.0.0.2", 00:23:00.512 "adrfam": "ipv4", 00:23:00.512 "trsvcid": "4420", 00:23:00.512 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:23:00.512 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:23:00.512 "hdgst": false, 00:23:00.512 "ddgst": false 00:23:00.512 }, 00:23:00.512 "method": "bdev_nvme_attach_controller" 00:23:00.512 }' 00:23:00.512 [2024-07-25 10:11:45.462348] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:23:00.512 [2024-07-25 10:11:45.462457] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid486232 ] 00:23:00.512 EAL: No free 2048 kB hugepages reported on node 1 00:23:00.512 [2024-07-25 10:11:45.533135] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:00.512 [2024-07-25 10:11:45.645330] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:02.409 Running I/O for 1 seconds... 00:23:03.342 00:23:03.342 Latency(us) 00:23:03.342 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:03.342 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:03.342 Verification LBA range: start 0x0 length 0x400 00:23:03.342 Nvme1n1 : 1.10 233.21 14.58 0.00 0.00 269697.52 18932.62 262532.36 00:23:03.342 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:03.342 Verification LBA range: start 0x0 length 0x400 00:23:03.342 Nvme2n1 : 1.12 235.58 14.72 0.00 0.00 260385.66 8204.14 239230.67 00:23:03.342 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:03.342 Verification LBA range: start 0x0 length 0x400 00:23:03.342 Nvme3n1 : 1.11 234.10 14.63 0.00 0.00 259317.92 11747.93 242337.56 00:23:03.342 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:03.342 Verification LBA range: start 0x0 length 0x400 00:23:03.342 Nvme4n1 : 1.11 233.53 14.60 0.00 0.00 256038.14 7767.23 276513.37 00:23:03.342 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:03.342 Verification LBA range: start 0x0 length 0x400 00:23:03.342 Nvme5n1 : 1.12 227.88 14.24 0.00 0.00 259814.78 19126.80 265639.25 00:23:03.342 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:03.342 Verification LBA range: start 0x0 length 0x400 00:23:03.342 Nvme6n1 : 1.16 220.45 13.78 0.00 0.00 264756.53 19612.25 267192.70 00:23:03.342 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:03.342 Verification LBA range: start 0x0 length 0x400 00:23:03.342 Nvme7n1 : 1.15 223.00 13.94 0.00 0.00 256664.46 17767.54 260978.92 00:23:03.342 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:03.342 
Verification LBA range: start 0x0 length 0x400 00:23:03.342 Nvme8n1 : 1.18 271.58 16.97 0.00 0.00 207801.38 7233.23 245444.46 00:23:03.342 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:03.342 Verification LBA range: start 0x0 length 0x400 00:23:03.342 Nvme9n1 : 1.17 222.15 13.88 0.00 0.00 249440.24 1541.31 267192.70 00:23:03.342 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:03.342 Verification LBA range: start 0x0 length 0x400 00:23:03.343 Nvme10n1 : 1.17 218.76 13.67 0.00 0.00 249434.26 21748.24 295154.73 00:23:03.343 =================================================================================================================== 00:23:03.343 Total : 2320.24 145.02 0.00 0.00 252265.63 1541.31 295154.73 00:23:03.600 10:11:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@94 -- # stoptarget 00:23:03.600 10:11:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:23:03.601 10:11:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:23:03.601 10:11:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:03.601 10:11:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@45 -- # nvmftestfini 00:23:03.601 10:11:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:03.601 10:11:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # sync 00:23:03.601 10:11:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:03.601 10:11:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@120 -- # set +e 00:23:03.601 10:11:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:03.601 10:11:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:03.601 rmmod nvme_tcp 00:23:03.601 rmmod nvme_fabrics 00:23:03.601 rmmod nvme_keyring 00:23:03.601 10:11:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:03.601 10:11:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set -e 00:23:03.601 10:11:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # return 0 00:23:03.601 10:11:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@489 -- # '[' -n 485746 ']' 00:23:03.601 10:11:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@490 -- # killprocess 485746 00:23:03.601 10:11:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@950 -- # '[' -z 485746 ']' 00:23:03.601 10:11:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # kill -0 485746 00:23:03.601 10:11:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@955 -- # uname 00:23:03.601 10:11:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 
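The @950-@974 entries above and below trace common/autotest_common.sh's killprocess helper shutting down the nvmf target (pid 485746): it checks the pid argument is non-empty, probes liveness with kill -0, and on Linux reads the process name with ps --no-headers -o comm= so it never signals a sudo wrapper by mistake. A rough sketch of that pattern, reconstructed from the trace rather than copied from the source (the non-Linux branch is not visible here):

killprocess() {
    local pid=$1 process_name
    [ -n "$pid" ] || return 1                 # @950: require a pid
    kill -0 "$pid" || return 1                # @954: still alive?
    if [ "$(uname)" = Linux ]; then           # @955
        process_name=$(ps --no-headers -o comm= "$pid")   # @956
    fi
    [ "$process_name" = sudo ] && return 1    # @960: refuse to kill sudo
    echo "killing process with pid $pid"      # @968
    kill "$pid"                               # @969
    wait "$pid" || true                       # @974: reap, tolerate nonzero exit
}

In this run the probe resolves to process_name=reactor_1, so the SPDK reactor thread is signalled and the trailing rmmod/ip-flush teardown below completes nvmftestfini.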
00:23:03.601 10:11:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 485746 00:23:03.601 10:11:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:23:03.601 10:11:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:23:03.601 10:11:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 485746' 00:23:03.601 killing process with pid 485746 00:23:03.601 10:11:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@969 -- # kill 485746 00:23:03.601 10:11:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@974 -- # wait 485746 00:23:04.166 10:11:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:04.166 10:11:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:04.166 10:11:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:04.166 10:11:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:04.166 10:11:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:04.166 10:11:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:04.166 10:11:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:04.166 10:11:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:06.697 10:11:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:06.697 00:23:06.697 real 0m12.542s 00:23:06.697 user 0m35.481s 00:23:06.697 sys 0m3.608s 00:23:06.697 10:11:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:06.697 10:11:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:06.697 ************************************ 00:23:06.697 END TEST nvmf_shutdown_tc1 00:23:06.697 ************************************ 00:23:06.697 10:11:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:23:06.697 10:11:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:23:06.697 10:11:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:06.697 10:11:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:06.697 ************************************ 00:23:06.697 START TEST nvmf_shutdown_tc2 00:23:06.697 ************************************ 00:23:06.697 10:11:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc2 00:23:06.697 10:11:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@99 -- # starttarget 00:23:06.697 10:11:51 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@15 -- # nvmftestinit 00:23:06.697 10:11:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:06.697 10:11:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:06.697 10:11:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:06.697 10:11:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:06.697 10:11:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:06.697 10:11:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:06.697 10:11:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:06.697 10:11:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:06.697 10:11:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:06.697 10:11:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:06.697 10:11:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@285 -- # xtrace_disable 00:23:06.697 10:11:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:06.697 10:11:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:06.697 10:11:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # pci_devs=() 00:23:06.697 10:11:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:06.697 10:11:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:06.697 10:11:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:06.697 10:11:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:06.697 10:11:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:06.697 10:11:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # net_devs=() 00:23:06.697 10:11:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:06.697 10:11:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # e810=() 00:23:06.697 10:11:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # local -ga e810 00:23:06.697 10:11:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # x722=() 00:23:06.697 10:11:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # local -ga x722 00:23:06.697 10:11:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # mlx=() 00:23:06.697 10:11:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # local 
-ga mlx 00:23:06.697 10:11:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:06.697 10:11:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:06.697 10:11:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:06.697 10:11:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:06.697 10:11:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:06.697 10:11:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:06.697 10:11:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:06.697 10:11:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:06.697 10:11:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:06.697 10:11:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:06.697 10:11:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:06.697 10:11:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:06.697 10:11:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:06.697 10:11:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:06.697 10:11:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:06.697 10:11:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:06.697 10:11:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:06.697 10:11:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:06.697 10:11:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:23:06.697 Found 0000:84:00.0 (0x8086 - 0x159b) 00:23:06.697 10:11:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:06.697 10:11:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:06.697 10:11:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:06.697 10:11:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:06.697 10:11:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:06.697 10:11:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in 
"${pci_devs[@]}" 00:23:06.697 10:11:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:23:06.697 Found 0000:84:00.1 (0x8086 - 0x159b) 00:23:06.697 10:11:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:06.697 10:11:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:06.697 10:11:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:06.697 10:11:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:06.697 10:11:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:06.697 10:11:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:06.697 10:11:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:06.697 10:11:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:06.697 10:11:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:06.697 10:11:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:06.698 10:11:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:06.698 10:11:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:06.698 10:11:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:06.698 10:11:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:06.698 10:11:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:06.698 10:11:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:23:06.698 Found net devices under 0000:84:00.0: cvl_0_0 00:23:06.698 10:11:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:06.698 10:11:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:06.698 10:11:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:06.698 10:11:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:06.698 10:11:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:06.698 10:11:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:06.698 10:11:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:06.698 10:11:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:06.698 10:11:51 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:23:06.698 Found net devices under 0000:84:00.1: cvl_0_1 00:23:06.698 10:11:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:06.698 10:11:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:06.698 10:11:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # is_hw=yes 00:23:06.698 10:11:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:06.698 10:11:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:06.698 10:11:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:06.698 10:11:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:06.698 10:11:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:06.698 10:11:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:06.698 10:11:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:06.698 10:11:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:06.698 10:11:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:06.698 10:11:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:06.698 10:11:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:06.698 10:11:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:06.698 10:11:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:06.698 10:11:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:06.698 10:11:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:06.698 10:11:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:06.698 10:11:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:06.698 10:11:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:06.698 10:11:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:06.698 10:11:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:06.698 10:11:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:06.698 10:11:51 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:06.698 10:11:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:06.698 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:06.698 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.170 ms 00:23:06.698 00:23:06.698 --- 10.0.0.2 ping statistics --- 00:23:06.698 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:06.698 rtt min/avg/max/mdev = 0.170/0.170/0.170/0.000 ms 00:23:06.698 10:11:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:06.698 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:06.698 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.100 ms 00:23:06.698 00:23:06.698 --- 10.0.0.1 ping statistics --- 00:23:06.698 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:06.698 rtt min/avg/max/mdev = 0.100/0.100/0.100/0.000 ms 00:23:06.698 10:11:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:06.698 10:11:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # return 0 00:23:06.698 10:11:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:06.698 10:11:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:06.698 10:11:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:06.698 10:11:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:06.698 10:11:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:06.698 10:11:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:06.698 10:11:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:06.698 10:11:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:23:06.698 10:11:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:06.698 10:11:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:06.698 10:11:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:06.698 10:11:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@481 -- # nvmfpid=487116 00:23:06.698 10:11:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:23:06.698 10:11:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # waitforlisten 487116 00:23:06.698 10:11:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # '[' -z 487116 ']' 00:23:06.698 10:11:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:06.698 10:11:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:06.698 10:11:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:06.698 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:06.698 10:11:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:06.698 10:11:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:06.698 [2024-07-25 10:11:51.607514] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:23:06.698 [2024-07-25 10:11:51.607620] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:06.698 EAL: No free 2048 kB hugepages reported on node 1 00:23:06.698 [2024-07-25 10:11:51.702771] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:06.698 [2024-07-25 10:11:51.829557] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:06.698 [2024-07-25 10:11:51.829621] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:06.698 [2024-07-25 10:11:51.829638] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:06.698 [2024-07-25 10:11:51.829661] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:06.698 [2024-07-25 10:11:51.829674] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
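nvmfappstart above launches the target inside the cvl_0_0_ns_spdk namespace created during nvmftestinit: the @480-@482 entries show ip netns exec wrapping nvmf_tgt -i 0 -e 0xFFFF -m 0x1E, with nvmfpid=487116 and waitforlisten blocking until the RPC socket answers. A minimal sketch of that step (assumed helper shape; the real nvmf/common.sh helper carries more bookkeeping):

rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

# -m 0x1E is binary 11110, i.e. cores 1-4 -- which is why the next entries
# report four reactors on cores 1,2,3,4. -e 0xFFFF enables every tracepoint
# group, hence the app_setup_trace notices about 'spdk_trace -s nvmf -i 0'.
ip netns exec cvl_0_0_ns_spdk \
    "$rootdir/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x1E &
nvmfpid=$!

# waitforlisten (common/autotest_common.sh) polls until the new target
# answers RPCs on the default /var/tmp/spdk.sock before the test proceeds.
waitforlisten "$nvmfpid"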
00:23:06.698 [2024-07-25 10:11:51.829758] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:06.698 [2024-07-25 10:11:51.829814] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:23:06.698 [2024-07-25 10:11:51.829868] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:23:06.698 [2024-07-25 10:11:51.829871] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:06.957 10:11:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:06.957 10:11:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # return 0 00:23:06.957 10:11:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:06.957 10:11:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:06.957 10:11:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:06.957 10:11:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:06.957 10:11:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:06.957 10:11:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:06.957 10:11:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:06.957 [2024-07-25 10:11:51.996229] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:06.957 10:11:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:06.957 10:11:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:23:06.957 10:11:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:23:06.957 10:11:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:06.957 10:11:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:06.957 10:11:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:06.957 10:11:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:06.957 10:11:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:06.957 10:11:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:06.957 10:11:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:06.957 10:11:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:06.957 10:11:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:06.957 10:11:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in 
"${num_subsystems[@]}" 00:23:06.957 10:11:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:06.957 10:11:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:06.957 10:11:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:06.957 10:11:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:06.957 10:11:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:06.957 10:11:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:06.957 10:11:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:06.957 10:11:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:06.957 10:11:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:06.957 10:11:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:06.957 10:11:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:06.957 10:11:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:06.957 10:11:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:06.957 10:11:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@35 -- # rpc_cmd 00:23:06.957 10:11:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:06.957 10:11:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:06.957 Malloc1 00:23:06.957 [2024-07-25 10:11:52.095071] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:07.215 Malloc2 00:23:07.215 Malloc3 00:23:07.215 Malloc4 00:23:07.215 Malloc5 00:23:07.215 Malloc6 00:23:07.215 Malloc7 00:23:07.473 Malloc8 00:23:07.473 Malloc9 00:23:07.473 Malloc10 00:23:07.473 10:11:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:07.473 10:11:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:23:07.473 10:11:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:07.473 10:11:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:07.473 10:11:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # perfpid=487277 00:23:07.473 10:11:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # waitforlisten 487277 /var/tmp/bdevperf.sock 00:23:07.473 10:11:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # '[' -z 487277 ']' 00:23:07.473 10:11:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:07.473 10:11:52 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:23:07.473 10:11:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:23:07.473 10:11:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:07.473 10:11:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:07.473 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:07.473 10:11:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # config=() 00:23:07.473 10:11:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:07.473 10:11:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # local subsystem config 00:23:07.473 10:11:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:07.473 10:11:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:07.473 10:11:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:07.473 { 00:23:07.473 "params": { 00:23:07.473 "name": "Nvme$subsystem", 00:23:07.473 "trtype": "$TEST_TRANSPORT", 00:23:07.473 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:07.473 "adrfam": "ipv4", 00:23:07.473 "trsvcid": "$NVMF_PORT", 00:23:07.473 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:07.473 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:07.473 "hdgst": ${hdgst:-false}, 00:23:07.473 "ddgst": ${ddgst:-false} 00:23:07.473 }, 00:23:07.473 "method": "bdev_nvme_attach_controller" 00:23:07.473 } 00:23:07.473 EOF 00:23:07.473 )") 00:23:07.473 10:11:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:07.473 10:11:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:07.473 10:11:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:07.473 { 00:23:07.473 "params": { 00:23:07.473 "name": "Nvme$subsystem", 00:23:07.473 "trtype": "$TEST_TRANSPORT", 00:23:07.473 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:07.473 "adrfam": "ipv4", 00:23:07.473 "trsvcid": "$NVMF_PORT", 00:23:07.473 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:07.473 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:07.473 "hdgst": ${hdgst:-false}, 00:23:07.473 "ddgst": ${ddgst:-false} 00:23:07.473 }, 00:23:07.473 "method": "bdev_nvme_attach_controller" 00:23:07.473 } 00:23:07.473 EOF 00:23:07.473 )") 00:23:07.473 10:11:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:07.473 10:11:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:07.473 10:11:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:07.473 { 00:23:07.473 "params": { 00:23:07.473 
"name": "Nvme$subsystem", 00:23:07.473 "trtype": "$TEST_TRANSPORT", 00:23:07.473 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:07.473 "adrfam": "ipv4", 00:23:07.473 "trsvcid": "$NVMF_PORT", 00:23:07.473 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:07.473 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:07.473 "hdgst": ${hdgst:-false}, 00:23:07.473 "ddgst": ${ddgst:-false} 00:23:07.473 }, 00:23:07.473 "method": "bdev_nvme_attach_controller" 00:23:07.473 } 00:23:07.473 EOF 00:23:07.473 )") 00:23:07.473 10:11:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:07.473 10:11:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:07.473 10:11:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:07.473 { 00:23:07.473 "params": { 00:23:07.473 "name": "Nvme$subsystem", 00:23:07.473 "trtype": "$TEST_TRANSPORT", 00:23:07.473 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:07.473 "adrfam": "ipv4", 00:23:07.473 "trsvcid": "$NVMF_PORT", 00:23:07.473 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:07.473 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:07.473 "hdgst": ${hdgst:-false}, 00:23:07.473 "ddgst": ${ddgst:-false} 00:23:07.473 }, 00:23:07.473 "method": "bdev_nvme_attach_controller" 00:23:07.473 } 00:23:07.473 EOF 00:23:07.473 )") 00:23:07.473 10:11:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:07.473 10:11:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:07.473 10:11:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:07.473 { 00:23:07.473 "params": { 00:23:07.473 "name": "Nvme$subsystem", 00:23:07.473 "trtype": "$TEST_TRANSPORT", 00:23:07.474 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:07.474 "adrfam": "ipv4", 00:23:07.474 "trsvcid": "$NVMF_PORT", 00:23:07.474 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:07.474 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:07.474 "hdgst": ${hdgst:-false}, 00:23:07.474 "ddgst": ${ddgst:-false} 00:23:07.474 }, 00:23:07.474 "method": "bdev_nvme_attach_controller" 00:23:07.474 } 00:23:07.474 EOF 00:23:07.474 )") 00:23:07.474 10:11:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:07.474 10:11:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:07.474 10:11:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:07.474 { 00:23:07.474 "params": { 00:23:07.474 "name": "Nvme$subsystem", 00:23:07.474 "trtype": "$TEST_TRANSPORT", 00:23:07.474 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:07.474 "adrfam": "ipv4", 00:23:07.474 "trsvcid": "$NVMF_PORT", 00:23:07.474 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:07.474 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:07.474 "hdgst": ${hdgst:-false}, 00:23:07.474 "ddgst": ${ddgst:-false} 00:23:07.474 }, 00:23:07.474 "method": "bdev_nvme_attach_controller" 00:23:07.474 } 00:23:07.474 EOF 00:23:07.474 )") 00:23:07.474 10:11:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:07.474 10:11:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in 
"${@:-1}" 00:23:07.474 10:11:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:07.474 { 00:23:07.474 "params": { 00:23:07.474 "name": "Nvme$subsystem", 00:23:07.474 "trtype": "$TEST_TRANSPORT", 00:23:07.474 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:07.474 "adrfam": "ipv4", 00:23:07.474 "trsvcid": "$NVMF_PORT", 00:23:07.474 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:07.474 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:07.474 "hdgst": ${hdgst:-false}, 00:23:07.474 "ddgst": ${ddgst:-false} 00:23:07.474 }, 00:23:07.474 "method": "bdev_nvme_attach_controller" 00:23:07.474 } 00:23:07.474 EOF 00:23:07.474 )") 00:23:07.474 10:11:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:07.474 10:11:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:07.474 10:11:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:07.474 { 00:23:07.474 "params": { 00:23:07.474 "name": "Nvme$subsystem", 00:23:07.474 "trtype": "$TEST_TRANSPORT", 00:23:07.474 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:07.474 "adrfam": "ipv4", 00:23:07.474 "trsvcid": "$NVMF_PORT", 00:23:07.474 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:07.474 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:07.474 "hdgst": ${hdgst:-false}, 00:23:07.474 "ddgst": ${ddgst:-false} 00:23:07.474 }, 00:23:07.474 "method": "bdev_nvme_attach_controller" 00:23:07.474 } 00:23:07.474 EOF 00:23:07.474 )") 00:23:07.474 10:11:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:07.474 10:11:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:07.474 10:11:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:07.474 { 00:23:07.474 "params": { 00:23:07.474 "name": "Nvme$subsystem", 00:23:07.474 "trtype": "$TEST_TRANSPORT", 00:23:07.474 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:07.474 "adrfam": "ipv4", 00:23:07.474 "trsvcid": "$NVMF_PORT", 00:23:07.474 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:07.474 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:07.474 "hdgst": ${hdgst:-false}, 00:23:07.474 "ddgst": ${ddgst:-false} 00:23:07.474 }, 00:23:07.474 "method": "bdev_nvme_attach_controller" 00:23:07.474 } 00:23:07.474 EOF 00:23:07.474 )") 00:23:07.474 10:11:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:07.474 10:11:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:07.474 10:11:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:07.474 { 00:23:07.474 "params": { 00:23:07.474 "name": "Nvme$subsystem", 00:23:07.474 "trtype": "$TEST_TRANSPORT", 00:23:07.474 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:07.474 "adrfam": "ipv4", 00:23:07.474 "trsvcid": "$NVMF_PORT", 00:23:07.474 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:07.474 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:07.474 "hdgst": ${hdgst:-false}, 00:23:07.474 "ddgst": ${ddgst:-false} 00:23:07.474 }, 00:23:07.474 "method": "bdev_nvme_attach_controller" 00:23:07.474 } 00:23:07.474 EOF 00:23:07.474 )") 00:23:07.474 10:11:52 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:07.474 10:11:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@556 -- # jq . 00:23:07.474 10:11:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@557 -- # IFS=, 00:23:07.474 10:11:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:23:07.474 "params": { 00:23:07.474 "name": "Nvme1", 00:23:07.474 "trtype": "tcp", 00:23:07.474 "traddr": "10.0.0.2", 00:23:07.474 "adrfam": "ipv4", 00:23:07.474 "trsvcid": "4420", 00:23:07.474 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:07.474 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:07.474 "hdgst": false, 00:23:07.474 "ddgst": false 00:23:07.474 }, 00:23:07.474 "method": "bdev_nvme_attach_controller" 00:23:07.474 },{ 00:23:07.474 "params": { 00:23:07.474 "name": "Nvme2", 00:23:07.474 "trtype": "tcp", 00:23:07.474 "traddr": "10.0.0.2", 00:23:07.474 "adrfam": "ipv4", 00:23:07.474 "trsvcid": "4420", 00:23:07.474 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:07.474 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:07.474 "hdgst": false, 00:23:07.474 "ddgst": false 00:23:07.474 }, 00:23:07.474 "method": "bdev_nvme_attach_controller" 00:23:07.474 },{ 00:23:07.474 "params": { 00:23:07.474 "name": "Nvme3", 00:23:07.474 "trtype": "tcp", 00:23:07.474 "traddr": "10.0.0.2", 00:23:07.474 "adrfam": "ipv4", 00:23:07.474 "trsvcid": "4420", 00:23:07.474 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:23:07.474 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:23:07.474 "hdgst": false, 00:23:07.474 "ddgst": false 00:23:07.474 }, 00:23:07.474 "method": "bdev_nvme_attach_controller" 00:23:07.474 },{ 00:23:07.474 "params": { 00:23:07.474 "name": "Nvme4", 00:23:07.474 "trtype": "tcp", 00:23:07.474 "traddr": "10.0.0.2", 00:23:07.474 "adrfam": "ipv4", 00:23:07.474 "trsvcid": "4420", 00:23:07.474 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:23:07.474 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:23:07.474 "hdgst": false, 00:23:07.474 "ddgst": false 00:23:07.474 }, 00:23:07.474 "method": "bdev_nvme_attach_controller" 00:23:07.474 },{ 00:23:07.474 "params": { 00:23:07.474 "name": "Nvme5", 00:23:07.474 "trtype": "tcp", 00:23:07.474 "traddr": "10.0.0.2", 00:23:07.474 "adrfam": "ipv4", 00:23:07.474 "trsvcid": "4420", 00:23:07.474 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:23:07.474 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:23:07.474 "hdgst": false, 00:23:07.474 "ddgst": false 00:23:07.474 }, 00:23:07.474 "method": "bdev_nvme_attach_controller" 00:23:07.474 },{ 00:23:07.474 "params": { 00:23:07.474 "name": "Nvme6", 00:23:07.474 "trtype": "tcp", 00:23:07.474 "traddr": "10.0.0.2", 00:23:07.474 "adrfam": "ipv4", 00:23:07.474 "trsvcid": "4420", 00:23:07.474 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:23:07.474 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:23:07.474 "hdgst": false, 00:23:07.474 "ddgst": false 00:23:07.474 }, 00:23:07.474 "method": "bdev_nvme_attach_controller" 00:23:07.474 },{ 00:23:07.474 "params": { 00:23:07.474 "name": "Nvme7", 00:23:07.474 "trtype": "tcp", 00:23:07.474 "traddr": "10.0.0.2", 00:23:07.474 "adrfam": "ipv4", 00:23:07.474 "trsvcid": "4420", 00:23:07.474 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:23:07.474 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:23:07.474 "hdgst": false, 00:23:07.474 "ddgst": false 00:23:07.474 }, 00:23:07.474 "method": "bdev_nvme_attach_controller" 00:23:07.474 },{ 00:23:07.474 "params": { 00:23:07.474 "name": "Nvme8", 00:23:07.474 "trtype": "tcp", 
00:23:07.474 "traddr": "10.0.0.2", 00:23:07.474 "adrfam": "ipv4", 00:23:07.474 "trsvcid": "4420", 00:23:07.474 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:23:07.474 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:23:07.474 "hdgst": false, 00:23:07.474 "ddgst": false 00:23:07.474 }, 00:23:07.474 "method": "bdev_nvme_attach_controller" 00:23:07.474 },{ 00:23:07.474 "params": { 00:23:07.474 "name": "Nvme9", 00:23:07.475 "trtype": "tcp", 00:23:07.475 "traddr": "10.0.0.2", 00:23:07.475 "adrfam": "ipv4", 00:23:07.475 "trsvcid": "4420", 00:23:07.475 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:23:07.475 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:23:07.475 "hdgst": false, 00:23:07.475 "ddgst": false 00:23:07.475 }, 00:23:07.475 "method": "bdev_nvme_attach_controller" 00:23:07.475 },{ 00:23:07.475 "params": { 00:23:07.475 "name": "Nvme10", 00:23:07.475 "trtype": "tcp", 00:23:07.475 "traddr": "10.0.0.2", 00:23:07.475 "adrfam": "ipv4", 00:23:07.475 "trsvcid": "4420", 00:23:07.475 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:23:07.475 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:23:07.475 "hdgst": false, 00:23:07.475 "ddgst": false 00:23:07.475 }, 00:23:07.475 "method": "bdev_nvme_attach_controller" 00:23:07.475 }' 00:23:07.475 [2024-07-25 10:11:52.609917] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:23:07.475 [2024-07-25 10:11:52.610005] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid487277 ] 00:23:07.732 EAL: No free 2048 kB hugepages reported on node 1 00:23:07.732 [2024-07-25 10:11:52.673807] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:07.732 [2024-07-25 10:11:52.783791] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:09.630 Running I/O for 10 seconds... 
00:23:09.630 10:11:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:09.630 10:11:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # return 0 00:23:09.630 10:11:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:23:09.630 10:11:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:09.630 10:11:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:09.630 10:11:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:09.630 10:11:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@107 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:23:09.630 10:11:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:23:09.630 10:11:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:23:09.630 10:11:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@57 -- # local ret=1 00:23:09.630 10:11:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local i 00:23:09.630 10:11:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:23:09.630 10:11:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:23:09.630 10:11:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:09.630 10:11:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:23:09.630 10:11:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:09.630 10:11:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:09.630 10:11:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:09.630 10:11:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=67 00:23:09.630 10:11:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:23:09.630 10:11:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:23:09.888 10:11:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:23:09.888 10:11:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:23:09.888 10:11:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:09.888 10:11:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:23:09.888 10:11:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:09.888 10:11:54 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:09.888 10:11:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:09.888 10:11:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=131 00:23:09.888 10:11:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:23:09.888 10:11:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # ret=0 00:23:09.888 10:11:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # break 00:23:09.888 10:11:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@69 -- # return 0 00:23:09.888 10:11:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@110 -- # killprocess 487277 00:23:09.888 10:11:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # '[' -z 487277 ']' 00:23:09.888 10:11:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # kill -0 487277 00:23:09.888 10:11:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # uname 00:23:09.888 10:11:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:09.888 10:11:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 487277 00:23:10.146 10:11:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:10.146 10:11:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:10.146 10:11:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 487277' killing process with pid 487277 00:23:10.146 10:11:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@969 -- # kill 487277 00:23:10.146 10:11:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@974 -- # wait 487277
00:23:10.146 Received shutdown signal, test time was about 0.836051 seconds
00:23:10.146 
00:23:10.146                                                                        Latency(us)
00:23:10.146 Device Information       :  runtime(s)       IOPS      MiB/s     Fail/s       TO/s    Average        min        max
00:23:10.146 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:10.146      Verification LBA range: start 0x0 length 0x400
00:23:10.146      Nvme1n1             :       0.81     236.45      14.78       0.00       0.00  266780.32   23301.69  256318.58
00:23:10.146 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:10.146      Verification LBA range: start 0x0 length 0x400
00:23:10.146      Nvme2n1             :       0.81     237.96      14.87       0.00       0.00  259362.77   31068.92  251658.24
00:23:10.146 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:10.146      Verification LBA range: start 0x0 length 0x400
00:23:10.146      Nvme3n1             :       0.81     237.31      14.83       0.00       0.00  253603.59   19223.89  250104.79
00:23:10.146 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:10.146      Verification LBA range: start 0x0 length 0x400
00:23:10.146      Nvme4n1             :       0.79     241.52      15.09       0.00       0.00  242713.03   16117.00  264085.81
00:23:10.146 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:10.146      Verification LBA range: start 0x0 length 0x400
00:23:10.146      Nvme5n1             :       0.83     231.32      14.46       0.00       0.00  248790.28   21262.79  265639.25
00:23:10.146 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:10.146      Verification LBA range: start 0x0 length 0x400
00:23:10.146      Nvme6n1             :       0.83     232.63      14.54       0.00       0.00  240247.59   38059.43  237677.23
00:23:10.146 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:10.146      Verification LBA range: start 0x0 length 0x400
00:23:10.146      Nvme7n1             :       0.82     232.97      14.56       0.00       0.00  234794.35   17670.45  242337.56
00:23:10.146 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:10.146      Verification LBA range: start 0x0 length 0x400
00:23:10.146      Nvme8n1             :       0.82     235.16      14.70       0.00       0.00  226110.58   17670.45  250104.79
00:23:10.146 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:10.146      Verification LBA range: start 0x0 length 0x400
00:23:10.146      Nvme9n1             :       0.78     164.03      10.25       0.00       0.00  312527.83   21165.70  279620.27
00:23:10.146 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:10.146      Verification LBA range: start 0x0 length 0x400
00:23:10.146      Nvme10n1            :       0.84     229.89      14.37       0.00       0.00  220801.71   19223.89  295154.73
00:23:10.146 ===================================================================================================================
00:23:10.146 Total                    :              2279.23     142.45       0.00       0.00  248436.84   16117.00  295154.73
00:23:10.404 10:11:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@113 -- # sleep 1 00:23:11.335 10:11:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # kill -0 487116 00:23:11.335 10:11:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@116 -- # stoptarget 00:23:11.335 10:11:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:23:11.335 10:11:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:23:11.335 10:11:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:11.335 10:11:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@45 -- # nvmftestfini 00:23:11.335 10:11:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:11.335 10:11:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # sync 00:23:11.335 10:11:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:11.335 10:11:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@120 -- # set +e 00:23:11.335 10:11:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:11.335 10:11:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:11.335 rmmod nvme_tcp 00:23:11.335 rmmod nvme_fabrics 00:23:11.335 rmmod nvme_keyring 00:23:11.335 10:11:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:11.335 
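
[Annotation] The read_io_count records just before the shutdown are the tail of the suite's waitforio gate (target/shutdown.sh@50-69): bdevperf is only killed once Nvme1n1 has completed at least 100 reads, polled over the RPC socket every 250 ms; in this run the counter went 67, then 131. A rough equivalent of the loop, with scripts/rpc.py standing in for the suite's rpc_cmd wrapper (run from an SPDK checkout):

waitforio_sketch() {
    local sock=$1 bdev=$2 ret=1 i count
    [[ -n $sock && -n $bdev ]] || return 1
    for ((i = 10; i != 0; i--)); do
        # jq pulls num_read_ops out of bdev_get_iostat, as traced at
        # target/shutdown.sh@60.
        count=$(scripts/rpc.py -s "$sock" bdev_get_iostat -b "$bdev" \
            | jq -r '.bdevs[0].num_read_ops')
        if ((count >= 100)); then
            ret=0
            break
        fi
        sleep 0.25
    done
    return $ret
}

waitforio_sketch /var/tmp/bdevperf.sock Nvme1n1
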
10:11:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set -e 00:23:11.335 10:11:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # return 0 00:23:11.335 10:11:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@489 -- # '[' -n 487116 ']' 00:23:11.335 10:11:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@490 -- # killprocess 487116 00:23:11.335 10:11:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # '[' -z 487116 ']' 00:23:11.335 10:11:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # kill -0 487116 00:23:11.335 10:11:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # uname 00:23:11.335 10:11:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:11.335 10:11:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 487116 00:23:11.593 10:11:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:23:11.593 10:11:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:23:11.593 10:11:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 487116' 00:23:11.593 killing process with pid 487116 00:23:11.593 10:11:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@969 -- # kill 487116 00:23:11.593 10:11:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@974 -- # wait 487116 00:23:12.158 10:11:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:12.158 10:11:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:12.158 10:11:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:12.159 10:11:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:12.159 10:11:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:12.159 10:11:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:12.159 10:11:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:12.159 10:11:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:14.062 10:11:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:14.062 00:23:14.062 real 0m7.775s 00:23:14.062 user 0m23.223s 00:23:14.062 sys 0m1.529s 00:23:14.062 10:11:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:14.062 10:11:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:14.062 ************************************ 00:23:14.062 
END TEST nvmf_shutdown_tc2 00:23:14.062 ************************************ 00:23:14.062 10:11:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@149 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:23:14.062 10:11:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:23:14.062 10:11:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:14.062 10:11:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:14.062 ************************************ 00:23:14.062 START TEST nvmf_shutdown_tc3 00:23:14.062 ************************************ 00:23:14.062 10:11:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc3 00:23:14.062 10:11:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@121 -- # starttarget 00:23:14.062 10:11:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@15 -- # nvmftestinit 00:23:14.062 10:11:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:14.062 10:11:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:14.062 10:11:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:14.062 10:11:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:14.062 10:11:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:14.062 10:11:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:14.062 10:11:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:14.062 10:11:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:14.062 10:11:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:14.062 10:11:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:14.062 10:11:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@285 -- # xtrace_disable 00:23:14.062 10:11:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:14.062 10:11:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:14.062 10:11:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # pci_devs=() 00:23:14.062 10:11:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:14.062 10:11:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:14.062 10:11:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:14.063 10:11:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:14.063 10:11:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@293 -- # local -A pci_drivers 00:23:14.063 10:11:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # net_devs=() 00:23:14.063 10:11:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:14.063 10:11:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # e810=() 00:23:14.063 10:11:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # local -ga e810 00:23:14.063 10:11:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # x722=() 00:23:14.063 10:11:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # local -ga x722 00:23:14.063 10:11:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # mlx=() 00:23:14.063 10:11:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # local -ga mlx 00:23:14.063 10:11:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:14.063 10:11:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:14.063 10:11:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:14.063 10:11:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:14.063 10:11:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:14.063 10:11:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:14.063 10:11:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:14.063 10:11:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:14.063 10:11:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:14.063 10:11:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:14.063 10:11:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:14.063 10:11:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:14.063 10:11:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:14.063 10:11:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:14.063 10:11:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:14.063 10:11:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:14.063 10:11:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:14.063 10:11:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci 
in "${pci_devs[@]}" 00:23:14.063 10:11:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:23:14.063 Found 0000:84:00.0 (0x8086 - 0x159b) 00:23:14.063 10:11:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:14.063 10:11:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:14.063 10:11:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:14.063 10:11:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:14.063 10:11:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:14.063 10:11:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:14.063 10:11:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:23:14.063 Found 0000:84:00.1 (0x8086 - 0x159b) 00:23:14.063 10:11:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:14.063 10:11:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:14.063 10:11:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:14.063 10:11:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:14.063 10:11:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:14.063 10:11:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:14.063 10:11:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:14.063 10:11:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:14.063 10:11:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:14.063 10:11:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:14.063 10:11:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:14.063 10:11:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:14.063 10:11:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:14.063 10:11:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:14.063 10:11:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:14.063 10:11:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:23:14.063 Found net devices under 0000:84:00.0: cvl_0_0 00:23:14.063 10:11:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:23:14.063 10:11:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:14.063 10:11:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:14.063 10:11:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:14.063 10:11:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:14.063 10:11:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:14.063 10:11:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:14.063 10:11:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:14.063 10:11:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:23:14.063 Found net devices under 0000:84:00.1: cvl_0_1 00:23:14.063 10:11:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:14.063 10:11:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:14.063 10:11:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # is_hw=yes 00:23:14.063 10:11:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:14.063 10:11:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:14.063 10:11:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:14.063 10:11:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:14.063 10:11:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:14.063 10:11:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:14.063 10:11:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:14.063 10:11:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:14.063 10:11:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:14.063 10:11:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:14.063 10:11:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:14.063 10:11:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:14.063 10:11:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:14.063 10:11:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:14.063 10:11:59 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:14.063 10:11:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:14.322 10:11:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:14.322 10:11:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:14.322 10:11:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:14.322 10:11:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:14.322 10:11:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:14.322 10:11:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:14.322 10:11:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:14.322 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:14.322 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.137 ms 00:23:14.322 00:23:14.322 --- 10.0.0.2 ping statistics --- 00:23:14.322 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:14.322 rtt min/avg/max/mdev = 0.137/0.137/0.137/0.000 ms 00:23:14.322 10:11:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:14.322 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:14.322 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.137 ms 00:23:14.322 00:23:14.322 --- 10.0.0.1 ping statistics --- 00:23:14.322 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:14.322 rtt min/avg/max/mdev = 0.137/0.137/0.137/0.000 ms 00:23:14.322 10:11:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:14.322 10:11:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # return 0 00:23:14.322 10:11:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:14.322 10:11:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:14.322 10:11:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:14.322 10:11:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:14.322 10:11:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:14.322 10:11:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:14.322 10:11:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:14.322 10:11:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:23:14.322 10:11:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:14.322 10:11:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:14.322 10:11:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:14.322 10:11:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # nvmfpid=488096 00:23:14.322 10:11:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # waitforlisten 488096 00:23:14.323 10:11:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:23:14.323 10:11:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # '[' -z 488096 ']' 00:23:14.323 10:11:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:14.323 10:11:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:14.323 10:11:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:14.323 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
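
[Annotation] Stripped of the xtrace noise, the nvmf_tcp_init sequence above (nvmf/common.sh@229-268) is a short namespace recipe: the first port of the NIC pair (Intel E810, per the 0x159b/ice records above) moves into a private namespace and becomes the 10.0.0.2 target, the second port stays in the root namespace as the 10.0.0.1 initiator, and one ping in each direction proves the path before nvmf_tgt starts. The cvl_0_0/cvl_0_1 names are specific to this host:

NS=cvl_0_0_ns_spdk
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"           # target-side port into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1       # initiator address, root namespace
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
# Admit NVMe/TCP traffic on the listener port used throughout these tests.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                        # root namespace -> target
ip netns exec "$NS" ping -c 1 10.0.0.1    # target namespace -> initiator
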
00:23:14.323 10:11:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:14.323 10:11:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:14.323 [2024-07-25 10:11:59.439849] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:23:14.323 [2024-07-25 10:11:59.439958] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:14.323 EAL: No free 2048 kB hugepages reported on node 1 00:23:14.580 [2024-07-25 10:11:59.523043] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:14.580 [2024-07-25 10:11:59.650455] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:14.580 [2024-07-25 10:11:59.650526] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:14.580 [2024-07-25 10:11:59.650543] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:14.580 [2024-07-25 10:11:59.650557] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:14.580 [2024-07-25 10:11:59.650569] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:14.580 [2024-07-25 10:11:59.650665] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:14.580 [2024-07-25 10:11:59.650723] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:23:14.580 [2024-07-25 10:11:59.650774] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:23:14.580 [2024-07-25 10:11:59.650776] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:14.839 10:11:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:14.839 10:11:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # return 0 00:23:14.839 10:11:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:14.839 10:11:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:14.839 10:11:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:14.839 10:11:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:14.839 10:11:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:14.839 10:11:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:14.839 10:11:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:14.839 [2024-07-25 10:11:59.819255] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:14.839 10:11:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:14.839 10:11:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@22 -- # 
num_subsystems=({1..10}) 00:23:14.839 10:11:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:23:14.839 10:11:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:14.839 10:11:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:14.839 10:11:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:14.839 10:11:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:14.839 10:11:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:14.839 10:11:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:14.839 10:11:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:14.839 10:11:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:14.839 10:11:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:14.839 10:11:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:14.839 10:11:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:14.839 10:11:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:14.839 10:11:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:14.839 10:11:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:14.839 10:11:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:14.839 10:11:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:14.839 10:11:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:14.839 10:11:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:14.839 10:11:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:14.839 10:11:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:14.839 10:11:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:14.839 10:11:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:14.839 10:11:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:14.839 10:11:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@35 -- # rpc_cmd 00:23:14.839 10:11:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:14.839 10:11:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
common/autotest_common.sh@10 -- # set +x 00:23:14.839 Malloc1 00:23:14.839 [2024-07-25 10:11:59.914092] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:14.839 Malloc2 00:23:14.839 Malloc3 00:23:15.097 Malloc4 00:23:15.097 Malloc5 00:23:15.097 Malloc6 00:23:15.097 Malloc7 00:23:15.097 Malloc8 00:23:15.355 Malloc9 00:23:15.355 Malloc10 00:23:15.355 10:12:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:15.355 10:12:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:23:15.355 10:12:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:15.355 10:12:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:15.355 10:12:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # perfpid=488292 00:23:15.355 10:12:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # waitforlisten 488292 /var/tmp/bdevperf.sock 00:23:15.355 10:12:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # '[' -z 488292 ']' 00:23:15.355 10:12:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:23:15.355 10:12:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:15.355 10:12:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:23:15.355 10:12:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:15.355 10:12:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # config=() 00:23:15.355 10:12:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:15.355 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
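
[Annotation] The "Waiting for process to start up and listen on UNIX domain socket" message comes from waitforlisten (common/autotest_common.sh@831-864), which gates every later rpc_cmd -s /var/tmp/bdevperf.sock call on the freshly launched bdevperf actually serving its RPC socket. A rough sketch of that gate; the probe RPC and retry interval here are assumptions rather than a verbatim copy of the helper:

waitforlisten_sketch() {
    local pid=$1 rpc_addr=${2:-/var/tmp/bdevperf.sock} max_retries=100 i
    [[ -n $pid ]] || return 1
    echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
    for ((i = max_retries; i != 0; i--)); do
        kill -0 "$pid" 2> /dev/null || return 1  # bail out if it already died
        # Probe choice (rpc_get_methods) is an assumption; any cheap RPC works.
        if [[ -S $rpc_addr ]] && scripts/rpc.py -s "$rpc_addr" rpc_get_methods &> /dev/null; then
            return 0
        fi
        sleep 0.1
    done
    return 1
}
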
00:23:15.355 10:12:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # local subsystem config
00:23:15.355 10:12:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # xtrace_disable
00:23:15.355 10:12:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
00:23:15.355 10:12:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}"
00:23:15.355 10:12:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF
00:23:15.355 {
00:23:15.355 "params": {
00:23:15.355 "name": "Nvme$subsystem",
00:23:15.355 "trtype": "$TEST_TRANSPORT",
00:23:15.355 "traddr": "$NVMF_FIRST_TARGET_IP",
00:23:15.355 "adrfam": "ipv4",
00:23:15.355 "trsvcid": "$NVMF_PORT",
00:23:15.355 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:23:15.355 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:23:15.355 "hdgst": ${hdgst:-false},
00:23:15.355 "ddgst": ${ddgst:-false}
00:23:15.355 },
00:23:15.355 "method": "bdev_nvme_attach_controller"
00:23:15.355 }
00:23:15.355 EOF
00:23:15.355 )")
00:23:15.355 10:12:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat
[the identical "for subsystem" / "config+=(...)" / "cat" trace above repeats once per subsystem, for the remaining nine of the ten subsystems requested]
00:23:15.356 10:12:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@556 -- # jq .
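Note: the xtrace above is nvmf/common.sh's gen_nvmf_target_json emitting one bdev_nvme_attach_controller fragment per requested subsystem. A minimal sketch of the pattern, reconstructed from the trace (the bare jq-validated array at the end is a simplification/assumption; the harness actually hands the generated JSON to bdevperf via --json /dev/fd/63, as seen in the bdevperf command line above):

gen_nvmf_target_json() {
	local subsystem config=()
	# One JSON fragment per requested subsystem (defaults to subsystem 1).
	for subsystem in "${@:-1}"; do
		config+=("$(cat <<-EOF
			{
			  "params": {
			    "name": "Nvme$subsystem",
			    "trtype": "$TEST_TRANSPORT",
			    "traddr": "$NVMF_FIRST_TARGET_IP",
			    "adrfam": "ipv4",
			    "trsvcid": "$NVMF_PORT",
			    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
			    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
			    "hdgst": ${hdgst:-false},
			    "ddgst": ${ddgst:-false}
			  },
			  "method": "bdev_nvme_attach_controller"
			}
		EOF
		)")
	done
	# "${config[*]}" joins the fragments on the first IFS character, which
	# is why the resolved printf output below is one comma-separated stream;
	# jq then validates and pretty-prints the result.
	local IFS=,
	jq . <<< "[ ${config[*]} ]"
}

Called as gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10, this produces the ten fully resolved Nvme1..Nvme10 controller entries shown in the printf output that follows.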
00:23:15.356 10:12:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@557 -- # IFS=, 00:23:15.356 10:12:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:23:15.356 "params": { 00:23:15.356 "name": "Nvme1", 00:23:15.356 "trtype": "tcp", 00:23:15.356 "traddr": "10.0.0.2", 00:23:15.356 "adrfam": "ipv4", 00:23:15.356 "trsvcid": "4420", 00:23:15.356 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:15.356 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:15.356 "hdgst": false, 00:23:15.356 "ddgst": false 00:23:15.356 }, 00:23:15.356 "method": "bdev_nvme_attach_controller" 00:23:15.356 },{ 00:23:15.356 "params": { 00:23:15.356 "name": "Nvme2", 00:23:15.356 "trtype": "tcp", 00:23:15.356 "traddr": "10.0.0.2", 00:23:15.356 "adrfam": "ipv4", 00:23:15.356 "trsvcid": "4420", 00:23:15.356 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:15.356 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:15.356 "hdgst": false, 00:23:15.356 "ddgst": false 00:23:15.356 }, 00:23:15.356 "method": "bdev_nvme_attach_controller" 00:23:15.356 },{ 00:23:15.356 "params": { 00:23:15.356 "name": "Nvme3", 00:23:15.356 "trtype": "tcp", 00:23:15.356 "traddr": "10.0.0.2", 00:23:15.356 "adrfam": "ipv4", 00:23:15.356 "trsvcid": "4420", 00:23:15.356 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:23:15.356 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:23:15.356 "hdgst": false, 00:23:15.356 "ddgst": false 00:23:15.356 }, 00:23:15.356 "method": "bdev_nvme_attach_controller" 00:23:15.356 },{ 00:23:15.356 "params": { 00:23:15.356 "name": "Nvme4", 00:23:15.356 "trtype": "tcp", 00:23:15.356 "traddr": "10.0.0.2", 00:23:15.356 "adrfam": "ipv4", 00:23:15.356 "trsvcid": "4420", 00:23:15.356 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:23:15.356 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:23:15.356 "hdgst": false, 00:23:15.356 "ddgst": false 00:23:15.356 }, 00:23:15.356 "method": "bdev_nvme_attach_controller" 00:23:15.356 },{ 00:23:15.356 "params": { 00:23:15.356 "name": "Nvme5", 00:23:15.356 "trtype": "tcp", 00:23:15.356 "traddr": "10.0.0.2", 00:23:15.356 "adrfam": "ipv4", 00:23:15.356 "trsvcid": "4420", 00:23:15.356 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:23:15.356 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:23:15.356 "hdgst": false, 00:23:15.356 "ddgst": false 00:23:15.356 }, 00:23:15.356 "method": "bdev_nvme_attach_controller" 00:23:15.356 },{ 00:23:15.356 "params": { 00:23:15.356 "name": "Nvme6", 00:23:15.356 "trtype": "tcp", 00:23:15.356 "traddr": "10.0.0.2", 00:23:15.356 "adrfam": "ipv4", 00:23:15.356 "trsvcid": "4420", 00:23:15.356 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:23:15.356 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:23:15.356 "hdgst": false, 00:23:15.356 "ddgst": false 00:23:15.356 }, 00:23:15.356 "method": "bdev_nvme_attach_controller" 00:23:15.356 },{ 00:23:15.356 "params": { 00:23:15.356 "name": "Nvme7", 00:23:15.356 "trtype": "tcp", 00:23:15.356 "traddr": "10.0.0.2", 00:23:15.356 "adrfam": "ipv4", 00:23:15.356 "trsvcid": "4420", 00:23:15.356 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:23:15.356 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:23:15.356 "hdgst": false, 00:23:15.356 "ddgst": false 00:23:15.356 }, 00:23:15.356 "method": "bdev_nvme_attach_controller" 00:23:15.356 },{ 00:23:15.356 "params": { 00:23:15.356 "name": "Nvme8", 00:23:15.356 "trtype": "tcp", 00:23:15.356 "traddr": "10.0.0.2", 00:23:15.356 "adrfam": "ipv4", 00:23:15.356 "trsvcid": "4420", 00:23:15.356 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:23:15.356 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:23:15.356 "hdgst": false, 00:23:15.356 "ddgst": false 00:23:15.356 }, 00:23:15.356 "method": "bdev_nvme_attach_controller" 00:23:15.356 },{ 00:23:15.356 "params": { 00:23:15.356 "name": "Nvme9", 00:23:15.356 "trtype": "tcp", 00:23:15.356 "traddr": "10.0.0.2", 00:23:15.356 "adrfam": "ipv4", 00:23:15.356 "trsvcid": "4420", 00:23:15.356 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:23:15.356 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:23:15.356 "hdgst": false, 00:23:15.356 "ddgst": false 00:23:15.356 }, 00:23:15.356 "method": "bdev_nvme_attach_controller" 00:23:15.356 },{ 00:23:15.356 "params": { 00:23:15.356 "name": "Nvme10", 00:23:15.356 "trtype": "tcp", 00:23:15.356 "traddr": "10.0.0.2", 00:23:15.356 "adrfam": "ipv4", 00:23:15.356 "trsvcid": "4420", 00:23:15.357 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:23:15.357 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:23:15.357 "hdgst": false, 00:23:15.357 "ddgst": false 00:23:15.357 }, 00:23:15.357 "method": "bdev_nvme_attach_controller" 00:23:15.357 }' 00:23:15.357 [2024-07-25 10:12:00.450658] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:23:15.357 [2024-07-25 10:12:00.450758] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid488292 ] 00:23:15.357 EAL: No free 2048 kB hugepages reported on node 1 00:23:15.357 [2024-07-25 10:12:00.520185] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:15.614 [2024-07-25 10:12:00.633305] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:17.507 Running I/O for 10 seconds... 00:23:17.507 10:12:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:17.507 10:12:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # return 0 00:23:17.507 10:12:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:23:17.507 10:12:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.507 10:12:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:17.507 10:12:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:17.507 10:12:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@130 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:17.507 10:12:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@132 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:23:17.507 10:12:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:23:17.507 10:12:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:23:17.507 10:12:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@57 -- # local ret=1 00:23:17.507 10:12:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local i 00:23:17.507 10:12:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
target/shutdown.sh@59 -- # (( i = 10 ))
00:23:17.507 10:12:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 ))
00:23:17.507 10:12:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1
00:23:17.507 10:12:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops'
00:23:17.507 10:12:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:17.507 10:12:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
00:23:17.507 10:12:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:17.765 10:12:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=67
00:23:17.765 10:12:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']'
00:23:17.765 10:12:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25
00:23:18.035 10:12:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- ))
00:23:18.035 10:12:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 ))
00:23:18.035 10:12:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1
00:23:18.035 10:12:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:18.035 10:12:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops'
00:23:18.035 10:12:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
00:23:18.035 10:12:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:18.035 10:12:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=131
00:23:18.035 10:12:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']'
00:23:18.035 10:12:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # ret=0
00:23:18.035 10:12:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # break
00:23:18.035 10:12:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@69 -- # return 0
00:23:18.035 10:12:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@135 -- # killprocess 488096
00:23:18.035 10:12:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@950 -- # '[' -z 488096 ']'
00:23:18.035 10:12:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # kill -0 488096
00:23:18.036 10:12:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@955 -- # uname
00:23:18.036 10:12:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
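Note: the loop traced above is target/shutdown.sh's waitforio, which polls bdevperf over its RPC socket until Nvme1n1 has completed at least 100 reads (67 on the first pass, 131 after one 0.25 s sleep). A sketch reconstructed from the trace, with the harness's error reporting trimmed (rpc_cmd is assumed to be the autotest wrapper around scripts/rpc.py for the given -s socket):

waitforio() {
	local rpc=$1 bdev=$2
	local ret=1 i read_io_count
	[ -z "$rpc" ] && return 1   # no RPC socket given
	[ -z "$bdev" ] && return 1  # no bdev name given
	# Poll up to 10 times, 0.25 s apart, until the bdev shows >= 100 reads.
	for ((i = 10; i != 0; i--)); do
		read_io_count=$(rpc_cmd -s "$rpc" bdev_get_iostat -b "$bdev" | jq -r '.bdevs[0].num_read_ops')
		if [ "$read_io_count" -ge 100 ]; then
			ret=0
			break
		fi
		sleep 0.25
	done
	return $ret
}

Only after waitforio /var/tmp/bdevperf.sock Nvme1n1 returns 0 does the test proceed to killprocess 488096, shutting the target down while bdevperf still has I/O in flight, which is what produces the qpair teardown noise below.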
00:23:18.036 10:12:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 488096
00:23:18.036 10:12:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:23:18.036 10:12:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:23:18.036 10:12:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 488096'
killing process with pid 488096
00:23:18.036 10:12:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@969 -- # kill 488096
00:23:18.036 10:12:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@974 -- # wait 488096
00:23:18.036 [2024-07-25 10:12:03.002198] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19bfdc0 is same with the state(5) to be set
[the same tcp.c:1653 recv-state *ERROR* line repeats for tqpair=0x19bfdc0 through 10:12:03.003127]
00:23:18.037 [2024-07-25 10:12:03.005528] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c1f60 is same with the state(5) to be set
[the same tcp.c:1653 recv-state *ERROR* line repeats for tqpair=0x19c1f60 through 10:12:03.006404]
00:23:18.038 [2024-07-25 10:12:03.010703] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:23:18.038 [2024-07-25 10:12:03.011531] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c0280 is same with the state(5) to be set
[the same tcp.c:1653 recv-state *ERROR* line repeats for tqpair=0x19c0280 through 10:12:03.011767]
00:23:18.038 [2024-07-25 10:12:03.011941] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:23:18.038 [2024-07-25 10:12:03.013836] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:23:18.038 [2024-07-25 10:12:03.013865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:18.038 [2024-07-25 10:12:03.013883] nvme_qpair.c:
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:18.038 [2024-07-25 10:12:03.013897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.038 [2024-07-25 10:12:03.013911] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:18.038 [2024-07-25 10:12:03.013935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.038 [2024-07-25 10:12:03.013950] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:18.038 [2024-07-25 10:12:03.013964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.038 [2024-07-25 10:12:03.013977] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd9eb40 is same with the state(5) to be set 00:23:18.038 [2024-07-25 10:12:03.014069] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:18.038 [2024-07-25 10:12:03.014090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.038 [2024-07-25 10:12:03.014105] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:18.038 [2024-07-25 10:12:03.014119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.038 [2024-07-25 10:12:03.014133] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:18.038 [2024-07-25 10:12:03.014147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.038 [2024-07-25 10:12:03.014161] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:18.038 [2024-07-25 10:12:03.014180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.038 [2024-07-25 10:12:03.014166] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c0740 is same with the state(5) to be set 00:23:18.038 [2024-07-25 10:12:03.014193] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0e120 is same with the state(5) to be set 00:23:18.038 [2024-07-25 10:12:03.014261] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:18.038 [2024-07-25 10:12:03.014286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.038 [2024-07-25 10:12:03.014301] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:18.038 [2024-07-25 10:12:03.014316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0
00:23:18.038 [2024-07-25 10:12:03.014330] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:23:18.038 [2024-07-25 10:12:03.014344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:18.038 [2024-07-25 10:12:03.014357] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:23:18.038 [2024-07-25 10:12:03.014371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:18.038 [2024-07-25 10:12:03.014384] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cf200 is same with the state(5) to be set
00:23:18.038 [2024-07-25 10:12:03.014452] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:23:18.038 [2024-07-25 10:12:03.014474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:18.038 [2024-07-25 10:12:03.014494] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:23:18.038 [2024-07-25 10:12:03.014509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:18.038 [2024-07-25 10:12:03.014524] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:23:18.038 [2024-07-25 10:12:03.014538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:18.038 [2024-07-25 10:12:03.014553] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:23:18.038 [2024-07-25 10:12:03.014567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:18.038 [2024-07-25 10:12:03.014580] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdad950 is same with the state(5) to be set
00:23:18.039 [2024-07-25 10:12:03.015384] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c10e0 is same with the state(5) to be set
[the same tcp.c:1653 recv-state *ERROR* line repeats for tqpair=0x19c10e0 through 10:12:03.016290, interleaved mid-line in the raw console with the WRITE abort messages that follow]
00:23:18.039 [2024-07-25 10:12:03.015846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:18.039 [2024-07-25 10:12:03.015872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:18.039 [2024-07-25 10:12:03.015903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:18.039 [2024-07-25 10:12:03.015919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:18.039 [2024-07-25 10:12:03.015937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:18.039 [2024-07-25 10:12:03.015952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:18.039 [2024-07-25 10:12:03.015969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:18.040 [2024-07-25 10:12:03.015990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:18.040 [2024-07-25 10:12:03.016009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:18.040 [2024-07-25 10:12:03.016024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:18.040 [2024-07-25 10:12:03.016040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:18.040 [2024-07-25 10:12:03.016055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:18.040 [2024-07-25 10:12:03.016071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:18.040 [2024-07-25 10:12:03.016086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:18.040 [2024-07-25 10:12:03.016102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:18.040 [2024-07-25 10:12:03.016118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:18.040 [2024-07-25 10:12:03.016136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:18.040 [2024-07-25 10:12:03.016151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:18.040 [2024-07-25 10:12:03.016172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:18.040 [2024-07-25 10:12:03.016190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:18.040 [2024-07-25 10:12:03.016207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:18.040 [2024-07-25 10:12:03.016222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:18.040 [2024-07-25 10:12:03.016239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:18.040 [2024-07-25 10:12:03.016255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:18.040 [2024-07-25 10:12:03.016272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:18.040 [2024-07-25 10:12:03.016287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:18.040 [2024-07-25 10:12:03.016304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:18.040 [2024-07-25 10:12:03.016319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0
dnr:0 00:23:18.040 [2024-07-25 10:12:03.016336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.040 [2024-07-25 10:12:03.016350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.040 [2024-07-25 10:12:03.016366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.040 [2024-07-25 10:12:03.016380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.040 [2024-07-25 10:12:03.016397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.040 [2024-07-25 10:12:03.016412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.040 [2024-07-25 10:12:03.016438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.040 [2024-07-25 10:12:03.016458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.040 [2024-07-25 10:12:03.016476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.040 [2024-07-25 10:12:03.016491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.040 [2024-07-25 10:12:03.016507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.041 [2024-07-25 10:12:03.016521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.041 [2024-07-25 10:12:03.016537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.041 [2024-07-25 10:12:03.016551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.041 [2024-07-25 10:12:03.016567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.041 [2024-07-25 10:12:03.016582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.041 [2024-07-25 10:12:03.016598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.041 [2024-07-25 10:12:03.016612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.041 [2024-07-25 10:12:03.016628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.041 [2024-07-25 10:12:03.016642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.041 [2024-07-25 
10:12:03.016658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.041 [2024-07-25 10:12:03.016672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.041 [2024-07-25 10:12:03.016687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.041 [2024-07-25 10:12:03.016701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.041 [2024-07-25 10:12:03.016718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.041 [2024-07-25 10:12:03.016732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.041 [2024-07-25 10:12:03.016748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.041 [2024-07-25 10:12:03.016761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.041 [2024-07-25 10:12:03.016777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.041 [2024-07-25 10:12:03.016791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.041 [2024-07-25 10:12:03.016807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.041 [2024-07-25 10:12:03.016821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.041 [2024-07-25 10:12:03.016840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.041 [2024-07-25 10:12:03.016855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.041 [2024-07-25 10:12:03.016871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.041 [2024-07-25 10:12:03.016885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.041 [2024-07-25 10:12:03.016901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.041 [2024-07-25 10:12:03.016915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.041 [2024-07-25 10:12:03.016941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.041 [2024-07-25 10:12:03.016955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.041 [2024-07-25 10:12:03.016971] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.041 [2024-07-25 10:12:03.016985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.041 [2024-07-25 10:12:03.017000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.041 [2024-07-25 10:12:03.017026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.041 [2024-07-25 10:12:03.017043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.041 [2024-07-25 10:12:03.017057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.041 [2024-07-25 10:12:03.017073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.041 [2024-07-25 10:12:03.017087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.041 [2024-07-25 10:12:03.017103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.041 [2024-07-25 10:12:03.017117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.041 [2024-07-25 10:12:03.017132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.041 [2024-07-25 10:12:03.017146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.041 [2024-07-25 10:12:03.017163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.041 [2024-07-25 10:12:03.017176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.041 [2024-07-25 10:12:03.017192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.041 [2024-07-25 10:12:03.017205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.041 [2024-07-25 10:12:03.017221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.041 [2024-07-25 10:12:03.017235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.041 [2024-07-25 10:12:03.017258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.041 [2024-07-25 10:12:03.017273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.041 [2024-07-25 10:12:03.017289] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.041 [2024-07-25 10:12:03.017303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.041 [2024-07-25 10:12:03.017319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.042 [2024-07-25 10:12:03.017333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.042 [2024-07-25 10:12:03.017347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.042 [2024-07-25 10:12:03.017361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.042 [2024-07-25 10:12:03.017377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.042 [2024-07-25 10:12:03.017390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.042 [2024-07-25 10:12:03.017406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.042 [2024-07-25 10:12:03.017420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.042 [2024-07-25 10:12:03.017444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.042 [2024-07-25 10:12:03.017459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.042 [2024-07-25 10:12:03.017474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.042 [2024-07-25 10:12:03.017489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.042 [2024-07-25 10:12:03.017504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.042 [2024-07-25 10:12:03.017524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.042 [2024-07-25 10:12:03.017541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.042 [2024-07-25 10:12:03.017555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.042 [2024-07-25 10:12:03.017571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.042 [2024-07-25 10:12:03.017586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.042 [2024-07-25 10:12:03.017601] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.042 [2024-07-25 10:12:03.017615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.042 [2024-07-25 10:12:03.017630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.042 [2024-07-25 10:12:03.017648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.042 [2024-07-25 10:12:03.017664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.042 [2024-07-25 10:12:03.017678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.042 [2024-07-25 10:12:03.017694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.042 [2024-07-25 10:12:03.017708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.042 [2024-07-25 10:12:03.017724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.042 [2024-07-25 10:12:03.017738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.042 [2024-07-25 10:12:03.017753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.042 [2024-07-25 10:12:03.017767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.042 [2024-07-25 10:12:03.017783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.042 [2024-07-25 10:12:03.017797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.042 [2024-07-25 10:12:03.017812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.042 [2024-07-25 10:12:03.017826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.042 [2024-07-25 10:12:03.017842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.042 [2024-07-25 10:12:03.017856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.042 [2024-07-25 10:12:03.017871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.042 [2024-07-25 10:12:03.017885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.042 [2024-07-25 10:12:03.017977] 
bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xd036b0 was disconnected and freed. reset controller. 00:23:18.042 [2024-07-25 10:12:03.018066] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf7610 is same with the state(5) to be set 00:23:18.042 [2024-07-25 10:12:03.018095] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf7610 is same with the state(5) to be set 00:23:18.043 [2024-07-25 10:12:03.018110] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf7610 is same with the state(5) to be set 00:23:18.043 [2024-07-25 10:12:03.018123] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf7610 is same with the state(5) to be set 00:23:18.043 [2024-07-25 10:12:03.018136] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf7610 is same with the state(5) to be set 00:23:18.043 [2024-07-25 10:12:03.018148] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf7610 is same with the state(5) to be set 00:23:18.043 [2024-07-25 10:12:03.018161] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf7610 is same with the state(5) to be set 00:23:18.043 [2024-07-25 10:12:03.018180] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf7610 is same with the state(5) to be set 00:23:18.043 [2024-07-25 10:12:03.018194] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf7610 is same with the state(5) to be set 00:23:18.043 [2024-07-25 10:12:03.018208] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf7610 is same with the state(5) to be set 00:23:18.043 [2024-07-25 10:12:03.018220] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf7610 is same with the state(5) to be set 00:23:18.043 [2024-07-25 10:12:03.018233] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf7610 is same with the state(5) to be set 00:23:18.043 [2024-07-25 10:12:03.018246] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf7610 is same with the state(5) to be set 00:23:18.043 [2024-07-25 10:12:03.018258] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf7610 is same with the state(5) to be set 00:23:18.043 [2024-07-25 10:12:03.018271] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf7610 is same with the state(5) to be set 00:23:18.043 [2024-07-25 10:12:03.018284] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf7610 is same with the state(5) to be set 00:23:18.043 [2024-07-25 10:12:03.018297] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf7610 is same with the state(5) to be set 00:23:18.043 [2024-07-25 10:12:03.018309] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf7610 is same with the state(5) to be set 00:23:18.043 [2024-07-25 10:12:03.018322] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf7610 is same with the state(5) to be set 00:23:18.043 [2024-07-25 10:12:03.018334] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf7610 is same with the state(5) to be set 00:23:18.043 [2024-07-25 10:12:03.018347] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf7610 is same with the state(5) 
to be set 00:23:18.043 [2024-07-25 10:12:03.018361] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf7610 is same with the state(5) to be set 00:23:18.043 [2024-07-25 10:12:03.018374] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf7610 is same with the state(5) to be set 00:23:18.043 [2024-07-25 10:12:03.018387] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf7610 is same with the state(5) to be set 00:23:18.043 [2024-07-25 10:12:03.018399] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf7610 is same with the state(5) to be set 00:23:18.043 [2024-07-25 10:12:03.018412] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf7610 is same with the state(5) to be set 00:23:18.043 [2024-07-25 10:12:03.018424] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf7610 is same with the state(5) to be set 00:23:18.043 [2024-07-25 10:12:03.018447] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf7610 is same with the state(5) to be set 00:23:18.043 [2024-07-25 10:12:03.018461] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf7610 is same with the state(5) to be set 00:23:18.043 [2024-07-25 10:12:03.018474] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf7610 is same with the state(5) to be set 00:23:18.043 [2024-07-25 10:12:03.018486] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf7610 is same with the state(5) to be set 00:23:18.043 [2024-07-25 10:12:03.018499] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf7610 is same with the state(5) to be set 00:23:18.043 [2024-07-25 10:12:03.018512] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf7610 is same with the state(5) to be set 00:23:18.043 [2024-07-25 10:12:03.018525] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf7610 is same with the state(5) to be set 00:23:18.043 [2024-07-25 10:12:03.018542] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf7610 is same with the state(5) to be set 00:23:18.043 [2024-07-25 10:12:03.018555] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf7610 is same with the state(5) to be set 00:23:18.043 [2024-07-25 10:12:03.018568] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf7610 is same with the state(5) to be set 00:23:18.043 [2024-07-25 10:12:03.018581] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf7610 is same with the state(5) to be set 00:23:18.043 [2024-07-25 10:12:03.018593] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf7610 is same with the state(5) to be set 00:23:18.043 [2024-07-25 10:12:03.018606] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf7610 is same with the state(5) to be set 00:23:18.043 [2024-07-25 10:12:03.018619] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf7610 is same with the state(5) to be set 00:23:18.043 [2024-07-25 10:12:03.018632] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf7610 is same with the state(5) to be set 00:23:18.043 [2024-07-25 10:12:03.018644] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x1bf7610 is same with the state(5) to be set 00:23:18.043 [2024-07-25 10:12:03.018657] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf7610 is same with the state(5) to be set 00:23:18.043 [2024-07-25 10:12:03.018671] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf7610 is same with the state(5) to be set 00:23:18.043 [2024-07-25 10:12:03.018683] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf7610 is same with the state(5) to be set 00:23:18.043 [2024-07-25 10:12:03.018696] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf7610 is same with the state(5) to be set 00:23:18.043 [2024-07-25 10:12:03.018708] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf7610 is same with the state(5) to be set 00:23:18.043 [2024-07-25 10:12:03.018735] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf7610 is same with the state(5) to be set 00:23:18.043 [2024-07-25 10:12:03.018748] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf7610 is same with the state(5) to be set 00:23:18.043 [2024-07-25 10:12:03.018760] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf7610 is same with the state(5) to be set 00:23:18.043 [2024-07-25 10:12:03.018772] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf7610 is same with the state(5) to be set 00:23:18.043 [2024-07-25 10:12:03.018785] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf7610 is same with the state(5) to be set 00:23:18.043 [2024-07-25 10:12:03.018798] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf7610 is same with the state(5) to be set 00:23:18.043 [2024-07-25 10:12:03.018810] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf7610 is same with the state(5) to be set 00:23:18.043 [2024-07-25 10:12:03.018823] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf7610 is same with the state(5) to be set 00:23:18.043 [2024-07-25 10:12:03.018835] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf7610 is same with the state(5) to be set 00:23:18.043 [2024-07-25 10:12:03.018848] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf7610 is same with the state(5) to be set 00:23:18.043 [2024-07-25 10:12:03.018861] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf7610 is same with the state(5) to be set 00:23:18.043 [2024-07-25 10:12:03.018872] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf7610 is same with the state(5) to be set 00:23:18.043 [2024-07-25 10:12:03.018885] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf7610 is same with the state(5) to be set 00:23:18.043 [2024-07-25 10:12:03.018900] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf7610 is same with the state(5) to be set 00:23:18.044 [2024-07-25 10:12:03.018913] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf7610 is same with the state(5) to be set 00:23:18.044 [2024-07-25 10:12:03.019908] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf7ad0 is same with the state(5) to be set 00:23:18.044 [2024-07-25 10:12:03.019945] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf7ad0 is same with the state(5) to be set 00:23:18.044 [2024-07-25 10:12:03.019962] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bf7ad0 is same with the state(5) to be set 00:23:18.044 [2024-07-25 10:12:03.020324] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c1a80 is same with the state(5) to be set 00:23:18.044 [2024-07-25 10:12:03.020350] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c1a80 is same with the state(5) to be set 00:23:18.044 [2024-07-25 10:12:03.020364] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c1a80 is same with the state(5) to be set 00:23:18.044 [2024-07-25 10:12:03.020377] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c1a80 is same with the state(5) to be set 00:23:18.044 [2024-07-25 10:12:03.020390] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c1a80 is same with the state(5) to be set 00:23:18.044 [2024-07-25 10:12:03.020402] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c1a80 is same with the state(5) to be set 00:23:18.044 [2024-07-25 10:12:03.020415] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c1a80 is same with the state(5) to be set 00:23:18.044 [2024-07-25 10:12:03.020435] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c1a80 is same with the state(5) to be set 00:23:18.044 [2024-07-25 10:12:03.020450] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c1a80 is same with the state(5) to be set 00:23:18.044 [2024-07-25 10:12:03.020463] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c1a80 is same with the state(5) to be set 00:23:18.044 [2024-07-25 10:12:03.020475] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c1a80 is same with the state(5) to be set 00:23:18.044 [2024-07-25 10:12:03.020487] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c1a80 is same with the state(5) to be set 00:23:18.044 [2024-07-25 10:12:03.020500] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c1a80 is same with the state(5) to be set 00:23:18.044 [2024-07-25 10:12:03.020512] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c1a80 is same with the state(5) to be set 00:23:18.044 [2024-07-25 10:12:03.020525] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c1a80 is same with the state(5) to be set 00:23:18.044 [2024-07-25 10:12:03.020537] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c1a80 is same with the state(5) to be set 00:23:18.044 [2024-07-25 10:12:03.020550] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c1a80 is same with the state(5) to be set 00:23:18.044 [2024-07-25 10:12:03.020562] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c1a80 is same with the state(5) to be set 00:23:18.044 [2024-07-25 10:12:03.020574] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c1a80 is same with the state(5) to be set 00:23:18.044 [2024-07-25 10:12:03.020587] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c1a80 is same with the 
state(5) to be set 00:23:18.044 [2024-07-25 10:12:03.020599] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c1a80 is same with the state(5) to be set 00:23:18.044 [2024-07-25 10:12:03.020612] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c1a80 is same with the state(5) to be set 00:23:18.044 [2024-07-25 10:12:03.020625] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c1a80 is same with the state(5) to be set 00:23:18.044 [2024-07-25 10:12:03.020646] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c1a80 is same with the state(5) to be set 00:23:18.044 [2024-07-25 10:12:03.020659] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c1a80 is same with the state(5) to be set 00:23:18.044 [2024-07-25 10:12:03.020672] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c1a80 is same with the state(5) to be set 00:23:18.044 [2024-07-25 10:12:03.020685] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c1a80 is same with the state(5) to be set 00:23:18.044 [2024-07-25 10:12:03.020697] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c1a80 is same with the state(5) to be set 00:23:18.044 [2024-07-25 10:12:03.020710] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c1a80 is same with the state(5) to be set 00:23:18.044 [2024-07-25 10:12:03.020723] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c1a80 is same with the state(5) to be set 00:23:18.044 [2024-07-25 10:12:03.020735] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c1a80 is same with the state(5) to be set 00:23:18.044 [2024-07-25 10:12:03.020748] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c1a80 is same with the state(5) to be set 00:23:18.044 [2024-07-25 10:12:03.020760] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c1a80 is same with the state(5) to be set 00:23:18.044 [2024-07-25 10:12:03.020773] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c1a80 is same with the state(5) to be set 00:23:18.044 [2024-07-25 10:12:03.020785] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c1a80 is same with the state(5) to be set 00:23:18.044 [2024-07-25 10:12:03.020799] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c1a80 is same with the state(5) to be set 00:23:18.044 [2024-07-25 10:12:03.020811] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c1a80 is same with the state(5) to be set 00:23:18.044 [2024-07-25 10:12:03.020823] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c1a80 is same with the state(5) to be set 00:23:18.044 [2024-07-25 10:12:03.020835] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c1a80 is same with the state(5) to be set 00:23:18.044 [2024-07-25 10:12:03.020848] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c1a80 is same with the state(5) to be set 00:23:18.044 [2024-07-25 10:12:03.020861] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c1a80 is same with the state(5) to be set 00:23:18.044 [2024-07-25 10:12:03.020874] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x19c1a80 is same with the state(5) to be set 00:23:18.044 [2024-07-25 10:12:03.020886] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c1a80 is same with the state(5) to be set 00:23:18.044 [2024-07-25 10:12:03.020898] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c1a80 is same with the state(5) to be set 00:23:18.044 [2024-07-25 10:12:03.020911] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c1a80 is same with the state(5) to be set 00:23:18.044 [2024-07-25 10:12:03.020923] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c1a80 is same with the state(5) to be set 00:23:18.044 [2024-07-25 10:12:03.020935] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c1a80 is same with the state(5) to be set 00:23:18.044 [2024-07-25 10:12:03.020947] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c1a80 is same with the state(5) to be set 00:23:18.044 [2024-07-25 10:12:03.020959] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c1a80 is same with the state(5) to be set 00:23:18.044 [2024-07-25 10:12:03.020972] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c1a80 is same with the state(5) to be set 00:23:18.044 [2024-07-25 10:12:03.020987] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c1a80 is same with the state(5) to be set 00:23:18.044 [2024-07-25 10:12:03.021000] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c1a80 is same with the state(5) to be set 00:23:18.044 [2024-07-25 10:12:03.021012] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c1a80 is same with the state(5) to be set 00:23:18.044 [2024-07-25 10:12:03.021025] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c1a80 is same with the state(5) to be set 00:23:18.045 [2024-07-25 10:12:03.021037] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c1a80 is same with the state(5) to be set 00:23:18.045 [2024-07-25 10:12:03.021049] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c1a80 is same with the state(5) to be set 00:23:18.045 [2024-07-25 10:12:03.021061] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c1a80 is same with the state(5) to be set 00:23:18.045 [2024-07-25 10:12:03.021073] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c1a80 is same with the state(5) to be set 00:23:18.045 [2024-07-25 10:12:03.021085] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c1a80 is same with the state(5) to be set 00:23:18.045 [2024-07-25 10:12:03.021097] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c1a80 is same with the state(5) to be set 00:23:18.045 [2024-07-25 10:12:03.021109] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c1a80 is same with the state(5) to be set 00:23:18.045 [2024-07-25 10:12:03.021121] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c1a80 is same with the state(5) to be set 00:23:18.045 [2024-07-25 10:12:03.021133] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c1a80 is same with the state(5) to be set 00:23:18.045 [2024-07-25 
10:12:03.038667] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:18.045 [2024-07-25 10:12:03.038740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.045 [2024-07-25 10:12:03.038758] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:18.045 [2024-07-25 10:12:03.038773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.045 [2024-07-25 10:12:03.038788] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:18.045 [2024-07-25 10:12:03.038802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.045 [2024-07-25 10:12:03.038817] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:18.045 [2024-07-25 10:12:03.038830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.045 [2024-07-25 10:12:03.038844] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc11420 is same with the state(5) to be set 00:23:18.045 [2024-07-25 10:12:03.038886] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd9eb40 (9): Bad file descriptor 00:23:18.045 [2024-07-25 10:12:03.038958] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:18.045 [2024-07-25 10:12:03.038979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.045 [2024-07-25 10:12:03.038995] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:18.045 [2024-07-25 10:12:03.039019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.045 [2024-07-25 10:12:03.039035] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:18.045 [2024-07-25 10:12:03.039050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.045 [2024-07-25 10:12:03.039064] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:18.045 [2024-07-25 10:12:03.039078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.045 [2024-07-25 10:12:03.039091] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0d320 is same with the state(5) to be set 00:23:18.045 [2024-07-25 10:12:03.039140] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:18.045 [2024-07-25 10:12:03.039161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.045 [2024-07-25 10:12:03.039177] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:18.045 [2024-07-25 10:12:03.039192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.045 [2024-07-25 10:12:03.039206] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:18.045 [2024-07-25 10:12:03.039220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.045 [2024-07-25 10:12:03.039234] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:18.045 [2024-07-25 10:12:03.039247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.045 [2024-07-25 10:12:03.039261] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0dba0 is same with the state(5) to be set 00:23:18.045 [2024-07-25 10:12:03.039312] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:18.045 [2024-07-25 10:12:03.039332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.045 [2024-07-25 10:12:03.039348] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:18.045 [2024-07-25 10:12:03.039362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.045 [2024-07-25 10:12:03.039376] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:18.045 [2024-07-25 10:12:03.039389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.045 [2024-07-25 10:12:03.039404] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:18.045 [2024-07-25 10:12:03.039417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.045 [2024-07-25 10:12:03.039438] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd85b30 is same with the state(5) to be set 00:23:18.045 [2024-07-25 10:12:03.039468] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0e120 (9): Bad file descriptor 00:23:18.045 [2024-07-25 10:12:03.039516] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:18.045 [2024-07-25 10:12:03.039542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.045 [2024-07-25 10:12:03.039558] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:18.045 [2024-07-25 10:12:03.039572] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.045 [2024-07-25 10:12:03.039587] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:18.045 [2024-07-25 10:12:03.039610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.045 [2024-07-25 10:12:03.039624] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:18.045 [2024-07-25 10:12:03.039637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.045 [2024-07-25 10:12:03.039651] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd9cad0 is same with the state(5) to be set 00:23:18.045 [2024-07-25 10:12:03.039700] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:18.045 [2024-07-25 10:12:03.039721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.045 [2024-07-25 10:12:03.039737] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:18.045 [2024-07-25 10:12:03.039751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.045 [2024-07-25 10:12:03.039765] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:18.045 [2024-07-25 10:12:03.039779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.046 [2024-07-25 10:12:03.039794] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:18.046 [2024-07-25 10:12:03.039809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.046 [2024-07-25 10:12:03.039823] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6e3610 is same with the state(5) to be set 00:23:18.046 [2024-07-25 10:12:03.039851] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7cf200 (9): Bad file descriptor 00:23:18.046 [2024-07-25 10:12:03.039891] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdad950 (9): Bad file descriptor 00:23:18.046 [2024-07-25 10:12:03.041211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.046 [2024-07-25 10:12:03.041238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.046 [2024-07-25 10:12:03.041266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.046 [2024-07-25 10:12:03.041283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0
00:23:18.046 [2024-07-25 10:12:03.041301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:18.046 [2024-07-25 10:12:03.041316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 61 further identical WRITE command/completion pairs trimmed: cid:3-63, lba:16768-24448, len:128, each completed ABORTED - SQ DELETION (00/08) ...]
00:23:18.048 [2024-07-25 10:12:03.043260] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdb470 is same with the state(5) to be set
00:23:18.048 [2024-07-25 10:12:03.043343] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xbdb470 was disconnected and freed. reset controller.
[... second abort burst trimmed: WRITE cid:55-63 (lba:23424-24448) then READ cid:0-54 (lba:16384-23296), each completed ABORTED - SQ DELETION (00/08) ...]
00:23:18.050 [2024-07-25 10:12:03.047299] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x16b6e30 was disconnected and freed. reset controller.
00:23:18.050 [2024-07-25 10:12:03.047437] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller
00:23:18.050 [2024-07-25 10:12:03.047605] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:23:18.050 [2024-07-25 10:12:03.048920] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xbdddd0 was disconnected and freed. reset controller.
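The "(00/08)" in each aborted completion above decodes as status code type 0x00 (generic command status) with status code 0x08, which the NVMe spec names "Command Aborted due to SQ Deletion": the submission queue was torn down mid-I/O while the controller was being reset, so every queued WRITE/READ came back aborted rather than failed. A minimal sketch of how an SPDK I/O completion callback could recognize this status via the public spdk/nvme.h definitions (the callback and message here are illustrative, not code from this test):

    #include <stdio.h>
    #include "spdk/nvme.h"

    /* Illustrative completion callback: classifies completions like the
     * "ABORTED - SQ DELETION (00/08)" entries in the log above. */
    static void io_complete_cb(void *ctx, const struct spdk_nvme_cpl *cpl)
    {
        if (spdk_nvme_cpl_is_error(cpl) &&
            cpl->status.sct == SPDK_NVME_SCT_GENERIC &&
            cpl->status.sc == SPDK_NVME_SC_ABORTED_SQ_DELETION) {
            /* SCT 0x00 / SC 0x08: the SQ was deleted (e.g. during a
             * controller reset); the I/O can typically be resubmitted
             * once the controller comes back. */
            printf("cid:%u aborted by SQ deletion\n", cpl->cid);
        }
    }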
00:23:18.050 [2024-07-25 10:12:03.050219] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller
00:23:18.050 [2024-07-25 10:12:03.050257] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc11420 (9): Bad file descriptor
00:23:18.050 [2024-07-25 10:12:03.050474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:18.050 [2024-07-25 10:12:03.050504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0e120 with addr=10.0.0.2, port=4420
00:23:18.050 [2024-07-25 10:12:03.050521] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0e120 is same with the state(5) to be set
[... five more flush failures trimmed: tqpair=0xc0d320, 0xc0dba0, 0xd85b30, 0xd9cad0, 0x6e3610, all "(9): Bad file descriptor" ...]
00:23:18.050 [2024-07-25 10:12:03.051607] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller
00:23:18.050 [2024-07-25 10:12:03.051657] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0e120 (9): Bad file descriptor
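The connect() failure above with errno = 111 is ECONNREFUSED on Linux, most likely because nothing is accepting on 10.0.0.2:4420 while the target side is being reset, so the reconnect attempt from nvme_tcp_qpair_connect_sock is refused until the listener is back. A standalone check confirms the errno mapping (plain C, nothing SPDK-specific):

    #include <errno.h>
    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        /* On Linux, errno 111 is ECONNREFUSED ("Connection refused"). */
        printf("ECONNREFUSED = %d: %s\n", ECONNREFUSED, strerror(ECONNREFUSED));
        return 0;
    }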
[... third abort burst trimmed: WRITE cid:59-63 (lba:23936-24448) then READ cid:0-58 (lba:16384-23808), each completed ABORTED - SQ DELETION (00/08) ...]
00:23:18.052 [2024-07-25 10:12:03.053780] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd02860 is same with the state(5) to be set
00:23:18.052 [2024-07-25 10:12:03.055031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[... next abort burst continues: WRITE cid:61-63 (lba:24192-24448) then READ cid:0-7 (lba:16384-17280), command/completion pairs trimmed, all ABORTED - SQ DELETION (00/08); the burst continues below ...]
00:23:18.052 [2024-07-25 10:12:03.055377] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.052 [2024-07-25 10:12:03.055393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.052 [2024-07-25 10:12:03.055407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.052 [2024-07-25 10:12:03.055423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.052 [2024-07-25 10:12:03.055445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.052 [2024-07-25 10:12:03.055463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.052 [2024-07-25 10:12:03.055478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.052 [2024-07-25 10:12:03.055494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.052 [2024-07-25 10:12:03.055508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.052 [2024-07-25 10:12:03.055524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.052 [2024-07-25 10:12:03.055538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.052 [2024-07-25 10:12:03.055559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.052 [2024-07-25 10:12:03.055574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.052 [2024-07-25 10:12:03.055590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.052 [2024-07-25 10:12:03.055604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.052 [2024-07-25 10:12:03.055621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.052 [2024-07-25 10:12:03.055635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.052 [2024-07-25 10:12:03.055651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.052 [2024-07-25 10:12:03.055665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.052 [2024-07-25 10:12:03.055681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.052 [2024-07-25 10:12:03.055695] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.052 [2024-07-25 10:12:03.055712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.052 [2024-07-25 10:12:03.055726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.052 [2024-07-25 10:12:03.055743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.052 [2024-07-25 10:12:03.055757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.052 [2024-07-25 10:12:03.055773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.052 [2024-07-25 10:12:03.055787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.052 [2024-07-25 10:12:03.055805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.052 [2024-07-25 10:12:03.055819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.052 [2024-07-25 10:12:03.055835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.052 [2024-07-25 10:12:03.055849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.052 [2024-07-25 10:12:03.055865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.053 [2024-07-25 10:12:03.055879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.053 [2024-07-25 10:12:03.055896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.053 [2024-07-25 10:12:03.055910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.053 [2024-07-25 10:12:03.055926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.053 [2024-07-25 10:12:03.055945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.053 [2024-07-25 10:12:03.055961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.053 [2024-07-25 10:12:03.055976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.053 [2024-07-25 10:12:03.055992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.053 [2024-07-25 10:12:03.056006] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.053 [2024-07-25 10:12:03.056022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.053 [2024-07-25 10:12:03.056037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.053 [2024-07-25 10:12:03.056054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.053 [2024-07-25 10:12:03.056068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.053 [2024-07-25 10:12:03.056084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.053 [2024-07-25 10:12:03.056099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.053 [2024-07-25 10:12:03.056115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.053 [2024-07-25 10:12:03.056129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.053 [2024-07-25 10:12:03.056145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.053 [2024-07-25 10:12:03.056158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.053 [2024-07-25 10:12:03.056174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.053 [2024-07-25 10:12:03.056187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.053 [2024-07-25 10:12:03.056204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.053 [2024-07-25 10:12:03.056217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.053 [2024-07-25 10:12:03.056233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.053 [2024-07-25 10:12:03.056247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.053 [2024-07-25 10:12:03.056263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.053 [2024-07-25 10:12:03.056277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.053 [2024-07-25 10:12:03.056293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.053 [2024-07-25 10:12:03.056306] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.053 [2024-07-25 10:12:03.056325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.053 [2024-07-25 10:12:03.056340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.053 [2024-07-25 10:12:03.056356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.053 [2024-07-25 10:12:03.056370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.053 [2024-07-25 10:12:03.056386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.053 [2024-07-25 10:12:03.056400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.053 [2024-07-25 10:12:03.056416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.053 [2024-07-25 10:12:03.056435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.053 [2024-07-25 10:12:03.056452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.053 [2024-07-25 10:12:03.056467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.053 [2024-07-25 10:12:03.056483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.053 [2024-07-25 10:12:03.056497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.053 [2024-07-25 10:12:03.056513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.053 [2024-07-25 10:12:03.056528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.053 [2024-07-25 10:12:03.056544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.053 [2024-07-25 10:12:03.056558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.053 [2024-07-25 10:12:03.056574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.053 [2024-07-25 10:12:03.056589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.054 [2024-07-25 10:12:03.056605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.054 [2024-07-25 10:12:03.056619] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.054 [2024-07-25 10:12:03.056635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.054 [2024-07-25 10:12:03.056649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.054 [2024-07-25 10:12:03.056665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.054 [2024-07-25 10:12:03.056680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.054 [2024-07-25 10:12:03.056695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.054 [2024-07-25 10:12:03.056713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.054 [2024-07-25 10:12:03.056729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.054 [2024-07-25 10:12:03.056743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.054 [2024-07-25 10:12:03.056759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.054 [2024-07-25 10:12:03.056773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.054 [2024-07-25 10:12:03.056790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.054 [2024-07-25 10:12:03.056803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.054 [2024-07-25 10:12:03.056819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.054 [2024-07-25 10:12:03.056833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.054 [2024-07-25 10:12:03.056849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.054 [2024-07-25 10:12:03.056862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.054 [2024-07-25 10:12:03.056878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.054 [2024-07-25 10:12:03.056892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.054 [2024-07-25 10:12:03.056907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.054 [2024-07-25 10:12:03.056921] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.054 [2024-07-25 10:12:03.056937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.054 [2024-07-25 10:12:03.056951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.054 [2024-07-25 10:12:03.056966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.054 [2024-07-25 10:12:03.056980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.054 [2024-07-25 10:12:03.056997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.054 [2024-07-25 10:12:03.057011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.054 [2024-07-25 10:12:03.057025] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd7a430 is same with the state(5) to be set 00:23:18.054 [2024-07-25 10:12:03.058635] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:18.054 [2024-07-25 10:12:03.058742] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:18.054 [2024-07-25 10:12:03.059094] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:23:18.054 [2024-07-25 10:12:03.059172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.054 [2024-07-25 10:12:03.059199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.054 [2024-07-25 10:12:03.059225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.054 [2024-07-25 10:12:03.059242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.054 [2024-07-25 10:12:03.059269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.054 [2024-07-25 10:12:03.059283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.054 [2024-07-25 10:12:03.059299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.054 [2024-07-25 10:12:03.059313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.054 [2024-07-25 10:12:03.059329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.054 [2024-07-25 10:12:03.059344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.054 [2024-07-25 10:12:03.059360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:5 nsid:1 lba:8832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.054 [2024-07-25 10:12:03.059374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.054 [2024-07-25 10:12:03.059390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.054 [2024-07-25 10:12:03.059405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.054 [2024-07-25 10:12:03.059421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.054 [2024-07-25 10:12:03.059443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.054 [2024-07-25 10:12:03.059461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.054 [2024-07-25 10:12:03.059476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.054 [2024-07-25 10:12:03.059492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.054 [2024-07-25 10:12:03.059506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.054 [2024-07-25 10:12:03.059522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.054 [2024-07-25 10:12:03.059536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.054 [2024-07-25 10:12:03.059552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.054 [2024-07-25 10:12:03.059567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.054 [2024-07-25 10:12:03.059583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.054 [2024-07-25 10:12:03.059597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.054 [2024-07-25 10:12:03.059618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.054 [2024-07-25 10:12:03.059633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.055 [2024-07-25 10:12:03.059650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.055 [2024-07-25 10:12:03.059664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.055 [2024-07-25 10:12:03.059680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10112 len:128 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.055 [2024-07-25 10:12:03.059694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.055 [2024-07-25 10:12:03.059709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:10240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.055 [2024-07-25 10:12:03.059724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.055 [2024-07-25 10:12:03.059741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:10368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.055 [2024-07-25 10:12:03.059755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.055 [2024-07-25 10:12:03.059771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:10496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.055 [2024-07-25 10:12:03.059785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.055 [2024-07-25 10:12:03.059801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:10624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.055 [2024-07-25 10:12:03.059815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.055 [2024-07-25 10:12:03.059834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:10752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.055 [2024-07-25 10:12:03.059848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.055 [2024-07-25 10:12:03.059864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:10880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.055 [2024-07-25 10:12:03.059878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.055 [2024-07-25 10:12:03.059895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:11008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.055 [2024-07-25 10:12:03.059909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.055 [2024-07-25 10:12:03.059925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:11136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.055 [2024-07-25 10:12:03.059939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.055 [2024-07-25 10:12:03.059956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:11264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.055 [2024-07-25 10:12:03.059970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.055 [2024-07-25 10:12:03.059986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:11392 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:23:18.055 [2024-07-25 10:12:03.060005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.055 [2024-07-25 10:12:03.060021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:11520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.055 [2024-07-25 10:12:03.060036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.055 [2024-07-25 10:12:03.060052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:11648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.055 [2024-07-25 10:12:03.060067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.055 [2024-07-25 10:12:03.060084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:11776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.055 [2024-07-25 10:12:03.060098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.055 [2024-07-25 10:12:03.060114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:11904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.055 [2024-07-25 10:12:03.060129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.055 [2024-07-25 10:12:03.060145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:12032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.055 [2024-07-25 10:12:03.060160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.055 [2024-07-25 10:12:03.060175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:12160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.055 [2024-07-25 10:12:03.060190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.055 [2024-07-25 10:12:03.060206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:12288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.055 [2024-07-25 10:12:03.060220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.055 [2024-07-25 10:12:03.060237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:12416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.055 [2024-07-25 10:12:03.060251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.055 [2024-07-25 10:12:03.060267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:12544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.055 [2024-07-25 10:12:03.060281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.055 [2024-07-25 10:12:03.060297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:12672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:18.055 [2024-07-25 10:12:03.060312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.055 [2024-07-25 10:12:03.060329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:12800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.055 [2024-07-25 10:12:03.060343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.055 [2024-07-25 10:12:03.060359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:12928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.055 [2024-07-25 10:12:03.060373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.055 [2024-07-25 10:12:03.060393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:13056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.055 [2024-07-25 10:12:03.060408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.055 [2024-07-25 10:12:03.060425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:13184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.055 [2024-07-25 10:12:03.060456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.055 [2024-07-25 10:12:03.060474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:13312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.055 [2024-07-25 10:12:03.060489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.055 [2024-07-25 10:12:03.060506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:13440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.055 [2024-07-25 10:12:03.060520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.055 [2024-07-25 10:12:03.060536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:13568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.055 [2024-07-25 10:12:03.060551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.055 [2024-07-25 10:12:03.060567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:13696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.055 [2024-07-25 10:12:03.060582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.055 [2024-07-25 10:12:03.060598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:13824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.055 [2024-07-25 10:12:03.060612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.056 [2024-07-25 10:12:03.060628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:13952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.056 [2024-07-25 
10:12:03.060642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.056 [2024-07-25 10:12:03.060658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:14080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.056 [2024-07-25 10:12:03.060674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.056 [2024-07-25 10:12:03.060690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:14208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.056 [2024-07-25 10:12:03.060704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.056 [2024-07-25 10:12:03.060720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:14336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.056 [2024-07-25 10:12:03.060735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.056 [2024-07-25 10:12:03.060751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:14464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.056 [2024-07-25 10:12:03.060765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.056 [2024-07-25 10:12:03.060781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:14592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.056 [2024-07-25 10:12:03.060803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.056 [2024-07-25 10:12:03.060821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:14720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.056 [2024-07-25 10:12:03.060835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.056 [2024-07-25 10:12:03.060852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:14848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.056 [2024-07-25 10:12:03.060866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.056 [2024-07-25 10:12:03.060882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:14976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.056 [2024-07-25 10:12:03.060897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.056 [2024-07-25 10:12:03.060914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:15104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.056 [2024-07-25 10:12:03.060927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.056 [2024-07-25 10:12:03.060943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:15232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.056 [2024-07-25 10:12:03.060957] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.056 [2024-07-25 10:12:03.060973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:15360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.056 [2024-07-25 10:12:03.060987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.056 [2024-07-25 10:12:03.061003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:15488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.056 [2024-07-25 10:12:03.061017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.056 [2024-07-25 10:12:03.061034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:15616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.056 [2024-07-25 10:12:03.061048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.056 [2024-07-25 10:12:03.061064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:15744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.056 [2024-07-25 10:12:03.061078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.056 [2024-07-25 10:12:03.061093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:15872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.056 [2024-07-25 10:12:03.061108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.056 [2024-07-25 10:12:03.061123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:16000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.056 [2024-07-25 10:12:03.061138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.056 [2024-07-25 10:12:03.061154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:16128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.056 [2024-07-25 10:12:03.061168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.056 [2024-07-25 10:12:03.061187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:16256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:18.056 [2024-07-25 10:12:03.061203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:18.056 [2024-07-25 10:12:03.061217] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfbdb0 is same with the state(5) to be set 00:23:18.056 [2024-07-25 10:12:03.062909] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:18.056 [2024-07-25 10:12:03.062942] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:23:18.056 [2024-07-25 10:12:03.062963] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 
00:23:18.056 [2024-07-25 10:12:03.062981] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:23:18.056 [2024-07-25 10:12:03.063250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:18.056 [2024-07-25 10:12:03.063280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc11420 with addr=10.0.0.2, port=4420 00:23:18.056 [2024-07-25 10:12:03.063296] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc11420 is same with the state(5) to be set 00:23:18.056 [2024-07-25 10:12:03.063477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:18.056 [2024-07-25 10:12:03.063503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9cad0 with addr=10.0.0.2, port=4420 00:23:18.056 [2024-07-25 10:12:03.063520] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd9cad0 is same with the state(5) to be set 00:23:18.056 [2024-07-25 10:12:03.063536] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:23:18.056 [2024-07-25 10:12:03.063549] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:23:18.056 [2024-07-25 10:12:03.063564] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:23:18.056 [2024-07-25 10:12:03.063626] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:18.056 [2024-07-25 10:12:03.063704] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd9cad0 (9): Bad file descriptor 00:23:18.056 [2024-07-25 10:12:03.063732] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc11420 (9): Bad file descriptor 00:23:18.056 [2024-07-25 10:12:03.063910] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:18.056 [2024-07-25 10:12:03.064119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:18.056 [2024-07-25 10:12:03.064146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7cf200 with addr=10.0.0.2, port=4420
00:23:18.056 [2024-07-25 10:12:03.064162] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7cf200 is same with the state(5) to be set
00:23:18.056 [2024-07-25 10:12:03.064331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:18.056 [2024-07-25 10:12:03.064366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdad950 with addr=10.0.0.2, port=4420
00:23:18.056 [2024-07-25 10:12:03.064381] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdad950 is same with the state(5) to be set
00:23:18.056 [2024-07-25 10:12:03.064542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:18.056 [2024-07-25 10:12:03.064567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x6e3610 with addr=10.0.0.2, port=4420
00:23:18.056 [2024-07-25 10:12:03.064582] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6e3610 is same with the state(5) to be set
00:23:18.056 [2024-07-25 10:12:03.064709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:23:18.056 [2024-07-25 10:12:03.064739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9eb40 with addr=10.0.0.2, port=4420
00:23:18.057 [2024-07-25 10:12:03.064756] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd9eb40 is same with the state(5) to be set
00:23:18.057 [2024-07-25 10:12:03.065384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:18.057 [2024-07-25 10:12:03.065407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 63 further identical READ command/completion pairs elided: cid:1 through cid:63, lba:8320 through lba:16256 in steps of 128, each aborted with SQ DELETION (00/08), p:0 m:0 dnr:0 ...]
00:23:18.058 [2024-07-25 10:12:03.067396] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbdc920 is same with the state(5) to be set
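[Editorial note: the four connect() failures above all report errno = 111, which on Linux is ECONNREFUSED: nothing was accepting TCP connections at 10.0.0.2:4420 at that moment, as expected while the test tears the target down mid-run. The repeated "recv state ... is same with the state(5) to be set" errors are the qpair being moved to a receive state it is already in, presumably its error state, during that teardown. Below is a minimal, hypothetical C sketch of the underlying socket step that posix_sock_create wraps; it is not SPDK code, just the plain connect() behavior that produces errno 111 when no listener is present.]

#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    /* Hypothetical stand-in for the connect step inside posix_sock_create():
     * a plain blocking TCP connect to the NVMe-oF target address and port. */
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in sa;
    memset(&sa, 0, sizeof(sa));
    sa.sin_family = AF_INET;
    sa.sin_port = htons(4420);                 /* NVMe/TCP well-known port */
    inet_pton(AF_INET, "10.0.0.2", &sa.sin_addr);

    if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) != 0) {
        /* With no listener on the port this prints errno = 111 (ECONNREFUSED),
         * matching the "connect() failed, errno = 111" lines in the log. */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }

    close(fd);
    return 0;
}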
00:23:18.058 [2024-07-25 10:12:03.068672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:18.058 [2024-07-25 10:12:03.068695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 63 further identical READ command/completion pairs elided: cid:1 through cid:63, lba:16512 through lba:24448 in steps of 128, each aborted with SQ DELETION (00/08), p:0 m:0 dnr:0 ...]
00:23:18.061 [2024-07-25 10:12:03.070665] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x150f3b0 is same with the state(5) to be set
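[Editorial note: the dumps above and below are each qpair's outstanding READs being failed back with ABORTED - SQ DELETION (00/08), i.e. status code type 0x0 (generic command status), status code 0x08 (command aborted due to SQ deletion) per the NVMe base specification: the expected outcome when a controller reset deletes a submission queue with I/O still in flight. The trailing p/m/dnr fields are the phase, more, and do-not-retry bits of the completion's status word. A small illustrative C decoder for that 16-bit status field follows; the bit layout is from the NVMe spec, but the function and names are mine, not SPDK's.]

#include <stdint.h>
#include <stdio.h>

/* Decode the 16-bit status field taken from NVMe CQE dword 3 (bits 31:16).
 * Layout per the NVMe base specification:
 *   bit 0      P   - phase tag
 *   bits 8:1   SC  - status code
 *   bits 11:9  SCT - status code type
 *   bit 14     M   - more
 *   bit 15     DNR - do not retry
 */
static void decode_status(uint16_t status)
{
    unsigned p   = status & 0x1;
    unsigned sc  = (status >> 1) & 0xff;
    unsigned sct = (status >> 9) & 0x7;
    unsigned m   = (status >> 14) & 0x1;
    unsigned dnr = (status >> 15) & 0x1;

    printf("(%02x/%02x) p:%u m:%u dnr:%u\n", sct, sc, p, m, dnr);
}

int main(void)
{
    /* SCT=0x0 (generic), SC=0x08 (aborted - SQ deletion), as in the log. */
    decode_status(0x08 << 1);   /* prints "(00/08) p:0 m:0 dnr:0" */
    return 0;
}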
00:23:18.061 [2024-07-25 10:12:03.071911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:18.061 [2024-07-25 10:12:03.071935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 63 further identical READ command/completion pairs elided: cid:1 through cid:63, lba:8320 through lba:16256 in steps of 128, each aborted with SQ DELETION (00/08), p:0 m:0 dnr:0 ...]
00:23:18.063 [2024-07-25 10:12:03.073882] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfa8d0 is same with the state(5) to be set
00:23:18.063 [2024-07-25 10:12:03.076683] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller
00:23:18.063 [2024-07-25 10:12:03.076719] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller
00:23:18.063 task offset: 22528 on job bdev=Nvme3n1 fails
00:23:18.063
00:23:18.063 Latency(us)
00:23:18.063 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:18.063 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:18.063 Job: Nvme1n1 ended in about 0.71 seconds with error
00:23:18.063 Verification LBA range: start 0x0 length 0x400
00:23:18.063 Nvme1n1 : 0.71 179.77 11.24 89.89 0.00 233972.62 29515.47 250104.79
00:23:18.063 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:18.063 Job: Nvme2n1 ended in about 0.72 seconds with error
00:23:18.063 Verification LBA range: start 0x0 length 0x400 00:23:18.063 Nvme2n1 : 0.72 178.96 11.18 89.48 0.00 228828.86 19320.98 250104.79 00:23:18.063 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:18.063 Job: Nvme3n1 ended in about 0.70 seconds with error 00:23:18.063 Verification LBA range: start 0x0 length 0x400 00:23:18.063 Nvme3n1 : 0.70 183.31 11.46 91.65 0.00 217126.94 25243.50 265639.25 00:23:18.063 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:18.063 Job: Nvme4n1 ended in about 0.71 seconds with error 00:23:18.063 Verification LBA range: start 0x0 length 0x400 00:23:18.063 Nvme4n1 : 0.71 181.34 11.33 90.67 0.00 213575.68 10582.85 265639.25 00:23:18.063 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:18.063 Job: Nvme5n1 ended in about 0.73 seconds with error 00:23:18.063 Verification LBA range: start 0x0 length 0x400 00:23:18.063 Nvme5n1 : 0.73 88.20 5.51 88.20 0.00 321205.29 35340.89 281173.71 00:23:18.063 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:18.063 Verification LBA range: start 0x0 length 0x400 00:23:18.063 Nvme6n1 : 0.70 182.30 11.39 0.00 0.00 300148.81 21651.15 260978.92 00:23:18.063 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:18.063 Job: Nvme7n1 ended in about 0.73 seconds with error 00:23:18.063 Verification LBA range: start 0x0 length 0x400 00:23:18.063 Nvme7n1 : 0.73 175.62 10.98 87.81 0.00 202920.14 16602.45 254765.13 00:23:18.063 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:18.063 Job: Nvme8n1 ended in about 0.71 seconds with error 00:23:18.063 Verification LBA range: start 0x0 length 0x400 00:23:18.063 Nvme8n1 : 0.71 180.98 11.31 90.49 0.00 189797.14 10874.12 237677.23 00:23:18.063 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:18.063 Job: Nvme9n1 ended in about 0.73 seconds with error 00:23:18.063 Verification LBA range: start 0x0 length 0x400 00:23:18.063 Nvme9n1 : 0.73 87.42 5.46 87.42 0.00 288323.51 20486.07 267192.70 00:23:18.063 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:18.063 Job: Nvme10n1 ended in about 0.72 seconds with error 00:23:18.063 Verification LBA range: start 0x0 length 0x400 00:23:18.063 Nvme10n1 : 0.72 88.96 5.56 88.96 0.00 273103.08 19320.98 295154.73 00:23:18.063 =================================================================================================================== 00:23:18.063 Total : 1526.86 95.43 804.57 0.00 239393.29 10582.85 295154.73 00:23:18.063 [2024-07-25 10:12:03.104083] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:23:18.063 [2024-07-25 10:12:03.104162] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller 00:23:18.063 [2024-07-25 10:12:03.104264] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7cf200 (9): Bad file descriptor 00:23:18.063 [2024-07-25 10:12:03.104296] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdad950 (9): Bad file descriptor 00:23:18.063 [2024-07-25 10:12:03.104316] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6e3610 (9): Bad file descriptor 00:23:18.063 [2024-07-25 10:12:03.104335] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd9eb40 (9): Bad file descriptor 00:23:18.063 [2024-07-25 10:12:03.104353] 
nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:23:18.063 [2024-07-25 10:12:03.104367] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:23:18.063 [2024-07-25 10:12:03.104383] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:23:18.063 [2024-07-25 10:12:03.104409] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:23:18.063 [2024-07-25 10:12:03.104424] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:23:18.063 [2024-07-25 10:12:03.104446] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:23:18.063 [2024-07-25 10:12:03.104523] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:18.063 [2024-07-25 10:12:03.104549] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:18.063 [2024-07-25 10:12:03.104568] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:18.063 [2024-07-25 10:12:03.104587] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:18.063 [2024-07-25 10:12:03.104605] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:18.063 [2024-07-25 10:12:03.104623] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:18.063 [2024-07-25 10:12:03.104773] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:18.063 [2024-07-25 10:12:03.104810] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:18.063 [2024-07-25 10:12:03.105077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:18.063 [2024-07-25 10:12:03.105113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0dba0 with addr=10.0.0.2, port=4420 00:23:18.063 [2024-07-25 10:12:03.105133] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0dba0 is same with the state(5) to be set 00:23:18.063 [2024-07-25 10:12:03.105335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:18.063 [2024-07-25 10:12:03.105362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd85b30 with addr=10.0.0.2, port=4420 00:23:18.063 [2024-07-25 10:12:03.105378] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd85b30 is same with the state(5) to be set 00:23:18.063 [2024-07-25 10:12:03.105552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:18.063 [2024-07-25 10:12:03.105580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0d320 with addr=10.0.0.2, port=4420 00:23:18.063 [2024-07-25 10:12:03.105596] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0d320 is same with the state(5) to be set 00:23:18.063 [2024-07-25 10:12:03.105611] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:18.064 [2024-07-25 10:12:03.105623] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:23:18.064 [2024-07-25 10:12:03.105637] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:18.064 [2024-07-25 10:12:03.105659] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:23:18.064 [2024-07-25 10:12:03.105683] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:23:18.064 [2024-07-25 10:12:03.105696] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:23:18.064 [2024-07-25 10:12:03.105713] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:23:18.064 [2024-07-25 10:12:03.105727] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:23:18.064 [2024-07-25 10:12:03.105740] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:23:18.064 [2024-07-25 10:12:03.105757] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:23:18.064 [2024-07-25 10:12:03.105771] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:23:18.064 [2024-07-25 10:12:03.105783] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:23:18.064 [2024-07-25 10:12:03.105818] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:18.064 [2024-07-25 10:12:03.105853] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:23:18.064 [2024-07-25 10:12:03.105873] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:18.064 [2024-07-25 10:12:03.105891] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:18.064 [2024-07-25 10:12:03.105909] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:23:18.064 [2024-07-25 10:12:03.106811] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:23:18.064 [2024-07-25 10:12:03.106856] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:18.064 [2024-07-25 10:12:03.106873] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:18.064 [2024-07-25 10:12:03.106891] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:18.064 [2024-07-25 10:12:03.106902] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:18.064 [2024-07-25 10:12:03.106936] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0dba0 (9): Bad file descriptor 00:23:18.064 [2024-07-25 10:12:03.106960] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd85b30 (9): Bad file descriptor 00:23:18.064 [2024-07-25 10:12:03.106978] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0d320 (9): Bad file descriptor 00:23:18.064 [2024-07-25 10:12:03.107343] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller 00:23:18.064 [2024-07-25 10:12:03.107373] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:23:18.064 [2024-07-25 10:12:03.107574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:18.064 [2024-07-25 10:12:03.107603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc0e120 with addr=10.0.0.2, port=4420 00:23:18.064 [2024-07-25 10:12:03.107620] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc0e120 is same with the state(5) to be set 00:23:18.064 [2024-07-25 10:12:03.107635] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:23:18.064 [2024-07-25 10:12:03.107648] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:23:18.064 [2024-07-25 10:12:03.107660] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:23:18.064 [2024-07-25 10:12:03.107678] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:23:18.064 [2024-07-25 10:12:03.107692] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:23:18.064 [2024-07-25 10:12:03.107706] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 
00:23:18.064 [2024-07-25 10:12:03.107720] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:23:18.064 [2024-07-25 10:12:03.107734] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:23:18.064 [2024-07-25 10:12:03.107747] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:23:18.064 [2024-07-25 10:12:03.107811] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:18.064 [2024-07-25 10:12:03.107831] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:18.064 [2024-07-25 10:12:03.107843] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:18.064 [2024-07-25 10:12:03.108015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:18.064 [2024-07-25 10:12:03.108041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd9cad0 with addr=10.0.0.2, port=4420 00:23:18.064 [2024-07-25 10:12:03.108056] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd9cad0 is same with the state(5) to be set 00:23:18.064 [2024-07-25 10:12:03.108229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:18.064 [2024-07-25 10:12:03.108254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc11420 with addr=10.0.0.2, port=4420 00:23:18.064 [2024-07-25 10:12:03.108269] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc11420 is same with the state(5) to be set 00:23:18.064 [2024-07-25 10:12:03.108287] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc0e120 (9): Bad file descriptor 00:23:18.064 [2024-07-25 10:12:03.108333] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd9cad0 (9): Bad file descriptor 00:23:18.064 [2024-07-25 10:12:03.108358] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc11420 (9): Bad file descriptor 00:23:18.064 [2024-07-25 10:12:03.108380] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:23:18.064 [2024-07-25 10:12:03.108394] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:23:18.064 [2024-07-25 10:12:03.108408] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:23:18.064 [2024-07-25 10:12:03.108451] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:18.064 [2024-07-25 10:12:03.108481] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:23:18.065 [2024-07-25 10:12:03.108495] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:23:18.065 [2024-07-25 10:12:03.108508] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 
00:23:18.065 [2024-07-25 10:12:03.108524] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:23:18.065 [2024-07-25 10:12:03.108537] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:23:18.065 [2024-07-25 10:12:03.108550] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:23:18.065 [2024-07-25 10:12:03.108586] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:18.065 [2024-07-25 10:12:03.108602] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:18.630 10:12:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # nvmfpid= 00:23:18.631 10:12:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@139 -- # sleep 1 00:23:19.598 10:12:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # kill -9 488292 00:23:19.598 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 142: kill: (488292) - No such process 00:23:19.598 10:12:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # true 00:23:19.598 10:12:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@144 -- # stoptarget 00:23:19.598 10:12:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:23:19.598 10:12:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:23:19.598 10:12:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:19.598 10:12:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@45 -- # nvmftestfini 00:23:19.598 10:12:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:19.598 10:12:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # sync 00:23:19.598 10:12:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:19.598 10:12:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@120 -- # set +e 00:23:19.598 10:12:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:19.598 10:12:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:19.598 rmmod nvme_tcp 00:23:19.598 rmmod nvme_fabrics 00:23:19.598 rmmod nvme_keyring 00:23:19.598 10:12:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:19.598 10:12:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set -e 00:23:19.598 10:12:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # return 0 00:23:19.598 10:12:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:23:19.598 10:12:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # '[' '' == iso 
']' 00:23:19.598 10:12:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:19.598 10:12:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:19.598 10:12:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:19.598 10:12:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:19.598 10:12:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:19.598 10:12:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:19.598 10:12:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:22.134 10:12:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:22.134 00:23:22.134 real 0m7.568s 00:23:22.134 user 0m18.401s 00:23:22.134 sys 0m1.460s 00:23:22.134 10:12:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:22.134 10:12:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:22.134 ************************************ 00:23:22.134 END TEST nvmf_shutdown_tc3 00:23:22.134 ************************************ 00:23:22.134 10:12:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@151 -- # trap - SIGINT SIGTERM EXIT 00:23:22.134 00:23:22.134 real 0m28.131s 00:23:22.134 user 1m17.194s 00:23:22.134 sys 0m6.768s 00:23:22.134 10:12:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:22.134 10:12:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:22.134 ************************************ 00:23:22.134 END TEST nvmf_shutdown 00:23:22.134 ************************************ 00:23:22.134 10:12:06 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@66 -- # trap - SIGINT SIGTERM EXIT 00:23:22.134 00:23:22.134 real 11m56.207s 00:23:22.134 user 28m30.806s 00:23:22.134 sys 2m52.261s 00:23:22.134 10:12:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:22.134 10:12:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:22.134 ************************************ 00:23:22.134 END TEST nvmf_target_extra 00:23:22.134 ************************************ 00:23:22.134 10:12:06 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:23:22.134 10:12:06 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:23:22.134 10:12:06 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:22.134 10:12:06 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:22.134 ************************************ 00:23:22.134 START TEST nvmf_host 00:23:22.134 ************************************ 00:23:22.134 10:12:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:23:22.134 * Looking for test storage... 
00:23:22.134 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:23:22.134 10:12:06 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:22.134 10:12:06 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:23:22.134 10:12:06 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:22.134 10:12:06 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:22.134 10:12:06 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:22.134 10:12:06 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:22.134 10:12:06 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:22.134 10:12:06 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:22.134 10:12:06 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:22.134 10:12:06 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:22.134 10:12:06 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:22.134 10:12:06 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:22.134 10:12:06 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:23:22.134 10:12:06 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:23:22.134 10:12:06 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:22.134 10:12:06 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:22.134 10:12:06 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:22.134 10:12:06 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:22.134 10:12:06 nvmf_tcp.nvmf_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:22.134 10:12:06 nvmf_tcp.nvmf_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:22.134 10:12:06 nvmf_tcp.nvmf_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:22.134 10:12:06 nvmf_tcp.nvmf_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:22.134 10:12:06 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:22.134 10:12:06 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:22.134 10:12:06 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:22.134 10:12:06 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:23:22.134 10:12:06 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:22.134 10:12:06 nvmf_tcp.nvmf_host -- nvmf/common.sh@47 -- # : 0 00:23:22.134 10:12:06 nvmf_tcp.nvmf_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:22.134 10:12:06 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:22.134 10:12:06 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:22.134 10:12:06 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:22.134 10:12:06 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:22.134 10:12:06 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:22.134 10:12:06 nvmf_tcp.nvmf_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:22.134 10:12:06 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:22.134 10:12:06 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:23:22.134 10:12:06 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:23:22.134 10:12:06 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:23:22.134 10:12:06 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:23:22.134 10:12:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:23:22.134 10:12:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:22.134 10:12:06 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:22.134 ************************************ 00:23:22.134 START TEST nvmf_multicontroller 00:23:22.134 ************************************ 00:23:22.134 10:12:06 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:23:22.134 * Looking for test storage... 
00:23:22.134 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:22.134 10:12:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:22.134 10:12:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:23:22.134 10:12:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:22.134 10:12:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:22.134 10:12:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:22.134 10:12:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:22.134 10:12:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:22.134 10:12:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:22.134 10:12:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:22.134 10:12:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:22.134 10:12:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:22.134 10:12:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:22.134 10:12:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:23:22.134 10:12:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:23:22.134 10:12:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:22.134 10:12:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:22.134 10:12:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:22.134 10:12:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:22.135 10:12:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:22.135 10:12:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:22.135 10:12:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:22.135 10:12:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:22.135 10:12:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:22.135 10:12:07 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:22.135 10:12:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:22.135 10:12:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:23:22.135 10:12:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:22.135 10:12:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@47 -- # : 0 00:23:22.135 10:12:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:22.135 10:12:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:22.135 10:12:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:22.135 10:12:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:22.135 10:12:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:22.135 10:12:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:22.135 10:12:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:22.135 10:12:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:22.135 10:12:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:22.135 10:12:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:22.135 10:12:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:23:22.135 10:12:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:23:22.135 10:12:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:22.135 10:12:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:23:22.135 10:12:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:23:22.135 10:12:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:22.135 10:12:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:22.135 10:12:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:22.135 10:12:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:22.135 10:12:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:22.135 10:12:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:22.135 10:12:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:22.135 10:12:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:22.135 10:12:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:22.135 10:12:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:22.135 10:12:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@285 -- # xtrace_disable 00:23:22.135 10:12:07 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:24.662 10:12:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:24.662 10:12:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # pci_devs=() 00:23:24.662 10:12:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:24.662 10:12:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:24.662 10:12:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:24.662 10:12:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:24.662 10:12:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:24.662 10:12:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@295 -- # net_devs=() 00:23:24.662 10:12:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:24.662 10:12:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@296 -- # e810=() 00:23:24.662 10:12:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@296 -- # local -ga e810 00:23:24.662 10:12:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # x722=() 00:23:24.662 10:12:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # local -ga x722 00:23:24.662 10:12:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # mlx=() 00:23:24.662 10:12:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # local -ga mlx 00:23:24.662 10:12:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:24.662 10:12:09 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:24.662 10:12:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:24.662 10:12:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:24.662 10:12:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:24.662 10:12:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:24.662 10:12:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:24.662 10:12:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:24.662 10:12:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:24.662 10:12:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:24.662 10:12:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:24.662 10:12:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:24.662 10:12:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:24.662 10:12:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:24.662 10:12:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:24.662 10:12:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:24.662 10:12:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:24.662 10:12:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:24.662 10:12:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:23:24.662 Found 0000:84:00.0 (0x8086 - 0x159b) 00:23:24.662 10:12:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:24.662 10:12:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:24.662 10:12:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:24.662 10:12:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:24.662 10:12:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:24.662 10:12:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:24.663 10:12:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:23:24.663 Found 0000:84:00.1 (0x8086 - 0x159b) 00:23:24.663 10:12:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:24.663 10:12:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:24.663 10:12:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:24.663 10:12:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:23:24.663 10:12:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:24.663 10:12:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:24.663 10:12:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:24.663 10:12:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:24.663 10:12:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:24.663 10:12:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:24.663 10:12:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:24.663 10:12:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:24.663 10:12:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:24.663 10:12:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:24.663 10:12:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:24.663 10:12:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:23:24.663 Found net devices under 0000:84:00.0: cvl_0_0 00:23:24.663 10:12:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:24.663 10:12:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:24.663 10:12:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:24.663 10:12:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:24.663 10:12:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:24.663 10:12:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:24.663 10:12:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:24.663 10:12:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:24.663 10:12:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:23:24.663 Found net devices under 0000:84:00.1: cvl_0_1 00:23:24.663 10:12:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:24.663 10:12:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:24.663 10:12:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@414 -- # is_hw=yes 00:23:24.663 10:12:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:24.663 10:12:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:24.663 10:12:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:24.663 10:12:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:24.663 10:12:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:24.663 10:12:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:24.663 10:12:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:24.663 10:12:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:24.663 10:12:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:24.663 10:12:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:24.663 10:12:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:24.663 10:12:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:24.663 10:12:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:24.663 10:12:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:24.663 10:12:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:24.663 10:12:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:24.663 10:12:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:24.663 10:12:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:24.663 10:12:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:24.663 10:12:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:24.663 10:12:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:24.663 10:12:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:24.663 10:12:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:24.663 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:24.663 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.239 ms 00:23:24.663 00:23:24.663 --- 10.0.0.2 ping statistics --- 00:23:24.663 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:24.663 rtt min/avg/max/mdev = 0.239/0.239/0.239/0.000 ms 00:23:24.663 10:12:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:24.663 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:24.663 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.120 ms 00:23:24.663 00:23:24.663 --- 10.0.0.1 ping statistics --- 00:23:24.663 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:24.663 rtt min/avg/max/mdev = 0.120/0.120/0.120/0.000 ms 00:23:24.663 10:12:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:24.663 10:12:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # return 0 00:23:24.663 10:12:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:24.663 10:12:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:24.663 10:12:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:24.663 10:12:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:24.663 10:12:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:24.663 10:12:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:24.663 10:12:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:24.663 10:12:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:23:24.663 10:12:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:24.663 10:12:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:24.663 10:12:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:24.663 10:12:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@481 -- # nvmfpid=491424 00:23:24.663 10:12:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:23:24.663 10:12:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@482 -- # waitforlisten 491424 00:23:24.663 10:12:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@831 -- # '[' -z 491424 ']' 00:23:24.663 10:12:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:24.663 10:12:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:24.663 10:12:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:24.663 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:24.663 10:12:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:24.663 10:12:09 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:24.663 [2024-07-25 10:12:09.681163] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:23:24.663 [2024-07-25 10:12:09.681344] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:24.663 EAL: No free 2048 kB hugepages reported on node 1 00:23:24.663 [2024-07-25 10:12:09.770490] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:23:24.920 [2024-07-25 10:12:09.896864] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:24.921 [2024-07-25 10:12:09.896924] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:24.921 [2024-07-25 10:12:09.896941] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:24.921 [2024-07-25 10:12:09.896955] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:24.921 [2024-07-25 10:12:09.896967] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:24.921 [2024-07-25 10:12:09.897068] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:24.921 [2024-07-25 10:12:09.897122] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:23:24.921 [2024-07-25 10:12:09.897125] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:24.921 10:12:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:24.921 10:12:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # return 0 00:23:24.921 10:12:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:24.921 10:12:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:24.921 10:12:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:24.921 10:12:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:24.921 10:12:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:24.921 10:12:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:24.921 10:12:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:24.921 [2024-07-25 10:12:10.043384] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:24.921 10:12:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:24.921 10:12:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:23:24.921 10:12:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:24.921 10:12:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:24.921 Malloc0 00:23:24.921 10:12:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:24.921 10:12:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:24.921 10:12:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:24.921 
10:12:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:25.178 10:12:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:25.178 10:12:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:25.179 10:12:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:25.179 10:12:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:25.179 10:12:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:25.179 10:12:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:25.179 10:12:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:25.179 10:12:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:25.179 [2024-07-25 10:12:10.100350] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:25.179 10:12:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:25.179 10:12:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:25.179 10:12:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:25.179 10:12:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:25.179 [2024-07-25 10:12:10.108186] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:25.179 10:12:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:25.179 10:12:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:23:25.179 10:12:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:25.179 10:12:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:25.179 Malloc1 00:23:25.179 10:12:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:25.179 10:12:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:23:25.179 10:12:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:25.179 10:12:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:25.179 10:12:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:25.179 10:12:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:23:25.179 10:12:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:25.179 10:12:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:25.179 10:12:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:25.179 10:12:10 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:23:25.179 10:12:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:25.179 10:12:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:25.179 10:12:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:25.179 10:12:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:23:25.179 10:12:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:25.179 10:12:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:25.179 10:12:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:25.179 10:12:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=491597 00:23:25.179 10:12:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:23:25.179 10:12:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:25.179 10:12:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 491597 /var/tmp/bdevperf.sock 00:23:25.179 10:12:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@831 -- # '[' -z 491597 ']' 00:23:25.179 10:12:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:25.179 10:12:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:25.179 10:12:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:25.179 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
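The xtrace above is the target-side setup for the multicontroller test: one TCP transport, two malloc bdevs (64 MB, 512-byte blocks), and two subsystems (cnode1, cnode2) that each get a namespace plus listeners on 10.0.0.2:4420 and 10.0.0.2:4421, after which bdevperf is launched idle against a private RPC socket. A minimal sketch of the same sequence as standalone commands (rpc_cmd in the test is a wrapper around SPDK's rpc.py; the relative paths here are assumptions):

    # Transport, backing bdev, subsystem, namespace, and two listeners for cnode1.
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
    # cnode2 is built the same way on top of Malloc1 (log lines above).
    # bdevperf then waits (-z) on its own socket until perform_tests is triggered:
    build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f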
00:23:25.179 10:12:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:25.179 10:12:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:25.436 10:12:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:25.436 10:12:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # return 0 00:23:25.436 10:12:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:23:25.436 10:12:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:25.436 10:12:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:25.694 NVMe0n1 00:23:25.694 10:12:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:25.694 10:12:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:25.694 10:12:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:23:25.694 10:12:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:25.694 10:12:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:25.694 10:12:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:25.694 1 00:23:25.694 10:12:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:23:25.694 10:12:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:23:25.694 10:12:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:23:25.694 10:12:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:23:25.694 10:12:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:25.694 10:12:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:23:25.694 10:12:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:25.694 10:12:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:23:25.694 10:12:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:25.694 10:12:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:25.694 request: 00:23:25.694 { 00:23:25.694 "name": "NVMe0", 00:23:25.694 "trtype": "tcp", 00:23:25.694 "traddr": "10.0.0.2", 00:23:25.694 "adrfam": "ipv4", 00:23:25.694 
"trsvcid": "4420", 00:23:25.694 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:25.694 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:23:25.694 "hostaddr": "10.0.0.2", 00:23:25.694 "hostsvcid": "60000", 00:23:25.694 "prchk_reftag": false, 00:23:25.694 "prchk_guard": false, 00:23:25.694 "hdgst": false, 00:23:25.695 "ddgst": false, 00:23:25.695 "method": "bdev_nvme_attach_controller", 00:23:25.695 "req_id": 1 00:23:25.695 } 00:23:25.695 Got JSON-RPC error response 00:23:25.695 response: 00:23:25.695 { 00:23:25.695 "code": -114, 00:23:25.695 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:23:25.695 } 00:23:25.695 10:12:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:23:25.695 10:12:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:23:25.695 10:12:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:25.695 10:12:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:25.695 10:12:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:25.695 10:12:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:23:25.695 10:12:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:23:25.695 10:12:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:23:25.695 10:12:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:23:25.695 10:12:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:25.695 10:12:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:23:25.695 10:12:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:25.695 10:12:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:23:25.695 10:12:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:25.695 10:12:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:25.695 request: 00:23:25.695 { 00:23:25.695 "name": "NVMe0", 00:23:25.695 "trtype": "tcp", 00:23:25.695 "traddr": "10.0.0.2", 00:23:25.695 "adrfam": "ipv4", 00:23:25.695 "trsvcid": "4420", 00:23:25.695 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:25.695 "hostaddr": "10.0.0.2", 00:23:25.695 "hostsvcid": "60000", 00:23:25.695 "prchk_reftag": false, 00:23:25.695 "prchk_guard": false, 00:23:25.695 "hdgst": false, 00:23:25.695 "ddgst": false, 00:23:25.695 "method": "bdev_nvme_attach_controller", 00:23:25.695 "req_id": 1 00:23:25.695 } 00:23:25.695 Got JSON-RPC error response 00:23:25.695 response: 00:23:25.695 { 00:23:25.695 "code": -114, 00:23:25.695 "message": "A controller named NVMe0 already exists with the specified network 
path\n" 00:23:25.695 } 00:23:25.695 10:12:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:23:25.695 10:12:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:23:25.695 10:12:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:25.695 10:12:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:25.695 10:12:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:25.695 10:12:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:23:25.695 10:12:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:23:25.695 10:12:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:23:25.695 10:12:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:23:25.695 10:12:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:25.695 10:12:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:23:25.695 10:12:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:25.695 10:12:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:23:25.695 10:12:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:25.695 10:12:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:25.695 request: 00:23:25.695 { 00:23:25.695 "name": "NVMe0", 00:23:25.695 "trtype": "tcp", 00:23:25.695 "traddr": "10.0.0.2", 00:23:25.695 "adrfam": "ipv4", 00:23:25.695 "trsvcid": "4420", 00:23:25.695 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:25.695 "hostaddr": "10.0.0.2", 00:23:25.695 "hostsvcid": "60000", 00:23:25.695 "prchk_reftag": false, 00:23:25.695 "prchk_guard": false, 00:23:25.695 "hdgst": false, 00:23:25.695 "ddgst": false, 00:23:25.695 "multipath": "disable", 00:23:25.695 "method": "bdev_nvme_attach_controller", 00:23:25.695 "req_id": 1 00:23:25.695 } 00:23:25.695 Got JSON-RPC error response 00:23:25.695 response: 00:23:25.695 { 00:23:25.695 "code": -114, 00:23:25.695 "message": "A controller named NVMe0 already exists and multipath is disabled\n" 00:23:25.695 } 00:23:25.695 10:12:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:23:25.695 10:12:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:23:25.695 10:12:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:25.695 10:12:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:25.695 10:12:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:25.695 10:12:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:23:25.695 10:12:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:23:25.695 10:12:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:23:25.695 10:12:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:23:25.695 10:12:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:25.695 10:12:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:23:25.695 10:12:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:25.695 10:12:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:23:25.695 10:12:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:25.695 10:12:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:25.695 request: 00:23:25.695 { 00:23:25.695 "name": "NVMe0", 00:23:25.695 "trtype": "tcp", 00:23:25.695 "traddr": "10.0.0.2", 00:23:25.695 "adrfam": "ipv4", 00:23:25.695 "trsvcid": "4420", 00:23:25.695 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:25.695 "hostaddr": "10.0.0.2", 00:23:25.695 "hostsvcid": "60000", 00:23:25.695 "prchk_reftag": false, 00:23:25.695 "prchk_guard": false, 00:23:25.695 "hdgst": false, 00:23:25.695 "ddgst": false, 00:23:25.695 "multipath": "failover", 00:23:25.695 "method": "bdev_nvme_attach_controller", 00:23:25.695 "req_id": 1 00:23:25.695 } 00:23:25.695 Got JSON-RPC error response 00:23:25.695 response: 00:23:25.695 { 00:23:25.695 "code": -114, 00:23:25.695 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:23:25.695 } 00:23:25.695 10:12:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:23:25.695 10:12:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:23:25.695 10:12:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:25.695 10:12:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:25.695 10:12:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:25.695 10:12:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:25.695 10:12:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:25.695 10:12:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:25.953 00:23:25.953 10:12:10 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:25.953 10:12:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:25.953 10:12:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:25.953 10:12:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:25.953 10:12:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:25.953 10:12:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:23:25.953 10:12:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:25.953 10:12:10 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:25.953 00:23:25.953 10:12:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:25.953 10:12:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:25.953 10:12:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:23:25.953 10:12:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:25.953 10:12:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:25.953 10:12:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:25.953 10:12:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:23:25.953 10:12:11 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:27.326 0 00:23:27.326 10:12:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:23:27.326 10:12:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:27.326 10:12:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:27.326 10:12:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:27.326 10:12:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # killprocess 491597 00:23:27.326 10:12:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@950 -- # '[' -z 491597 ']' 00:23:27.326 10:12:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # kill -0 491597 00:23:27.326 10:12:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # uname 00:23:27.326 10:12:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:27.326 10:12:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 491597 00:23:27.326 10:12:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:27.326 
10:12:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:27.326 10:12:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@968 -- # echo 'killing process with pid 491597' 00:23:27.326 killing process with pid 491597 00:23:27.326 10:12:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@969 -- # kill 491597 00:23:27.326 10:12:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@974 -- # wait 491597 00:23:27.585 10:12:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:27.585 10:12:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:27.585 10:12:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:27.585 10:12:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:27.585 10:12:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:23:27.585 10:12:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:27.585 10:12:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:27.585 10:12:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:27.585 10:12:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:23:27.585 10:12:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:27.585 10:12:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:23:27.586 10:12:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:23:27.586 10:12:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # sort -u 00:23:27.586 10:12:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1613 -- # cat 00:23:27.586 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:23:27.586 [2024-07-25 10:12:10.216316] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:23:27.586 [2024-07-25 10:12:10.216423] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid491597 ] 00:23:27.586 EAL: No free 2048 kB hugepages reported on node 1 00:23:27.586 [2024-07-25 10:12:10.286133] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:27.586 [2024-07-25 10:12:10.398209] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:27.586 [2024-07-25 10:12:11.008958] bdev.c:4633:bdev_name_add: *ERROR*: Bdev name e0dd29ca-2c4d-4f9b-96c6-02e544369010 already exists 00:23:27.586 [2024-07-25 10:12:11.008999] bdev.c:7755:bdev_register: *ERROR*: Unable to add uuid:e0dd29ca-2c4d-4f9b-96c6-02e544369010 alias for bdev NVMe1n1 00:23:27.586 [2024-07-25 10:12:11.009014] bdev_nvme.c:4318:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:23:27.586 Running I/O for 1 seconds... 
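The attach sequence above is deliberately error-driven: with NVMe0 already attached to cnode1 at 10.0.0.2:4420, every re-attach under the same controller name is expected to return JSON-RPC error -114, and the NOT wrapper makes the test pass only when the call fails. Condensed, against the bdevperf RPC socket (same assumed paths as the sketch above; RPC is a hypothetical shorthand):

    RPC='scripts/rpc.py -s /var/tmp/bdevperf.sock'
    # Same name and path, different hostnqn -> -114 "already exists with the specified network path"
    $RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
         -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001
    # Same name, different subsystem (cnode2)   -> -114
    # Same name with -x disable                 -> -114 "multipath is disabled"
    # Same name and path with -x failover       -> -114
    # A genuinely new path on port 4421 is accepted, giving NVMe0 a second path;
    # after NVMe1 is attached to the same subsystem as well:
    $RPC bdev_nvme_get_controllers | grep -c NVMe    # prints 2

The bdev_name_add/spdk_bdev_register errors in the try.txt dump appear to be part of the same scenario: NVMe1 reaches the namespace already exposed through NVMe0, so its bdev UUID collides and NVMe1n1 cannot register; the latency summary that follows shows the write workload still ran (about 17.6k IOPS on NVMe0n1).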
00:23:27.586
00:23:27.586 Latency(us)
00:23:27.586 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:27.586 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096)
00:23:27.586 NVMe0n1 : 1.01 17628.98 68.86 0.00 0.00 7231.37 3956.43 11116.85
00:23:27.586 ===================================================================================================================
00:23:27.586 Total : 17628.98 68.86 0.00 0.00 7231.37 3956.43 11116.85
00:23:27.586 Received shutdown signal, test time was about 1.000000 seconds
00:23:27.586
00:23:27.586 Latency(us)
00:23:27.586 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:27.586 ===================================================================================================================
00:23:27.586 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:23:27.586 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt ---
00:23:27.586 10:12:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1618 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:27.586 10:12:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:23:27.586 10:12:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@108 -- # nvmftestfini 00:23:27.586 10:12:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:27.586 10:12:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@117 -- # sync 00:23:27.586 10:12:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:27.586 10:12:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@120 -- # set +e 00:23:27.586 10:12:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:27.586 10:12:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:27.586 rmmod nvme_tcp 00:23:27.586 rmmod nvme_fabrics 00:23:27.586 rmmod nvme_keyring 00:23:27.586 10:12:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:27.586 10:12:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set -e 00:23:27.586 10:12:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # return 0 00:23:27.586 10:12:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@489 -- # '[' -n 491424 ']' 00:23:27.586 10:12:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@490 -- # killprocess 491424 00:23:27.586 10:12:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@950 -- # '[' -z 491424 ']' 00:23:27.586 10:12:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # kill -0 491424 00:23:27.586 10:12:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # uname 00:23:27.586 10:12:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:27.586 10:12:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 491424 00:23:27.586 10:12:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:23:27.586 10:12:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:23:27.586 10:12:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@968 -- # echo
'killing process with pid 491424' 00:23:27.586 killing process with pid 491424 00:23:27.586 10:12:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@969 -- # kill 491424 00:23:27.586 10:12:12 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@974 -- # wait 491424 00:23:28.152 10:12:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:28.152 10:12:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:28.152 10:12:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:28.152 10:12:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:28.152 10:12:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:28.152 10:12:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:28.152 10:12:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:28.152 10:12:13 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:30.100 10:12:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:30.100 00:23:30.100 real 0m8.107s 00:23:30.100 user 0m12.659s 00:23:30.100 sys 0m2.782s 00:23:30.100 10:12:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:30.100 10:12:15 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:23:30.100 ************************************ 00:23:30.100 END TEST nvmf_multicontroller 00:23:30.100 ************************************ 00:23:30.100 10:12:15 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:23:30.100 10:12:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:23:30.100 10:12:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:30.100 10:12:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.100 ************************************ 00:23:30.100 START TEST nvmf_aer 00:23:30.100 ************************************ 00:23:30.100 10:12:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:23:30.100 * Looking for test storage... 
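Before the new test's own bring-up starts: the nvmf_multicontroller teardown recorded just above reduces to a few cleanup steps. A rough sketch, with the process ID and interface name as captured in this run (rpc.py path assumed as before):

    # Drop the two subsystems, then let nvmftestfini unload the initiator
    # modules and kill the target (pid 491424 in this run).
    scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2
    modprobe -v -r nvme-tcp        # rmmod cascade: nvme_tcp, nvme_fabrics, nvme_keyring
    modprobe -v -r nvme-fabrics
    kill 491424
    ip -4 addr flush cvl_0_1       # nvmf_tcp_fini

The nvmf_aer test that starts next repeats the same bring-up pattern before exercising asynchronous event reporting.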
00:23:30.100 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:30.100 10:12:15 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:30.100 10:12:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:23:30.100 10:12:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:30.100 10:12:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:30.100 10:12:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:30.100 10:12:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:30.100 10:12:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:30.100 10:12:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:30.100 10:12:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:30.100 10:12:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:30.100 10:12:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:30.100 10:12:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:30.100 10:12:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:23:30.100 10:12:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:23:30.100 10:12:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:30.100 10:12:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:30.100 10:12:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:30.100 10:12:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:30.101 10:12:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:30.101 10:12:15 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:30.101 10:12:15 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:30.101 10:12:15 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:30.101 10:12:15 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:30.101 10:12:15 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:30.101 10:12:15 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:30.101 10:12:15 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:23:30.101 10:12:15 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:30.101 10:12:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@47 -- # : 0 00:23:30.101 10:12:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:30.101 10:12:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:30.101 10:12:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:30.101 10:12:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:30.101 10:12:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:30.101 10:12:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:30.101 10:12:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:30.101 10:12:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:30.101 10:12:15 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:23:30.101 10:12:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:30.101 10:12:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:30.101 10:12:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:30.101 10:12:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:30.101 10:12:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:30.101 10:12:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@628 -- 
# xtrace_disable_per_cmd _remove_spdk_ns 00:23:30.101 10:12:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:30.101 10:12:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:30.101 10:12:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:30.101 10:12:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:30.101 10:12:15 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@285 -- # xtrace_disable 00:23:30.101 10:12:15 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:32.630 10:12:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:32.630 10:12:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # pci_devs=() 00:23:32.630 10:12:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:32.630 10:12:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:32.630 10:12:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:32.630 10:12:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:32.630 10:12:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:32.630 10:12:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@295 -- # net_devs=() 00:23:32.630 10:12:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:32.630 10:12:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@296 -- # e810=() 00:23:32.630 10:12:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@296 -- # local -ga e810 00:23:32.630 10:12:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # x722=() 00:23:32.630 10:12:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # local -ga x722 00:23:32.630 10:12:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # mlx=() 00:23:32.630 10:12:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # local -ga mlx 00:23:32.630 10:12:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:32.630 10:12:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:32.630 10:12:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:32.630 10:12:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:32.630 10:12:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:32.630 10:12:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:32.630 10:12:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:32.630 10:12:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:32.630 10:12:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:32.630 10:12:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:32.630 10:12:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:32.630 10:12:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:32.630 10:12:17 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:32.630 10:12:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:32.630 10:12:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:32.630 10:12:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:32.630 10:12:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:32.630 10:12:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:32.630 10:12:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:23:32.630 Found 0000:84:00.0 (0x8086 - 0x159b) 00:23:32.630 10:12:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:32.630 10:12:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:32.630 10:12:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:32.630 10:12:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:32.631 10:12:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:32.631 10:12:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:32.631 10:12:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:23:32.631 Found 0000:84:00.1 (0x8086 - 0x159b) 00:23:32.631 10:12:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:32.631 10:12:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:32.631 10:12:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:32.631 10:12:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:32.631 10:12:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:32.631 10:12:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:32.631 10:12:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:32.631 10:12:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:32.631 10:12:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:32.631 10:12:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:32.631 10:12:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:32.631 10:12:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:32.631 10:12:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:32.631 10:12:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:32.631 10:12:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:32.631 10:12:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:23:32.631 Found net devices under 0000:84:00.0: cvl_0_0 00:23:32.631 10:12:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:32.631 10:12:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:32.631 10:12:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:32.631 10:12:17 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:32.631 10:12:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:32.631 10:12:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:32.631 10:12:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:32.631 10:12:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:32.631 10:12:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:23:32.631 Found net devices under 0000:84:00.1: cvl_0_1 00:23:32.631 10:12:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:32.631 10:12:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:32.631 10:12:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # is_hw=yes 00:23:32.631 10:12:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:32.631 10:12:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:32.631 10:12:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:32.631 10:12:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:32.631 10:12:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:32.631 10:12:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:32.631 10:12:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:32.631 10:12:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:32.631 10:12:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:32.631 10:12:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:32.631 10:12:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:32.631 10:12:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:32.631 10:12:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:32.631 10:12:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:32.631 10:12:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:32.631 10:12:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:32.890 10:12:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:32.890 10:12:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:32.890 10:12:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:32.890 10:12:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:32.890 10:12:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:32.890 10:12:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:32.890 10:12:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:32.890 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of 
data. 00:23:32.890 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.131 ms 00:23:32.890 00:23:32.890 --- 10.0.0.2 ping statistics --- 00:23:32.890 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:32.890 rtt min/avg/max/mdev = 0.131/0.131/0.131/0.000 ms 00:23:32.890 10:12:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:32.890 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:32.890 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.136 ms 00:23:32.890 00:23:32.890 --- 10.0.0.1 ping statistics --- 00:23:32.890 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:32.890 rtt min/avg/max/mdev = 0.136/0.136/0.136/0.000 ms 00:23:32.890 10:12:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:32.890 10:12:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # return 0 00:23:32.890 10:12:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:32.890 10:12:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:32.890 10:12:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:32.890 10:12:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:32.890 10:12:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:32.890 10:12:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:32.890 10:12:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:32.890 10:12:17 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:23:32.890 10:12:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:32.890 10:12:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:32.890 10:12:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:32.890 10:12:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@481 -- # nvmfpid=493839 00:23:32.890 10:12:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:32.890 10:12:17 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # waitforlisten 493839 00:23:32.890 10:12:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@831 -- # '[' -z 493839 ']' 00:23:32.890 10:12:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:32.890 10:12:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:32.890 10:12:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:32.890 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:32.890 10:12:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:32.890 10:12:17 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:32.890 [2024-07-25 10:12:17.971379] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
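For a phy (physical NIC) run, the nvmftestinit plumbing above moves one port of the detected E810 pair into a network namespace so that target and initiator traffic actually traverse the wire; condensed from the xtrace:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target-side NIC into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator address
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator

The target (nvmf_tgt, pid 493839 here) is then started inside the namespace via ip netns exec, which is why the EAL initialization below runs under the cvl_0_0_ns_spdk prefix.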
00:23:32.890 [2024-07-25 10:12:17.971503] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:32.890 EAL: No free 2048 kB hugepages reported on node 1 00:23:32.890 [2024-07-25 10:12:18.049753] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:33.148 [2024-07-25 10:12:18.177947] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:33.148 [2024-07-25 10:12:18.178011] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:33.148 [2024-07-25 10:12:18.178027] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:33.148 [2024-07-25 10:12:18.178040] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:33.148 [2024-07-25 10:12:18.178052] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:33.148 [2024-07-25 10:12:18.178145] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:33.148 [2024-07-25 10:12:18.178221] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:33.148 [2024-07-25 10:12:18.178275] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:23:33.148 [2024-07-25 10:12:18.178278] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:33.148 10:12:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:33.148 10:12:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # return 0 00:23:33.148 10:12:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:33.148 10:12:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:33.148 10:12:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:33.406 10:12:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:33.406 10:12:18 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:33.406 10:12:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:33.406 10:12:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:33.406 [2024-07-25 10:12:18.349220] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:33.406 10:12:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:33.406 10:12:18 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:23:33.406 10:12:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:33.406 10:12:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:33.406 Malloc0 00:23:33.406 10:12:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:33.406 10:12:18 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:23:33.406 10:12:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:33.406 10:12:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:33.406 10:12:18 
nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:33.406 10:12:18 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:33.406 10:12:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:33.406 10:12:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:33.406 10:12:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:33.406 10:12:18 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:33.406 10:12:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:33.406 10:12:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:33.406 [2024-07-25 10:12:18.403951] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:33.406 10:12:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:33.406 10:12:18 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:23:33.406 10:12:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:33.406 10:12:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:33.406 [ 00:23:33.406 { 00:23:33.406 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:23:33.406 "subtype": "Discovery", 00:23:33.406 "listen_addresses": [], 00:23:33.406 "allow_any_host": true, 00:23:33.406 "hosts": [] 00:23:33.406 }, 00:23:33.406 { 00:23:33.406 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:33.406 "subtype": "NVMe", 00:23:33.406 "listen_addresses": [ 00:23:33.406 { 00:23:33.406 "trtype": "TCP", 00:23:33.406 "adrfam": "IPv4", 00:23:33.406 "traddr": "10.0.0.2", 00:23:33.406 "trsvcid": "4420" 00:23:33.406 } 00:23:33.406 ], 00:23:33.406 "allow_any_host": true, 00:23:33.406 "hosts": [], 00:23:33.406 "serial_number": "SPDK00000000000001", 00:23:33.406 "model_number": "SPDK bdev Controller", 00:23:33.406 "max_namespaces": 2, 00:23:33.406 "min_cntlid": 1, 00:23:33.406 "max_cntlid": 65519, 00:23:33.406 "namespaces": [ 00:23:33.406 { 00:23:33.406 "nsid": 1, 00:23:33.406 "bdev_name": "Malloc0", 00:23:33.406 "name": "Malloc0", 00:23:33.406 "nguid": "C0B43C0DEC4F44E6BE25A76B5C0FA0F3", 00:23:33.406 "uuid": "c0b43c0d-ec4f-44e6-be25-a76b5c0fa0f3" 00:23:33.406 } 00:23:33.406 ] 00:23:33.406 } 00:23:33.406 ] 00:23:33.406 10:12:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:33.406 10:12:18 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:23:33.406 10:12:18 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:23:33.406 10:12:18 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=493968 00:23:33.407 10:12:18 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:23:33.407 10:12:18 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:23:33.407 10:12:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1265 -- # local i=0 00:23:33.407 10:12:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:23:33.407 10:12:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 0 -lt 200 ']' 00:23:33.407 10:12:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=1 00:23:33.407 10:12:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:23:33.407 EAL: No free 2048 kB hugepages reported on node 1 00:23:33.407 10:12:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:33.407 10:12:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 1 -lt 200 ']' 00:23:33.407 10:12:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=2 00:23:33.407 10:12:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:23:33.664 10:12:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:33.664 10:12:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:23:33.664 10:12:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # return 0 00:23:33.664 10:12:18 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:23:33.664 10:12:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:33.664 10:12:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:33.664 Malloc1 00:23:33.664 10:12:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:33.664 10:12:18 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:23:33.664 10:12:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:33.664 10:12:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:33.664 10:12:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:33.664 10:12:18 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:23:33.664 10:12:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:33.664 10:12:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:33.664 [ 00:23:33.664 { 00:23:33.664 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:23:33.664 "subtype": "Discovery", 00:23:33.664 "listen_addresses": [], 00:23:33.664 "allow_any_host": true, 00:23:33.664 "hosts": [] 00:23:33.664 }, 00:23:33.664 { 00:23:33.664 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:33.664 "subtype": "NVMe", 00:23:33.664 "listen_addresses": [ 00:23:33.664 { 00:23:33.664 "trtype": "TCP", 00:23:33.664 "adrfam": "IPv4", 00:23:33.664 "traddr": "10.0.0.2", 00:23:33.664 "trsvcid": "4420" 00:23:33.664 } 00:23:33.664 ], 00:23:33.664 "allow_any_host": true, 00:23:33.664 "hosts": [], 00:23:33.664 "serial_number": "SPDK00000000000001", 00:23:33.664 "model_number": "SPDK bdev Controller", 00:23:33.664 "max_namespaces": 2, 00:23:33.664 "min_cntlid": 1, 00:23:33.664 "max_cntlid": 65519, 00:23:33.664 "namespaces": [ 00:23:33.664 { 00:23:33.664 "nsid": 1, 00:23:33.664 "bdev_name": "Malloc0", 00:23:33.664 "name": "Malloc0", 00:23:33.664 "nguid": "C0B43C0DEC4F44E6BE25A76B5C0FA0F3", 00:23:33.664 "uuid": "c0b43c0d-ec4f-44e6-be25-a76b5c0fa0f3" 00:23:33.664 }, 00:23:33.664 { 00:23:33.664 "nsid": 2, 00:23:33.664 "bdev_name": "Malloc1", 00:23:33.664 "name": "Malloc1", 00:23:33.664 "nguid": 
"C31335A96CF14809B577616102F1932B", 00:23:33.664 "uuid": "c31335a9-6cf1-4809-b577-616102f1932b" 00:23:33.664 } 00:23:33.664 ] 00:23:33.664 } 00:23:33.664 ] 00:23:33.664 10:12:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:33.664 10:12:18 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 493968 00:23:33.664 Asynchronous Event Request test 00:23:33.664 Attaching to 10.0.0.2 00:23:33.664 Attached to 10.0.0.2 00:23:33.664 Registering asynchronous event callbacks... 00:23:33.664 Starting namespace attribute notice tests for all controllers... 00:23:33.664 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:23:33.664 aer_cb - Changed Namespace 00:23:33.664 Cleaning up... 00:23:33.664 10:12:18 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:23:33.664 10:12:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:33.664 10:12:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:33.664 10:12:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:33.664 10:12:18 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:23:33.664 10:12:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:33.664 10:12:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:33.664 10:12:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:33.664 10:12:18 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:33.664 10:12:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:33.664 10:12:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:33.664 10:12:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:33.664 10:12:18 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:23:33.664 10:12:18 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:23:33.664 10:12:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:33.664 10:12:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # sync 00:23:33.664 10:12:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:33.664 10:12:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@120 -- # set +e 00:23:33.664 10:12:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:33.664 10:12:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:33.664 rmmod nvme_tcp 00:23:33.664 rmmod nvme_fabrics 00:23:33.664 rmmod nvme_keyring 00:23:33.664 10:12:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:33.664 10:12:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set -e 00:23:33.664 10:12:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # return 0 00:23:33.664 10:12:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@489 -- # '[' -n 493839 ']' 00:23:33.664 10:12:18 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@490 -- # killprocess 493839 00:23:33.664 10:12:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@950 -- # '[' -z 493839 ']' 00:23:33.664 10:12:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # kill -0 493839 00:23:33.664 10:12:18 nvmf_tcp.nvmf_host.nvmf_aer -- 
common/autotest_common.sh@955 -- # uname 00:23:33.664 10:12:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:33.664 10:12:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 493839 00:23:33.922 10:12:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:33.922 10:12:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:33.922 10:12:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@968 -- # echo 'killing process with pid 493839' 00:23:33.922 killing process with pid 493839 00:23:33.922 10:12:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@969 -- # kill 493839 00:23:33.922 10:12:18 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@974 -- # wait 493839 00:23:34.180 10:12:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:34.180 10:12:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:34.180 10:12:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:34.180 10:12:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:34.180 10:12:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:34.180 10:12:19 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:34.180 10:12:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:34.180 10:12:19 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:36.081 10:12:21 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:36.081 00:23:36.081 real 0m6.047s 00:23:36.081 user 0m4.463s 00:23:36.081 sys 0m2.417s 00:23:36.081 10:12:21 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:36.081 10:12:21 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:23:36.081 ************************************ 00:23:36.081 END TEST nvmf_aer 00:23:36.081 ************************************ 00:23:36.081 10:12:21 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:23:36.081 10:12:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:23:36.081 10:12:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:36.081 10:12:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:36.081 ************************************ 00:23:36.081 START TEST nvmf_async_init 00:23:36.081 ************************************ 00:23:36.081 10:12:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:23:36.340 * Looking for test storage... 
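The nvmf_aer run that ends above reduces to a short RPC sequence. A minimal sketch of that sequence, assuming a running nvmf_tgt and SPDK's rpc.py on PATH; the sizes, NQN, and flags are the ones traced in the log, not a new configuration:

    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512 --name Malloc0
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # the aer tool is attached and waiting at this point; adding a second
    # namespace is what triggers the "Changed Namespace" notice (log page 4)
    # reported in the test output above
    rpc.py bdev_malloc_create 64 4096 --name Malloc1
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2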
00:23:36.340 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:36.340 10:12:21 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:36.340 10:12:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:23:36.340 10:12:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:36.340 10:12:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:36.340 10:12:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:36.340 10:12:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:36.340 10:12:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:36.340 10:12:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:36.340 10:12:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:36.340 10:12:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:36.340 10:12:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:36.340 10:12:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:36.340 10:12:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:23:36.340 10:12:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:23:36.340 10:12:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:36.340 10:12:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:36.340 10:12:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:36.340 10:12:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:36.340 10:12:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:36.340 10:12:21 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:36.340 10:12:21 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:36.340 10:12:21 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:36.340 10:12:21 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:36.340 10:12:21 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:36.340 10:12:21 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:36.340 10:12:21 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:23:36.340 10:12:21 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:36.340 10:12:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@47 -- # : 0 00:23:36.340 10:12:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:36.340 10:12:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:36.340 10:12:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:36.340 10:12:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:36.340 10:12:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:36.340 10:12:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:36.340 10:12:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:36.340 10:12:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:36.340 10:12:21 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:23:36.340 10:12:21 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:23:36.340 10:12:21 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:23:36.340 10:12:21 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:23:36.340 10:12:21 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:23:36.340 10:12:21 
nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:23:36.340 10:12:21 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=90d1bd9288644484a735da17c6321527 00:23:36.340 10:12:21 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:23:36.340 10:12:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:36.340 10:12:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:36.340 10:12:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:36.340 10:12:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:36.340 10:12:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:36.340 10:12:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:36.340 10:12:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:36.340 10:12:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:36.340 10:12:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:36.341 10:12:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:36.341 10:12:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@285 -- # xtrace_disable 00:23:36.341 10:12:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:38.872 10:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:38.872 10:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # pci_devs=() 00:23:38.872 10:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:38.872 10:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:38.872 10:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:38.872 10:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:38.872 10:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:38.872 10:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@295 -- # net_devs=() 00:23:38.872 10:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:38.872 10:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@296 -- # e810=() 00:23:38.872 10:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@296 -- # local -ga e810 00:23:38.872 10:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # x722=() 00:23:38.872 10:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # local -ga x722 00:23:38.872 10:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # mlx=() 00:23:38.872 10:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # local -ga mlx 00:23:38.872 10:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:38.872 10:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:38.872 10:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:38.872 10:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:38.872 10:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:38.872 10:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:38.872 10:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:38.872 10:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:38.872 10:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:38.872 10:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:38.872 10:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:38.872 10:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:38.872 10:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:38.872 10:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:38.872 10:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:38.872 10:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:38.872 10:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:38.872 10:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:38.872 10:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:23:38.872 Found 0000:84:00.0 (0x8086 - 0x159b) 00:23:38.872 10:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:38.872 10:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:38.872 10:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:38.872 10:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:38.872 10:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:38.872 10:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:38.872 10:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:23:38.872 Found 0000:84:00.1 (0x8086 - 0x159b) 00:23:38.872 10:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:38.872 10:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:38.872 10:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:38.872 10:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:38.872 10:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:38.872 10:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:38.872 10:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:38.872 10:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 
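The device probe above has matched two Intel E810 functions (8086:159b); the loop that follows resolves each PCI function to its kernel net device through sysfs. A standalone sketch of the same lookup, using the PCI addresses from this run:

    for pci in 0000:84:00.0 0000:84:00.1; do
        # each network function exposes its netdev name under its sysfs node
        for dev in /sys/bus/pci/devices/$pci/net/*; do
            echo "$pci -> ${dev##*/}"   # prints cvl_0_0 / cvl_0_1 on this host
        done
    done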
00:23:38.872 10:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:38.872 10:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:38.872 10:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:38.872 10:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:38.872 10:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:38.872 10:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:38.872 10:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:38.872 10:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:23:38.872 Found net devices under 0000:84:00.0: cvl_0_0 00:23:38.872 10:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:38.873 10:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:38.873 10:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:38.873 10:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:38.873 10:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:38.873 10:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:38.873 10:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:38.873 10:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:38.873 10:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:23:38.873 Found net devices under 0000:84:00.1: cvl_0_1 00:23:38.873 10:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:38.873 10:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:38.873 10:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # is_hw=yes 00:23:38.873 10:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:38.873 10:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:38.873 10:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:38.873 10:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:38.873 10:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:38.873 10:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:38.873 10:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:38.873 10:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:38.873 10:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:38.873 10:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:38.873 10:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@242 -- 
# NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:38.873 10:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:38.873 10:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:38.873 10:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:38.873 10:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:38.873 10:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:38.873 10:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:38.873 10:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:38.873 10:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:38.873 10:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:38.873 10:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:38.873 10:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:38.873 10:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:38.873 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:38.873 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.249 ms 00:23:38.873 00:23:38.873 --- 10.0.0.2 ping statistics --- 00:23:38.873 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:38.873 rtt min/avg/max/mdev = 0.249/0.249/0.249/0.000 ms 00:23:38.873 10:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:38.873 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:38.873 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.132 ms 00:23:38.873 00:23:38.873 --- 10.0.0.1 ping statistics --- 00:23:38.873 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:38.873 rtt min/avg/max/mdev = 0.132/0.132/0.132/0.000 ms 00:23:38.873 10:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:38.873 10:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # return 0 00:23:38.873 10:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:38.873 10:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:38.873 10:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:38.873 10:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:38.873 10:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:38.873 10:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:38.873 10:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:38.873 10:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:23:38.873 10:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:38.873 10:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:38.873 10:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:38.873 10:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@481 -- # nvmfpid=496039 00:23:38.873 10:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:23:38.873 10:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # waitforlisten 496039 00:23:38.873 10:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@831 -- # '[' -z 496039 ']' 00:23:38.873 10:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:38.873 10:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:38.873 10:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:38.873 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:38.873 10:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:38.873 10:12:23 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:38.873 [2024-07-25 10:12:23.985408] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
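The namespace plumbing traced above (common.sh@248-268) is the whole test topology: the target-side port is moved into a private netns carrying 10.0.0.2 while the initiator side keeps 10.0.0.1, and the two cross-namespace pings confirm the path before nvmf_tgt comes up. A condensed sketch of the same setup, with the interface and namespace names from this run:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target side lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator address
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator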
00:23:38.873 [2024-07-25 10:12:23.985545] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:39.131 EAL: No free 2048 kB hugepages reported on node 1 00:23:39.131 [2024-07-25 10:12:24.087985] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:39.131 [2024-07-25 10:12:24.208665] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:39.131 [2024-07-25 10:12:24.208728] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:39.131 [2024-07-25 10:12:24.208745] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:39.131 [2024-07-25 10:12:24.208758] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:39.131 [2024-07-25 10:12:24.208770] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:39.131 [2024-07-25 10:12:24.208800] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:40.065 10:12:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:40.065 10:12:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # return 0 00:23:40.065 10:12:24 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:40.065 10:12:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:40.065 10:12:24 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:40.065 10:12:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:40.065 10:12:25 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:23:40.065 10:12:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:40.065 10:12:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:40.065 [2024-07-25 10:12:25.017513] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:40.065 10:12:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:40.065 10:12:25 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:23:40.065 10:12:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:40.065 10:12:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:40.065 null0 00:23:40.065 10:12:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:40.065 10:12:25 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:23:40.065 10:12:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:40.065 10:12:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:40.065 10:12:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:40.065 10:12:25 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:23:40.065 10:12:25 
nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:40.065 10:12:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:40.065 10:12:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:40.065 10:12:25 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 90d1bd9288644484a735da17c6321527 00:23:40.065 10:12:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:40.065 10:12:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:40.065 10:12:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:40.065 10:12:25 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:40.065 10:12:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:40.065 10:12:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:40.065 [2024-07-25 10:12:25.057755] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:40.065 10:12:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:40.065 10:12:25 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:23:40.065 10:12:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:40.065 10:12:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:40.323 nvme0n1 00:23:40.323 10:12:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:40.323 10:12:25 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:23:40.323 10:12:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:40.323 10:12:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:40.323 [ 00:23:40.323 { 00:23:40.323 "name": "nvme0n1", 00:23:40.323 "aliases": [ 00:23:40.323 "90d1bd92-8864-4484-a735-da17c6321527" 00:23:40.323 ], 00:23:40.323 "product_name": "NVMe disk", 00:23:40.323 "block_size": 512, 00:23:40.323 "num_blocks": 2097152, 00:23:40.323 "uuid": "90d1bd92-8864-4484-a735-da17c6321527", 00:23:40.323 "assigned_rate_limits": { 00:23:40.323 "rw_ios_per_sec": 0, 00:23:40.323 "rw_mbytes_per_sec": 0, 00:23:40.323 "r_mbytes_per_sec": 0, 00:23:40.323 "w_mbytes_per_sec": 0 00:23:40.323 }, 00:23:40.323 "claimed": false, 00:23:40.323 "zoned": false, 00:23:40.323 "supported_io_types": { 00:23:40.323 "read": true, 00:23:40.323 "write": true, 00:23:40.323 "unmap": false, 00:23:40.323 "flush": true, 00:23:40.323 "reset": true, 00:23:40.323 "nvme_admin": true, 00:23:40.323 "nvme_io": true, 00:23:40.323 "nvme_io_md": false, 00:23:40.323 "write_zeroes": true, 00:23:40.323 "zcopy": false, 00:23:40.323 "get_zone_info": false, 00:23:40.323 "zone_management": false, 00:23:40.323 "zone_append": false, 00:23:40.323 "compare": true, 00:23:40.323 "compare_and_write": true, 00:23:40.323 "abort": true, 00:23:40.323 "seek_hole": false, 00:23:40.323 "seek_data": false, 00:23:40.323 "copy": true, 00:23:40.323 "nvme_iov_md": 
false 00:23:40.323 }, 00:23:40.323 "memory_domains": [ 00:23:40.323 { 00:23:40.323 "dma_device_id": "system", 00:23:40.323 "dma_device_type": 1 00:23:40.323 } 00:23:40.323 ], 00:23:40.323 "driver_specific": { 00:23:40.323 "nvme": [ 00:23:40.323 { 00:23:40.323 "trid": { 00:23:40.324 "trtype": "TCP", 00:23:40.324 "adrfam": "IPv4", 00:23:40.324 "traddr": "10.0.0.2", 00:23:40.324 "trsvcid": "4420", 00:23:40.324 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:23:40.324 }, 00:23:40.324 "ctrlr_data": { 00:23:40.324 "cntlid": 1, 00:23:40.324 "vendor_id": "0x8086", 00:23:40.324 "model_number": "SPDK bdev Controller", 00:23:40.324 "serial_number": "00000000000000000000", 00:23:40.324 "firmware_revision": "24.09", 00:23:40.324 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:40.324 "oacs": { 00:23:40.324 "security": 0, 00:23:40.324 "format": 0, 00:23:40.324 "firmware": 0, 00:23:40.324 "ns_manage": 0 00:23:40.324 }, 00:23:40.324 "multi_ctrlr": true, 00:23:40.324 "ana_reporting": false 00:23:40.324 }, 00:23:40.324 "vs": { 00:23:40.324 "nvme_version": "1.3" 00:23:40.324 }, 00:23:40.324 "ns_data": { 00:23:40.324 "id": 1, 00:23:40.324 "can_share": true 00:23:40.324 } 00:23:40.324 } 00:23:40.324 ], 00:23:40.324 "mp_policy": "active_passive" 00:23:40.324 } 00:23:40.324 } 00:23:40.324 ] 00:23:40.324 10:12:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:40.324 10:12:25 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:23:40.324 10:12:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:40.324 10:12:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:40.324 [2024-07-25 10:12:25.310930] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:40.324 [2024-07-25 10:12:25.311020] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x78f700 (9): Bad file descriptor 00:23:40.324 [2024-07-25 10:12:25.443588] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:23:40.324 10:12:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:40.324 10:12:25 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:23:40.324 10:12:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:40.324 10:12:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:40.324 [ 00:23:40.324 { 00:23:40.324 "name": "nvme0n1", 00:23:40.324 "aliases": [ 00:23:40.324 "90d1bd92-8864-4484-a735-da17c6321527" 00:23:40.324 ], 00:23:40.324 "product_name": "NVMe disk", 00:23:40.324 "block_size": 512, 00:23:40.324 "num_blocks": 2097152, 00:23:40.324 "uuid": "90d1bd92-8864-4484-a735-da17c6321527", 00:23:40.324 "assigned_rate_limits": { 00:23:40.324 "rw_ios_per_sec": 0, 00:23:40.324 "rw_mbytes_per_sec": 0, 00:23:40.324 "r_mbytes_per_sec": 0, 00:23:40.324 "w_mbytes_per_sec": 0 00:23:40.324 }, 00:23:40.324 "claimed": false, 00:23:40.324 "zoned": false, 00:23:40.324 "supported_io_types": { 00:23:40.324 "read": true, 00:23:40.324 "write": true, 00:23:40.324 "unmap": false, 00:23:40.324 "flush": true, 00:23:40.324 "reset": true, 00:23:40.324 "nvme_admin": true, 00:23:40.324 "nvme_io": true, 00:23:40.324 "nvme_io_md": false, 00:23:40.324 "write_zeroes": true, 00:23:40.324 "zcopy": false, 00:23:40.324 "get_zone_info": false, 00:23:40.324 "zone_management": false, 00:23:40.324 "zone_append": false, 00:23:40.324 "compare": true, 00:23:40.324 "compare_and_write": true, 00:23:40.324 "abort": true, 00:23:40.324 "seek_hole": false, 00:23:40.324 "seek_data": false, 00:23:40.324 "copy": true, 00:23:40.324 "nvme_iov_md": false 00:23:40.324 }, 00:23:40.324 "memory_domains": [ 00:23:40.324 { 00:23:40.324 "dma_device_id": "system", 00:23:40.324 "dma_device_type": 1 00:23:40.324 } 00:23:40.324 ], 00:23:40.324 "driver_specific": { 00:23:40.324 "nvme": [ 00:23:40.324 { 00:23:40.324 "trid": { 00:23:40.324 "trtype": "TCP", 00:23:40.324 "adrfam": "IPv4", 00:23:40.324 "traddr": "10.0.0.2", 00:23:40.324 "trsvcid": "4420", 00:23:40.324 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:23:40.324 }, 00:23:40.324 "ctrlr_data": { 00:23:40.324 "cntlid": 2, 00:23:40.324 "vendor_id": "0x8086", 00:23:40.324 "model_number": "SPDK bdev Controller", 00:23:40.324 "serial_number": "00000000000000000000", 00:23:40.324 "firmware_revision": "24.09", 00:23:40.324 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:40.324 "oacs": { 00:23:40.324 "security": 0, 00:23:40.324 "format": 0, 00:23:40.324 "firmware": 0, 00:23:40.324 "ns_manage": 0 00:23:40.324 }, 00:23:40.324 "multi_ctrlr": true, 00:23:40.324 "ana_reporting": false 00:23:40.324 }, 00:23:40.324 "vs": { 00:23:40.324 "nvme_version": "1.3" 00:23:40.324 }, 00:23:40.324 "ns_data": { 00:23:40.324 "id": 1, 00:23:40.324 "can_share": true 00:23:40.324 } 00:23:40.324 } 00:23:40.324 ], 00:23:40.324 "mp_policy": "active_passive" 00:23:40.324 } 00:23:40.324 } 00:23:40.324 ] 00:23:40.324 10:12:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:40.324 10:12:25 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:40.324 10:12:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:40.324 10:12:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:40.324 10:12:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:40.324 10:12:25 
nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:23:40.324 10:12:25 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.ojGQSBay5H 00:23:40.324 10:12:25 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:23:40.324 10:12:25 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.ojGQSBay5H 00:23:40.324 10:12:25 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:23:40.324 10:12:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:40.324 10:12:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:40.582 10:12:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:40.582 10:12:25 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:23:40.582 10:12:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:40.582 10:12:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:40.582 [2024-07-25 10:12:25.499612] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:40.582 [2024-07-25 10:12:25.499770] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:40.582 10:12:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:40.582 10:12:25 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.ojGQSBay5H 00:23:40.582 10:12:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:40.582 10:12:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:40.582 [2024-07-25 10:12:25.507627] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:23:40.582 10:12:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:40.582 10:12:25 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.ojGQSBay5H 00:23:40.582 10:12:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:40.582 10:12:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:40.582 [2024-07-25 10:12:25.515658] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:40.582 [2024-07-25 10:12:25.515731] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:23:40.582 nvme0n1 00:23:40.582 10:12:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:40.582 10:12:25 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:23:40.582 10:12:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 
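The TLS leg traced here exercises the experimental PSK path (both the target and the initiator warn that the path-based PSK is scheduled for removal in v24.09). Condensed into the bare RPC sequence, with the key material and NQNs exactly as they appear in this run; the bdev listing that follows below confirms the controller attached on the secured 4421 listener:

    KEY=$(mktemp)
    echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: > "$KEY"
    chmod 0600 "$KEY"                                    # key file must not be world-readable
    rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel
    rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk "$KEY"
    rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 \
        -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk "$KEY"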
00:23:40.582 10:12:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:40.582 [ 00:23:40.582 { 00:23:40.582 "name": "nvme0n1", 00:23:40.582 "aliases": [ 00:23:40.582 "90d1bd92-8864-4484-a735-da17c6321527" 00:23:40.582 ], 00:23:40.582 "product_name": "NVMe disk", 00:23:40.582 "block_size": 512, 00:23:40.582 "num_blocks": 2097152, 00:23:40.582 "uuid": "90d1bd92-8864-4484-a735-da17c6321527", 00:23:40.582 "assigned_rate_limits": { 00:23:40.582 "rw_ios_per_sec": 0, 00:23:40.582 "rw_mbytes_per_sec": 0, 00:23:40.582 "r_mbytes_per_sec": 0, 00:23:40.582 "w_mbytes_per_sec": 0 00:23:40.582 }, 00:23:40.582 "claimed": false, 00:23:40.582 "zoned": false, 00:23:40.582 "supported_io_types": { 00:23:40.582 "read": true, 00:23:40.582 "write": true, 00:23:40.582 "unmap": false, 00:23:40.582 "flush": true, 00:23:40.582 "reset": true, 00:23:40.582 "nvme_admin": true, 00:23:40.582 "nvme_io": true, 00:23:40.582 "nvme_io_md": false, 00:23:40.582 "write_zeroes": true, 00:23:40.582 "zcopy": false, 00:23:40.582 "get_zone_info": false, 00:23:40.582 "zone_management": false, 00:23:40.582 "zone_append": false, 00:23:40.582 "compare": true, 00:23:40.582 "compare_and_write": true, 00:23:40.582 "abort": true, 00:23:40.582 "seek_hole": false, 00:23:40.582 "seek_data": false, 00:23:40.582 "copy": true, 00:23:40.582 "nvme_iov_md": false 00:23:40.582 }, 00:23:40.582 "memory_domains": [ 00:23:40.582 { 00:23:40.582 "dma_device_id": "system", 00:23:40.582 "dma_device_type": 1 00:23:40.582 } 00:23:40.582 ], 00:23:40.582 "driver_specific": { 00:23:40.582 "nvme": [ 00:23:40.582 { 00:23:40.582 "trid": { 00:23:40.582 "trtype": "TCP", 00:23:40.582 "adrfam": "IPv4", 00:23:40.582 "traddr": "10.0.0.2", 00:23:40.582 "trsvcid": "4421", 00:23:40.582 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:23:40.582 }, 00:23:40.582 "ctrlr_data": { 00:23:40.582 "cntlid": 3, 00:23:40.582 "vendor_id": "0x8086", 00:23:40.582 "model_number": "SPDK bdev Controller", 00:23:40.582 "serial_number": "00000000000000000000", 00:23:40.582 "firmware_revision": "24.09", 00:23:40.582 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:23:40.582 "oacs": { 00:23:40.582 "security": 0, 00:23:40.582 "format": 0, 00:23:40.582 "firmware": 0, 00:23:40.582 "ns_manage": 0 00:23:40.582 }, 00:23:40.582 "multi_ctrlr": true, 00:23:40.582 "ana_reporting": false 00:23:40.582 }, 00:23:40.582 "vs": { 00:23:40.582 "nvme_version": "1.3" 00:23:40.582 }, 00:23:40.582 "ns_data": { 00:23:40.582 "id": 1, 00:23:40.582 "can_share": true 00:23:40.582 } 00:23:40.582 } 00:23:40.582 ], 00:23:40.582 "mp_policy": "active_passive" 00:23:40.582 } 00:23:40.582 } 00:23:40.582 ] 00:23:40.582 10:12:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:40.582 10:12:25 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:40.582 10:12:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:40.583 10:12:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:40.583 10:12:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:40.583 10:12:25 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@75 -- # rm -f /tmp/tmp.ojGQSBay5H 00:23:40.583 10:12:25 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:23:40.583 10:12:25 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # nvmftestfini 00:23:40.583 10:12:25 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:40.583 10:12:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # sync 00:23:40.583 10:12:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:40.583 10:12:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@120 -- # set +e 00:23:40.583 10:12:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:40.583 10:12:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:40.583 rmmod nvme_tcp 00:23:40.583 rmmod nvme_fabrics 00:23:40.583 rmmod nvme_keyring 00:23:40.583 10:12:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:40.583 10:12:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set -e 00:23:40.583 10:12:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # return 0 00:23:40.583 10:12:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@489 -- # '[' -n 496039 ']' 00:23:40.583 10:12:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@490 -- # killprocess 496039 00:23:40.583 10:12:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@950 -- # '[' -z 496039 ']' 00:23:40.583 10:12:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # kill -0 496039 00:23:40.583 10:12:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@955 -- # uname 00:23:40.583 10:12:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:40.583 10:12:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 496039 00:23:40.583 10:12:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:40.583 10:12:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:40.583 10:12:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 496039' 00:23:40.583 killing process with pid 496039 00:23:40.583 10:12:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@969 -- # kill 496039 00:23:40.583 [2024-07-25 10:12:25.704449] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:23:40.583 [2024-07-25 10:12:25.704492] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:23:40.583 10:12:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@974 -- # wait 496039 00:23:40.841 10:12:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:40.841 10:12:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:40.841 10:12:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:40.841 10:12:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:40.841 10:12:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:40.841 10:12:25 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:40.841 10:12:25 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:40.841 10:12:25 
nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:43.373 10:12:28 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:43.373 00:23:43.373 real 0m6.798s 00:23:43.373 user 0m3.190s 00:23:43.373 sys 0m2.314s 00:23:43.373 10:12:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:43.373 10:12:28 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:23:43.373 ************************************ 00:23:43.373 END TEST nvmf_async_init 00:23:43.373 ************************************ 00:23:43.373 10:12:28 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:23:43.373 10:12:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:23:43.373 10:12:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:43.373 10:12:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:43.373 ************************************ 00:23:43.373 START TEST dma 00:23:43.373 ************************************ 00:23:43.373 10:12:28 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:23:43.373 * Looking for test storage... 00:23:43.373 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:43.373 10:12:28 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:43.373 10:12:28 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:23:43.373 10:12:28 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:43.373 10:12:28 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:43.373 10:12:28 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:43.373 10:12:28 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:43.373 10:12:28 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:43.373 10:12:28 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:43.373 10:12:28 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:43.373 10:12:28 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:43.373 10:12:28 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:43.373 10:12:28 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:43.373 10:12:28 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:23:43.373 10:12:28 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:23:43.373 10:12:28 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:43.373 10:12:28 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:43.373 10:12:28 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:43.373 10:12:28 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:43.373 10:12:28 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:43.373 
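
The nvmftestfini sequence traced above (closing out nvmf_async_init) reduces to a handful of host-side steps; a sketch assembled from that xtrace, with the namespace cleanup assumed to be what _remove_spdk_ns does:

sync                                  # nvmf/common.sh@117
modprobe -v -r nvme-tcp               # retried up to 20 times by the script
modprobe -v -r nvme-fabrics
kill 496039                           # stop this run's target app (pid from the log)
ip netns delete cvl_0_0_ns_spdk       # assumed body of _remove_spdk_ns
ip -4 addr flush cvl_0_1
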
10:12:28 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:43.373 10:12:28 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:43.373 10:12:28 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:43.373 10:12:28 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:43.373 10:12:28 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:43.373 10:12:28 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:43.373 10:12:28 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:23:43.373 10:12:28 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:43.373 10:12:28 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@47 -- # : 0 00:23:43.373 10:12:28 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:43.373 10:12:28 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:43.373 10:12:28 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:43.373 10:12:28 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:43.373 10:12:28 
nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:43.373 10:12:28 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:43.373 10:12:28 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:43.373 10:12:28 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:43.373 10:12:28 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:23:43.373 10:12:28 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:23:43.373 00:23:43.373 real 0m0.080s 00:23:43.373 user 0m0.031s 00:23:43.373 sys 0m0.054s 00:23:43.373 10:12:28 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:43.373 10:12:28 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:23:43.373 ************************************ 00:23:43.373 END TEST dma 00:23:43.373 ************************************ 00:23:43.373 10:12:28 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:23:43.373 10:12:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:23:43.373 10:12:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:43.373 10:12:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:43.373 ************************************ 00:23:43.373 START TEST nvmf_identify 00:23:43.373 ************************************ 00:23:43.373 10:12:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:23:43.373 * Looking for test storage... 00:23:43.374 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:43.374 10:12:28 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:43.374 10:12:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:23:43.374 10:12:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:43.374 10:12:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:43.374 10:12:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:43.374 10:12:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:43.374 10:12:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:43.374 10:12:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:43.374 10:12:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:43.374 10:12:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:43.374 10:12:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:43.374 10:12:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:43.374 10:12:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:23:43.374 10:12:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:23:43.374 10:12:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" 
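
The dma test above finishes instantly by design: host/dma.sh only has work to do on RDMA transports, so its lines 12-13 (visible in the xtrace as host/dma.sh@12 and host/dma.sh@13) amount to the guard below, reconstructed with the transport variable name assumed:

if [ "$TEST_TRANSPORT" != rdma ]; then
        exit 0          # nothing to exercise over TCP
fi
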
"--hostid=$NVME_HOSTID") 00:23:43.374 10:12:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:43.374 10:12:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:43.374 10:12:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:43.374 10:12:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:43.374 10:12:28 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:43.374 10:12:28 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:43.374 10:12:28 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:43.374 10:12:28 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:43.374 10:12:28 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:43.374 10:12:28 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:43.374 10:12:28 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:23:43.374 10:12:28 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:43.374 10:12:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:23:43.374 10:12:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:43.374 10:12:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:43.374 10:12:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:43.374 10:12:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:43.374 10:12:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:43.374 10:12:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:43.374 10:12:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:43.374 10:12:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:43.374 10:12:28 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:43.374 10:12:28 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:43.374 10:12:28 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:23:43.374 10:12:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:43.374 10:12:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:43.374 10:12:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:43.374 10:12:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:43.374 10:12:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:43.374 10:12:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:43.374 10:12:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:43.374 10:12:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:43.374 10:12:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:43.374 10:12:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:43.374 10:12:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@285 -- # xtrace_disable 00:23:43.374 10:12:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:45.905 10:12:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:45.905 10:12:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # pci_devs=() 00:23:45.905 10:12:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:45.905 10:12:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:45.905 10:12:30 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:45.905 10:12:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:45.905 10:12:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:45.905 10:12:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@295 -- # net_devs=() 00:23:45.905 10:12:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:45.905 10:12:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@296 -- # e810=() 00:23:45.905 10:12:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@296 -- # local -ga e810 00:23:45.905 10:12:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # x722=() 00:23:45.905 10:12:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # local -ga x722 00:23:45.905 10:12:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # mlx=() 00:23:45.905 10:12:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # local -ga mlx 00:23:45.905 10:12:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:45.905 10:12:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:45.905 10:12:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:45.905 10:12:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:45.905 10:12:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:45.905 10:12:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:45.905 10:12:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:45.905 10:12:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:45.905 10:12:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:45.905 10:12:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:45.905 10:12:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:45.905 10:12:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:45.905 10:12:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:45.905 10:12:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:45.905 10:12:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:45.905 10:12:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:45.905 10:12:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:45.905 10:12:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:45.905 10:12:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:23:45.905 Found 0000:84:00.0 (0x8086 - 0x159b) 00:23:45.905 10:12:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:45.905 10:12:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:45.905 10:12:30 
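
The probe above walks pci_devs classifying ports by PCI vendor/device ID; 0x8086:0x159b is an Intel E810 function bound to the ice driver, which is why 0000:84:00.0 here (and 0000:84:00.1 just below) match the e810 list. The same lookup by hand is roughly this sketch (lspci output formatting varies by system):

lspci -nn -d 8086:159b                      # list the E810 (0x159b) functions
ls /sys/bus/pci/devices/0000:84:00.0/net    # the netdev the script records: cvl_0_0
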
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:45.905 10:12:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:45.905 10:12:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:45.905 10:12:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:45.905 10:12:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:23:45.905 Found 0000:84:00.1 (0x8086 - 0x159b) 00:23:45.905 10:12:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:45.905 10:12:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:45.905 10:12:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:45.905 10:12:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:45.905 10:12:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:45.905 10:12:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:45.905 10:12:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:45.905 10:12:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:45.905 10:12:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:45.905 10:12:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:45.905 10:12:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:45.905 10:12:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:45.905 10:12:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:45.905 10:12:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:45.905 10:12:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:45.905 10:12:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:23:45.905 Found net devices under 0000:84:00.0: cvl_0_0 00:23:45.905 10:12:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:45.905 10:12:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:45.905 10:12:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:45.905 10:12:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:45.905 10:12:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:45.905 10:12:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:45.905 10:12:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:45.905 10:12:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:45.905 10:12:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:23:45.905 Found net devices under 0000:84:00.1: cvl_0_1 00:23:45.905 10:12:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:23:45.905 10:12:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:45.905 10:12:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # is_hw=yes 00:23:45.905 10:12:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:45.905 10:12:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:45.905 10:12:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:45.905 10:12:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:45.905 10:12:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:45.905 10:12:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:45.905 10:12:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:45.905 10:12:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:45.905 10:12:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:45.905 10:12:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:45.905 10:12:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:45.905 10:12:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:45.905 10:12:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:45.905 10:12:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:45.905 10:12:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:45.905 10:12:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:45.905 10:12:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:45.905 10:12:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:45.906 10:12:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:45.906 10:12:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:45.906 10:12:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:45.906 10:12:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:45.906 10:12:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:45.906 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:45.906 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.117 ms 00:23:45.906 00:23:45.906 --- 10.0.0.2 ping statistics --- 00:23:45.906 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:45.906 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:23:45.906 10:12:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:45.906 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
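
Condensed from the nvmf_tcp_init trace above: one E810 port (cvl_0_0) is moved into a private namespace to play the target, while its sibling (cvl_0_1) stays in the root namespace as the initiator. The commands, copied from the xtrace:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator address
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target address
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # let NVMe/TCP in
ping -c 1 10.0.0.2                                                 # target reachable?
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                   # initiator reachable?
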
00:23:45.906 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.082 ms 00:23:45.906 00:23:45.906 --- 10.0.0.1 ping statistics --- 00:23:45.906 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:45.906 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:23:45.906 10:12:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:45.906 10:12:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # return 0 00:23:45.906 10:12:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:45.906 10:12:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:45.906 10:12:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:45.906 10:12:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:45.906 10:12:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:45.906 10:12:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:45.906 10:12:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:45.906 10:12:30 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:23:45.906 10:12:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:45.906 10:12:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:45.906 10:12:30 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=498312 00:23:45.906 10:12:30 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:45.906 10:12:30 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:45.906 10:12:30 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 498312 00:23:45.906 10:12:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@831 -- # '[' -z 498312 ']' 00:23:45.906 10:12:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:45.906 10:12:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:45.906 10:12:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:45.906 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:45.906 10:12:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:45.906 10:12:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:45.906 [2024-07-25 10:12:30.951914] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
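
Once waitforlisten returns, the rpc_cmd calls that follow build the identify test's target configuration; as standalone invocations the same sequence is roughly (a sketch: the rpc.py path and default RPC socket are assumed, the arguments are as traced below):

./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
./scripts/rpc.py nvmf_get_subsystems      # returns the subsystem JSON dumped below
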
00:23:45.906 [2024-07-25 10:12:30.952015] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:45.906 EAL: No free 2048 kB hugepages reported on node 1 00:23:45.906 [2024-07-25 10:12:31.032057] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:46.163 [2024-07-25 10:12:31.160734] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:46.163 [2024-07-25 10:12:31.160800] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:46.163 [2024-07-25 10:12:31.160817] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:46.163 [2024-07-25 10:12:31.160830] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:46.163 [2024-07-25 10:12:31.160841] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:46.163 [2024-07-25 10:12:31.160933] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:46.164 [2024-07-25 10:12:31.160991] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:46.164 [2024-07-25 10:12:31.161017] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:23:46.164 [2024-07-25 10:12:31.161021] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:46.164 10:12:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:46.164 10:12:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # return 0 00:23:46.164 10:12:31 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:46.164 10:12:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:46.164 10:12:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:46.164 [2024-07-25 10:12:31.296867] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:46.164 10:12:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:46.164 10:12:31 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:23:46.164 10:12:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:46.164 10:12:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:46.423 10:12:31 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:23:46.423 10:12:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:46.423 10:12:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:46.423 Malloc0 00:23:46.423 10:12:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:46.423 10:12:31 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:46.423 10:12:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:46.423 10:12:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:46.423 10:12:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:23:46.423 10:12:31 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:23:46.423 10:12:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:46.423 10:12:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:46.423 10:12:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:46.423 10:12:31 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:46.423 10:12:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:46.423 10:12:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:46.423 [2024-07-25 10:12:31.382471] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:46.423 10:12:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:46.423 10:12:31 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:23:46.423 10:12:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:46.423 10:12:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:46.423 10:12:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:46.423 10:12:31 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:23:46.423 10:12:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:46.423 10:12:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:46.423 [ 00:23:46.423 { 00:23:46.423 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:23:46.423 "subtype": "Discovery", 00:23:46.423 "listen_addresses": [ 00:23:46.423 { 00:23:46.423 "trtype": "TCP", 00:23:46.423 "adrfam": "IPv4", 00:23:46.423 "traddr": "10.0.0.2", 00:23:46.423 "trsvcid": "4420" 00:23:46.423 } 00:23:46.423 ], 00:23:46.423 "allow_any_host": true, 00:23:46.423 "hosts": [] 00:23:46.423 }, 00:23:46.423 { 00:23:46.423 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:46.423 "subtype": "NVMe", 00:23:46.423 "listen_addresses": [ 00:23:46.423 { 00:23:46.423 "trtype": "TCP", 00:23:46.423 "adrfam": "IPv4", 00:23:46.423 "traddr": "10.0.0.2", 00:23:46.423 "trsvcid": "4420" 00:23:46.423 } 00:23:46.423 ], 00:23:46.423 "allow_any_host": true, 00:23:46.423 "hosts": [], 00:23:46.423 "serial_number": "SPDK00000000000001", 00:23:46.423 "model_number": "SPDK bdev Controller", 00:23:46.423 "max_namespaces": 32, 00:23:46.423 "min_cntlid": 1, 00:23:46.423 "max_cntlid": 65519, 00:23:46.423 "namespaces": [ 00:23:46.423 { 00:23:46.423 "nsid": 1, 00:23:46.423 "bdev_name": "Malloc0", 00:23:46.423 "name": "Malloc0", 00:23:46.423 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:23:46.423 "eui64": "ABCDEF0123456789", 00:23:46.423 "uuid": "ad191506-f798-47ab-977f-5d47afd23598" 00:23:46.423 } 00:23:46.423 ] 00:23:46.423 } 00:23:46.423 ] 00:23:46.423 10:12:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:46.423 10:12:31 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' 
trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:23:46.423 [2024-07-25 10:12:31.424525] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:23:46.423 [2024-07-25 10:12:31.424571] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid498341 ] 00:23:46.423 EAL: No free 2048 kB hugepages reported on node 1 00:23:46.423 [2024-07-25 10:12:31.459688] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:23:46.423 [2024-07-25 10:12:31.459767] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:23:46.423 [2024-07-25 10:12:31.459778] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:23:46.423 [2024-07-25 10:12:31.459791] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:23:46.423 [2024-07-25 10:12:31.459804] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:23:46.423 [2024-07-25 10:12:31.463475] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:23:46.423 [2024-07-25 10:12:31.463527] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1ca6540 0 00:23:46.423 [2024-07-25 10:12:31.470438] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:23:46.423 [2024-07-25 10:12:31.470464] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:23:46.423 [2024-07-25 10:12:31.470474] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:23:46.423 [2024-07-25 10:12:31.470480] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:23:46.423 [2024-07-25 10:12:31.470531] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:46.423 [2024-07-25 10:12:31.470543] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:46.423 [2024-07-25 10:12:31.470550] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ca6540) 00:23:46.423 [2024-07-25 10:12:31.470566] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:23:46.423 [2024-07-25 10:12:31.470592] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d063c0, cid 0, qid 0 00:23:46.423 [2024-07-25 10:12:31.478445] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:46.423 [2024-07-25 10:12:31.478463] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:46.423 [2024-07-25 10:12:31.478470] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:46.423 [2024-07-25 10:12:31.478477] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d063c0) on tqpair=0x1ca6540 00:23:46.423 [2024-07-25 10:12:31.478491] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:23:46.423 [2024-07-25 10:12:31.478502] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:23:46.423 [2024-07-25 10:12:31.478511] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] 
setting state to read vs wait for vs (no timeout) 00:23:46.423 [2024-07-25 10:12:31.478539] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:46.423 [2024-07-25 10:12:31.478548] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:46.423 [2024-07-25 10:12:31.478554] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ca6540) 00:23:46.423 [2024-07-25 10:12:31.478565] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.423 [2024-07-25 10:12:31.478588] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d063c0, cid 0, qid 0 00:23:46.423 [2024-07-25 10:12:31.478739] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:46.423 [2024-07-25 10:12:31.478754] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:46.423 [2024-07-25 10:12:31.478760] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:46.424 [2024-07-25 10:12:31.478767] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d063c0) on tqpair=0x1ca6540 00:23:46.424 [2024-07-25 10:12:31.478779] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:23:46.424 [2024-07-25 10:12:31.478792] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:23:46.424 [2024-07-25 10:12:31.478804] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:46.424 [2024-07-25 10:12:31.478811] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:46.424 [2024-07-25 10:12:31.478817] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ca6540) 00:23:46.424 [2024-07-25 10:12:31.478827] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.424 [2024-07-25 10:12:31.478848] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d063c0, cid 0, qid 0 00:23:46.424 [2024-07-25 10:12:31.478964] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:46.424 [2024-07-25 10:12:31.478975] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:46.424 [2024-07-25 10:12:31.478981] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:46.424 [2024-07-25 10:12:31.478987] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d063c0) on tqpair=0x1ca6540 00:23:46.424 [2024-07-25 10:12:31.478995] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:23:46.424 [2024-07-25 10:12:31.479008] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:23:46.424 [2024-07-25 10:12:31.479020] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:46.424 [2024-07-25 10:12:31.479026] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:46.424 [2024-07-25 10:12:31.479032] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ca6540) 00:23:46.424 [2024-07-25 10:12:31.479042] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.424 [2024-07-25 10:12:31.479061] 
nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d063c0, cid 0, qid 0 00:23:46.424 [2024-07-25 10:12:31.479166] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:46.424 [2024-07-25 10:12:31.479180] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:46.424 [2024-07-25 10:12:31.479187] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:46.424 [2024-07-25 10:12:31.479193] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d063c0) on tqpair=0x1ca6540 00:23:46.424 [2024-07-25 10:12:31.479201] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:23:46.424 [2024-07-25 10:12:31.479217] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:46.424 [2024-07-25 10:12:31.479225] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:46.424 [2024-07-25 10:12:31.479231] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ca6540) 00:23:46.424 [2024-07-25 10:12:31.479244] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.424 [2024-07-25 10:12:31.479265] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d063c0, cid 0, qid 0 00:23:46.424 [2024-07-25 10:12:31.479367] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:46.424 [2024-07-25 10:12:31.479381] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:46.424 [2024-07-25 10:12:31.479387] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:46.424 [2024-07-25 10:12:31.479393] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d063c0) on tqpair=0x1ca6540 00:23:46.424 [2024-07-25 10:12:31.479401] nvme_ctrlr.c:3873:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:23:46.424 [2024-07-25 10:12:31.479424] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:23:46.424 [2024-07-25 10:12:31.479446] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:23:46.424 [2024-07-25 10:12:31.479556] nvme_ctrlr.c:4066:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:23:46.424 [2024-07-25 10:12:31.479565] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:23:46.424 [2024-07-25 10:12:31.479578] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:46.424 [2024-07-25 10:12:31.479585] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:46.424 [2024-07-25 10:12:31.479591] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ca6540) 00:23:46.424 [2024-07-25 10:12:31.479601] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.424 [2024-07-25 10:12:31.479623] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d063c0, cid 0, qid 0 00:23:46.424 [2024-07-25 10:12:31.479749] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type 
= 5 00:23:46.424 [2024-07-25 10:12:31.479763] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:46.424 [2024-07-25 10:12:31.479770] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:46.424 [2024-07-25 10:12:31.479776] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d063c0) on tqpair=0x1ca6540 00:23:46.424 [2024-07-25 10:12:31.479784] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:23:46.424 [2024-07-25 10:12:31.479799] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:46.424 [2024-07-25 10:12:31.479808] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:46.424 [2024-07-25 10:12:31.479814] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ca6540) 00:23:46.424 [2024-07-25 10:12:31.479824] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.424 [2024-07-25 10:12:31.479844] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d063c0, cid 0, qid 0 00:23:46.424 [2024-07-25 10:12:31.479945] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:46.424 [2024-07-25 10:12:31.479959] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:46.424 [2024-07-25 10:12:31.479965] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:46.424 [2024-07-25 10:12:31.479971] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d063c0) on tqpair=0x1ca6540 00:23:46.424 [2024-07-25 10:12:31.479978] nvme_ctrlr.c:3908:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:23:46.424 [2024-07-25 10:12:31.479986] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:23:46.424 [2024-07-25 10:12:31.480004] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:23:46.424 [2024-07-25 10:12:31.480017] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:23:46.424 [2024-07-25 10:12:31.480031] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:46.424 [2024-07-25 10:12:31.480038] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ca6540) 00:23:46.424 [2024-07-25 10:12:31.480048] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.424 [2024-07-25 10:12:31.480068] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d063c0, cid 0, qid 0 00:23:46.424 [2024-07-25 10:12:31.480211] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:46.424 [2024-07-25 10:12:31.480226] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:46.424 [2024-07-25 10:12:31.480232] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:46.424 [2024-07-25 10:12:31.480238] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1ca6540): datao=0, datal=4096, cccid=0 00:23:46.424 [2024-07-25 10:12:31.480245] 
nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1d063c0) on tqpair(0x1ca6540): expected_datao=0, payload_size=4096
00:23:46.424 [2024-07-25 10:12:31.480253] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:23:46.424 [2024-07-25 10:12:31.480270] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:23:46.424 [2024-07-25 10:12:31.480279] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:23:46.424 [2024-07-25 10:12:31.521454] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:23:46.424 [2024-07-25 10:12:31.521473] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:23:46.424 [2024-07-25 10:12:31.521481] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:23:46.424 [2024-07-25 10:12:31.521488] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d063c0) on tqpair=0x1ca6540
00:23:46.424 [2024-07-25 10:12:31.521499] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295
00:23:46.424 [2024-07-25 10:12:31.521508] nvme_ctrlr.c:2061:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072
00:23:46.424 [2024-07-25 10:12:31.521515] nvme_ctrlr.c:2064:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001
00:23:46.424 [2024-07-25 10:12:31.521524] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16
00:23:46.424 [2024-07-25 10:12:31.521531] nvme_ctrlr.c:2103:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1
00:23:46.424 [2024-07-25 10:12:31.521539] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms)
00:23:46.424 [2024-07-25 10:12:31.521554] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms)
00:23:46.424 [2024-07-25 10:12:31.521572] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:23:46.424 [2024-07-25 10:12:31.521580] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:23:46.424 [2024-07-25 10:12:31.521587] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ca6540)
00:23:46.424 [2024-07-25 10:12:31.521598] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0
00:23:46.424 [2024-07-25 10:12:31.521621] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d063c0, cid 0, qid 0
00:23:46.424 [2024-07-25 10:12:31.521751] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:23:46.424 [2024-07-25 10:12:31.521766] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:23:46.424 [2024-07-25 10:12:31.521773] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:23:46.424 [2024-07-25 10:12:31.521783] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d063c0) on tqpair=0x1ca6540
00:23:46.424 [2024-07-25 10:12:31.521795] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:23:46.424 [2024-07-25 10:12:31.521802] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:23:46.424 [2024-07-25 10:12:31.521808] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1ca6540)
00:23:46.425 [2024-07-25 10:12:31.521817] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:23:46.425 [2024-07-25 10:12:31.521827] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:23:46.425 [2024-07-25 10:12:31.521833] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:23:46.425 [2024-07-25 10:12:31.521839] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1ca6540)
00:23:46.425 [2024-07-25 10:12:31.521847] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:23:46.425 [2024-07-25 10:12:31.521856] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:23:46.425 [2024-07-25 10:12:31.521862] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:23:46.425 [2024-07-25 10:12:31.521868] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1ca6540)
00:23:46.425 [2024-07-25 10:12:31.521876] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:23:46.425 [2024-07-25 10:12:31.521885] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:23:46.425 [2024-07-25 10:12:31.521891] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:23:46.425 [2024-07-25 10:12:31.521897] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ca6540)
00:23:46.425 [2024-07-25 10:12:31.521905] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:23:46.425 [2024-07-25 10:12:31.521913] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms)
00:23:46.425 [2024-07-25 10:12:31.521932] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms)
00:23:46.425 [2024-07-25 10:12:31.521944] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:23:46.425 [2024-07-25 10:12:31.521951] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1ca6540)
00:23:46.425 [2024-07-25 10:12:31.521961] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:46.425 [2024-07-25 10:12:31.521983] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d063c0, cid 0, qid 0
00:23:46.425 [2024-07-25 10:12:31.521993] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d06540, cid 1, qid 0
00:23:46.425 [2024-07-25 10:12:31.522000] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d066c0, cid 2, qid 0
00:23:46.425 [2024-07-25 10:12:31.522008] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d06840, cid 3, qid 0
00:23:46.425 [2024-07-25 10:12:31.522015] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d069c0, cid 4, qid 0
00:23:46.425 [2024-07-25 10:12:31.522153] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:23:46.425 [2024-07-25 10:12:31.522167] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:23:46.425 [2024-07-25 10:12:31.522174] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:23:46.425 [2024-07-25 10:12:31.522180] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d069c0) on tqpair=0x1ca6540
00:23:46.425 [2024-07-25 10:12:31.522188] nvme_ctrlr.c:3026:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us
00:23:46.425 [2024-07-25 10:12:31.522196] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout)
00:23:46.425 [2024-07-25 10:12:31.522217] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:23:46.425 [2024-07-25 10:12:31.522227] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1ca6540)
00:23:46.425 [2024-07-25 10:12:31.522237] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:46.425 [2024-07-25 10:12:31.522256] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d069c0, cid 4, qid 0
00:23:46.425 [2024-07-25 10:12:31.522368] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:23:46.425 [2024-07-25 10:12:31.522382] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:23:46.425 [2024-07-25 10:12:31.522389] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:23:46.425 [2024-07-25 10:12:31.522395] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1ca6540): datao=0, datal=4096, cccid=4
00:23:46.425 [2024-07-25 10:12:31.522402] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1d069c0) on tqpair(0x1ca6540): expected_datao=0, payload_size=4096
00:23:46.425 [2024-07-25 10:12:31.522424] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:23:46.425 [2024-07-25 10:12:31.522452] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:23:46.425 [2024-07-25 10:12:31.522461] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:23:46.425 [2024-07-25 10:12:31.522519] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:23:46.425 [2024-07-25 10:12:31.522534] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:23:46.425 [2024-07-25 10:12:31.522540] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:23:46.425 [2024-07-25 10:12:31.522547] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d069c0) on tqpair=0x1ca6540
00:23:46.425 [2024-07-25 10:12:31.522565] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state
00:23:46.425 [2024-07-25 10:12:31.522600] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:23:46.425 [2024-07-25 10:12:31.522610] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1ca6540)
00:23:46.425 [2024-07-25 10:12:31.522621] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:46.425 [2024-07-25 10:12:31.522632] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:23:46.425 [2024-07-25 10:12:31.522638] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:23:46.425 [2024-07-25 10:12:31.522645] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1ca6540)
00:23:46.425 [2024-07-25 10:12:31.522653] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000
00:23:46.425 [2024-07-25 10:12:31.522679] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d069c0, cid 4, qid 0
00:23:46.425 [2024-07-25 10:12:31.522690] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d06b40, cid 5, qid 0
00:23:46.425 [2024-07-25 10:12:31.522849] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:23:46.425 [2024-07-25 10:12:31.522863] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:23:46.425 [2024-07-25 10:12:31.522869] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:23:46.425 [2024-07-25 10:12:31.522875] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1ca6540): datao=0, datal=1024, cccid=4
00:23:46.425 [2024-07-25 10:12:31.522882] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1d069c0) on tqpair(0x1ca6540): expected_datao=0, payload_size=1024
00:23:46.425 [2024-07-25 10:12:31.522889] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:23:46.425 [2024-07-25 10:12:31.522898] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:23:46.425 [2024-07-25 10:12:31.522905] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:23:46.425 [2024-07-25 10:12:31.522913] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:23:46.425 [2024-07-25 10:12:31.522925] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:23:46.425 [2024-07-25 10:12:31.522932] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:23:46.425 [2024-07-25 10:12:31.522938] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d06b40) on tqpair=0x1ca6540
00:23:46.425 [2024-07-25 10:12:31.563551] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:23:46.425 [2024-07-25 10:12:31.563570] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:23:46.425 [2024-07-25 10:12:31.563577] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:23:46.425 [2024-07-25 10:12:31.563584] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d069c0) on tqpair=0x1ca6540
00:23:46.425 [2024-07-25 10:12:31.563601] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:23:46.425 [2024-07-25 10:12:31.563610] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1ca6540)
00:23:46.425 [2024-07-25 10:12:31.563621] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:46.425 [2024-07-25 10:12:31.563650] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d069c0, cid 4, qid 0
00:23:46.425 [2024-07-25 10:12:31.563808] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:23:46.425 [2024-07-25 10:12:31.563823] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:23:46.425 [2024-07-25 10:12:31.563829] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:23:46.425 [2024-07-25 10:12:31.563835] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1ca6540): datao=0, datal=3072, cccid=4
00:23:46.425 [2024-07-25 10:12:31.563842] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1d069c0) on tqpair(0x1ca6540): expected_datao=0, payload_size=3072
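The exchange above is the tail of admin-queue bring-up on the discovery controller: identify completes (MDTS limits transfers to 131072 bytes), SET FEATURES ASYNC EVENT CONFIGURATION (cdw10 0x0b) arms event reporting with four ASYNC EVENT REQUESTs (cid 0-3) left outstanding, and GET FEATURES KEEP ALIVE TIMER leads to "Sending keep alive every 5000000 us". A minimal sketch of reaching the same state through SPDK's public NVMe API follows; it is an illustration, not the test's code, and the explicit 10000 ms timeout is an assumption chosen to match the 5 s send interval above (the driver appears to send keep-alives at half the negotiated timeout).

#include "spdk/nvme.h"

/* Runs when one of the four outstanding ASYNC EVENT REQUESTs completes,
 * e.g. on a discovery log change notice. */
static void
aer_cb(void *arg, const struct spdk_nvme_cpl *cpl)
{
}

static struct spdk_nvme_ctrlr *
connect_admin_queue(const struct spdk_nvme_transport_id *trid)
{
    struct spdk_nvme_ctrlr_opts opts;
    struct spdk_nvme_ctrlr *ctrlr;

    spdk_nvme_ctrlr_get_default_ctrlr_opts(&opts, sizeof(opts));
    /* Assumed value: a 10 s timeout means KEEP ALIVE is sent every 5 s,
     * matching the "every 5000000 us" line in the log. */
    opts.keep_alive_timeout_ms = 10000;

    ctrlr = spdk_nvme_connect(trid, &opts, sizeof(opts));
    if (ctrlr != NULL) {
        spdk_nvme_ctrlr_register_aer_callback(ctrlr, aer_cb, NULL);
    }
    return ctrlr; /* NULL if the connect failed */
}

With this in place, polling spdk_nvme_ctrlr_process_admin_completions() drives both the keep-alive traffic and any AER completions.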
00:23:46.425 [2024-07-25 10:12:31.563849] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:23:46.425 [2024-07-25 10:12:31.563869] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:23:46.425 [2024-07-25 10:12:31.563878] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:23:46.688 [2024-07-25 10:12:31.604557] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:23:46.688 [2024-07-25 10:12:31.604578] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:23:46.688 [2024-07-25 10:12:31.604586] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:23:46.688 [2024-07-25 10:12:31.604593] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d069c0) on tqpair=0x1ca6540
00:23:46.688 [2024-07-25 10:12:31.604609] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:23:46.688 [2024-07-25 10:12:31.604618] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1ca6540)
00:23:46.688 [2024-07-25 10:12:31.604630] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:46.688 [2024-07-25 10:12:31.604660] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d069c0, cid 4, qid 0
00:23:46.688 [2024-07-25 10:12:31.604804] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:23:46.688 [2024-07-25 10:12:31.604817] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:23:46.688 [2024-07-25 10:12:31.604823] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:23:46.688 [2024-07-25 10:12:31.604829] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1ca6540): datao=0, datal=8, cccid=4
00:23:46.688 [2024-07-25 10:12:31.604836] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1d069c0) on tqpair(0x1ca6540): expected_datao=0, payload_size=8
00:23:46.688 [2024-07-25 10:12:31.604843] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:23:46.688 [2024-07-25 10:12:31.604853] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:23:46.688 [2024-07-25 10:12:31.604860] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:23:46.688 [2024-07-25 10:12:31.646444] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:23:46.688 [2024-07-25 10:12:31.646474] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:23:46.688 [2024-07-25 10:12:31.646486] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:23:46.688 [2024-07-25 10:12:31.646494] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d069c0) on tqpair=0x1ca6540
00:23:46.688 =====================================================
00:23:46.688 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery
00:23:46.688 =====================================================
00:23:46.688 Controller Capabilities/Features
00:23:46.688 ================================
00:23:46.688 Vendor ID: 0000
00:23:46.688 Subsystem Vendor ID: 0000
00:23:46.688 Serial Number: ....................
00:23:46.688 Model Number: ........................................
00:23:46.688 Firmware Version: 24.09
00:23:46.688 Recommended Arb Burst: 0
00:23:46.688 IEEE OUI Identifier: 00 00 00
00:23:46.688 Multi-path I/O
00:23:46.688 May have multiple subsystem ports: No
00:23:46.688 May have multiple controllers: No
00:23:46.688 Associated with SR-IOV VF: No
00:23:46.688 Max Data Transfer Size: 131072
00:23:46.688 Max Number of Namespaces: 0
00:23:46.688 Max Number of I/O Queues: 1024
00:23:46.688 NVMe Specification Version (VS): 1.3
00:23:46.688 NVMe Specification Version (Identify): 1.3
00:23:46.688 Maximum Queue Entries: 128
00:23:46.688 Contiguous Queues Required: Yes
00:23:46.688 Arbitration Mechanisms Supported
00:23:46.688 Weighted Round Robin: Not Supported
00:23:46.688 Vendor Specific: Not Supported
00:23:46.689 Reset Timeout: 15000 ms
00:23:46.689 Doorbell Stride: 4 bytes
00:23:46.689 NVM Subsystem Reset: Not Supported
00:23:46.689 Command Sets Supported
00:23:46.689 NVM Command Set: Supported
00:23:46.689 Boot Partition: Not Supported
00:23:46.689 Memory Page Size Minimum: 4096 bytes
00:23:46.689 Memory Page Size Maximum: 4096 bytes
00:23:46.689 Persistent Memory Region: Not Supported
00:23:46.689 Optional Asynchronous Events Supported
00:23:46.689 Namespace Attribute Notices: Not Supported
00:23:46.689 Firmware Activation Notices: Not Supported
00:23:46.689 ANA Change Notices: Not Supported
00:23:46.689 PLE Aggregate Log Change Notices: Not Supported
00:23:46.689 LBA Status Info Alert Notices: Not Supported
00:23:46.689 EGE Aggregate Log Change Notices: Not Supported
00:23:46.689 Normal NVM Subsystem Shutdown event: Not Supported
00:23:46.689 Zone Descriptor Change Notices: Not Supported
00:23:46.689 Discovery Log Change Notices: Supported
00:23:46.689 Controller Attributes
00:23:46.689 128-bit Host Identifier: Not Supported
00:23:46.689 Non-Operational Permissive Mode: Not Supported
00:23:46.689 NVM Sets: Not Supported
00:23:46.689 Read Recovery Levels: Not Supported
00:23:46.689 Endurance Groups: Not Supported
00:23:46.689 Predictable Latency Mode: Not Supported
00:23:46.689 Traffic Based Keep ALive: Not Supported
00:23:46.689 Namespace Granularity: Not Supported
00:23:46.689 SQ Associations: Not Supported
00:23:46.689 UUID List: Not Supported
00:23:46.689 Multi-Domain Subsystem: Not Supported
00:23:46.689 Fixed Capacity Management: Not Supported
00:23:46.689 Variable Capacity Management: Not Supported
00:23:46.689 Delete Endurance Group: Not Supported
00:23:46.689 Delete NVM Set: Not Supported
00:23:46.689 Extended LBA Formats Supported: Not Supported
00:23:46.689 Flexible Data Placement Supported: Not Supported
00:23:46.689
00:23:46.689 Controller Memory Buffer Support
00:23:46.689 ================================
00:23:46.689 Supported: No
00:23:46.689
00:23:46.689 Persistent Memory Region Support
00:23:46.689 ================================
00:23:46.689 Supported: No
00:23:46.689
00:23:46.689 Admin Command Set Attributes
00:23:46.689 ============================
00:23:46.689 Security Send/Receive: Not Supported
00:23:46.689 Format NVM: Not Supported
00:23:46.689 Firmware Activate/Download: Not Supported
00:23:46.689 Namespace Management: Not Supported
00:23:46.689 Device Self-Test: Not Supported
00:23:46.689 Directives: Not Supported
00:23:46.689 NVMe-MI: Not Supported
00:23:46.689 Virtualization Management: Not Supported
00:23:46.689 Doorbell Buffer Config: Not Supported
00:23:46.689 Get LBA Status Capability: Not Supported
00:23:46.689 Command & Feature Lockdown Capability: Not Supported
00:23:46.689 Abort Command Limit: 1
00:23:46.689 Async Event Request Limit: 4
00:23:46.689 Number of Firmware Slots: N/A
00:23:46.689 Firmware Slot 1 Read-Only: N/A
00:23:46.689 Firmware Activation Without Reset: N/A
00:23:46.689 Multiple Update Detection Support: N/A
00:23:46.689 Firmware Update Granularity: No Information Provided
00:23:46.689 Per-Namespace SMART Log: No
00:23:46.689 Asymmetric Namespace Access Log Page: Not Supported
00:23:46.689 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery
00:23:46.689 Command Effects Log Page: Not Supported
00:23:46.689 Get Log Page Extended Data: Supported
00:23:46.689 Telemetry Log Pages: Not Supported
00:23:46.689 Persistent Event Log Pages: Not Supported
00:23:46.689 Supported Log Pages Log Page: May Support
00:23:46.689 Commands Supported & Effects Log Page: Not Supported
00:23:46.689 Feature Identifiers & Effects Log Page:May Support
00:23:46.689 NVMe-MI Commands & Effects Log Page: May Support
00:23:46.689 Data Area 4 for Telemetry Log: Not Supported
00:23:46.689 Error Log Page Entries Supported: 128
00:23:46.689 Keep Alive: Not Supported
00:23:46.689
00:23:46.689 NVM Command Set Attributes
00:23:46.689 ==========================
00:23:46.689 Submission Queue Entry Size
00:23:46.689 Max: 1
00:23:46.689 Min: 1
00:23:46.689 Completion Queue Entry Size
00:23:46.689 Max: 1
00:23:46.689 Min: 1
00:23:46.689 Number of Namespaces: 0
00:23:46.689 Compare Command: Not Supported
00:23:46.689 Write Uncorrectable Command: Not Supported
00:23:46.689 Dataset Management Command: Not Supported
00:23:46.689 Write Zeroes Command: Not Supported
00:23:46.689 Set Features Save Field: Not Supported
00:23:46.689 Reservations: Not Supported
00:23:46.689 Timestamp: Not Supported
00:23:46.689 Copy: Not Supported
00:23:46.689 Volatile Write Cache: Not Present
00:23:46.689 Atomic Write Unit (Normal): 1
00:23:46.689 Atomic Write Unit (PFail): 1
00:23:46.689 Atomic Compare & Write Unit: 1
00:23:46.689 Fused Compare & Write: Supported
00:23:46.689 Scatter-Gather List
00:23:46.689 SGL Command Set: Supported
00:23:46.689 SGL Keyed: Supported
00:23:46.689 SGL Bit Bucket Descriptor: Not Supported
00:23:46.689 SGL Metadata Pointer: Not Supported
00:23:46.689 Oversized SGL: Not Supported
00:23:46.689 SGL Metadata Address: Not Supported
00:23:46.689 SGL Offset: Supported
00:23:46.689 Transport SGL Data Block: Not Supported
00:23:46.689 Replay Protected Memory Block: Not Supported
00:23:46.689
00:23:46.689 Firmware Slot Information
00:23:46.689 =========================
00:23:46.689 Active slot: 0
00:23:46.689
00:23:46.689
00:23:46.689 Error Log
00:23:46.689 =========
00:23:46.689
00:23:46.689 Active Namespaces
00:23:46.689 =================
00:23:46.689 Discovery Log Page
00:23:46.689 ==================
00:23:46.689 Generation Counter: 2
00:23:46.689 Number of Records: 2
00:23:46.689 Record Format: 0
00:23:46.689
00:23:46.689 Discovery Log Entry 0
00:23:46.689 ----------------------
00:23:46.689 Transport Type: 3 (TCP)
00:23:46.689 Address Family: 1 (IPv4)
00:23:46.689 Subsystem Type: 3 (Current Discovery Subsystem)
00:23:46.689 Entry Flags:
00:23:46.689 Duplicate Returned Information: 1
00:23:46.689 Explicit Persistent Connection Support for Discovery: 1
00:23:46.689 Transport Requirements:
00:23:46.689 Secure Channel: Not Required
00:23:46.689 Port ID: 0 (0x0000)
00:23:46.689 Controller ID: 65535 (0xffff)
00:23:46.689 Admin Max SQ Size: 128
00:23:46.689 Transport Service Identifier: 4420
00:23:46.689 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery
00:23:46.689 Transport Address: 10.0.0.2
00:23:46.689 Discovery Log Entry 1
00:23:46.689 ----------------------
00:23:46.689 Transport Type: 3 (TCP)
00:23:46.689 Address Family: 1 (IPv4)
00:23:46.689 Subsystem Type: 2 (NVM Subsystem)
00:23:46.689 Entry Flags:
00:23:46.689 Duplicate Returned Information: 0
00:23:46.689 Explicit Persistent Connection Support for Discovery: 0
00:23:46.689 Transport Requirements:
00:23:46.689 Secure Channel: Not Required
00:23:46.689 Port ID: 0 (0x0000)
00:23:46.689 Controller ID: 65535 (0xffff)
00:23:46.689 Admin Max SQ Size: 128
00:23:46.689 Transport Service Identifier: 4420
00:23:46.689 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1
00:23:46.689 Transport Address: 10.0.0.2
[2024-07-25 10:12:31.646600] nvme_ctrlr.c:4361:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD
00:23:46.689 [2024-07-25 10:12:31.646622] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d063c0) on tqpair=0x1ca6540
00:23:46.689 [2024-07-25 10:12:31.646633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:46.689 [2024-07-25 10:12:31.646642] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d06540) on tqpair=0x1ca6540
00:23:46.689 [2024-07-25 10:12:31.646650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:46.689 [2024-07-25 10:12:31.646658] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d066c0) on tqpair=0x1ca6540
00:23:46.689 [2024-07-25 10:12:31.646665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:46.689 [2024-07-25 10:12:31.646673] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d06840) on tqpair=0x1ca6540
00:23:46.689 [2024-07-25 10:12:31.646681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:46.689 [2024-07-25 10:12:31.646698] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:23:46.689 [2024-07-25 10:12:31.646707] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:23:46.689 [2024-07-25 10:12:31.646729] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ca6540)
00:23:46.690 [2024-07-25 10:12:31.646740] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:46.690 [2024-07-25 10:12:31.646765] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d06840, cid 3, qid 0
00:23:46.690 [2024-07-25 10:12:31.646867] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:23:46.690 [2024-07-25 10:12:31.646881] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:23:46.690 [2024-07-25 10:12:31.646888] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:23:46.690 [2024-07-25 10:12:31.646895] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d06840) on tqpair=0x1ca6540
00:23:46.690 [2024-07-25 10:12:31.646905] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:23:46.690 [2024-07-25 10:12:31.646913] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:23:46.690 [2024-07-25 10:12:31.646919] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ca6540)
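The Discovery Log Page just printed (Generation Counter 2, two records) was fetched with the three GET LOG PAGE commands seen earlier. cdw10 carries the page ID (0x70) in its low byte and the 0-based dword count in its upper bits: 0x00ff0070 reads 256 dwords (the 1024-byte header), 0x02ff0070 reads 768 dwords (the full 3072-byte page: header plus two 1024-byte entries), and 0x00010070 re-reads 2 dwords (the 8-byte generation counter, to confirm the page did not change mid-read). A sketch of the first step against SPDK's public API follows; the types come from spdk/nvmf_spec.h and the callback chaining that drives steps two and three is omitted.

#include "spdk/nvme.h"
#include "spdk/nvmf_spec.h"

/* Step one of the discovery read: fetch the fixed-size log header.
 * hdr->numrec then gives the number of 1024-byte
 * spdk_nvmf_discovery_log_page_entry records to fetch, and hdr->genctr
 * is re-read afterwards to detect a concurrent page change. */
static int
read_discovery_header(struct spdk_nvme_ctrlr *ctrlr,
                      struct spdk_nvmf_discovery_log_page *hdr,
                      spdk_nvme_cmd_cb cb_fn, void *cb_arg)
{
    return spdk_nvme_ctrlr_cmd_get_log_page(ctrlr, SPDK_NVME_LOG_DISCOVERY,
                                            SPDK_NVME_GLOBAL_NS_TAG,
                                            hdr, sizeof(*hdr), 0,
                                            cb_fn, cb_arg);
}

As with any admin command, the completion is reaped by polling spdk_nvme_ctrlr_process_admin_completions() on the same controller.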
00:23:46.690 [2024-07-25 10:12:31.646929] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:46.690 [2024-07-25 10:12:31.646954] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d06840, cid 3, qid 0
00:23:46.690 [2024-07-25 10:12:31.647069] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:23:46.690 [2024-07-25 10:12:31.647084] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:23:46.690 [2024-07-25 10:12:31.647090] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:23:46.690 [2024-07-25 10:12:31.647097] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d06840) on tqpair=0x1ca6540
00:23:46.690 [2024-07-25 10:12:31.647105] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us
00:23:46.690 [2024-07-25 10:12:31.647113] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms
00:23:46.690 [2024-07-25 10:12:31.647128] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:23:46.690 [2024-07-25 10:12:31.647137] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:23:46.690 [2024-07-25 10:12:31.647143] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ca6540)
00:23:46.690 [2024-07-25 10:12:31.647157] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:46.690 [2024-07-25 10:12:31.647178] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d06840, cid 3, qid 0
00:23:46.690 [2024-07-25 10:12:31.647295] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:23:46.690 [2024-07-25 10:12:31.647306] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:23:46.690 [2024-07-25 10:12:31.647313] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:23:46.690 [2024-07-25 10:12:31.647319] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d06840) on tqpair=0x1ca6540
00:23:46.690 [2024-07-25 10:12:31.647335] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:23:46.690 [2024-07-25 10:12:31.647344] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:23:46.690 [2024-07-25 10:12:31.647350] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ca6540)
00:23:46.690 [2024-07-25 10:12:31.647360] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:46.690 [2024-07-25 10:12:31.647379] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d06840, cid 3, qid 0
00:23:46.690 [2024-07-25 10:12:31.647517] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:23:46.690 [2024-07-25 10:12:31.647531] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:23:46.690 [2024-07-25 10:12:31.647538] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:23:46.690 [2024-07-25 10:12:31.647545] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d06840) on tqpair=0x1ca6540
00:23:46.690 [2024-07-25 10:12:31.647561] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:23:46.690 [2024-07-25 10:12:31.647570] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:23:46.690 [2024-07-25 10:12:31.647576] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ca6540)
00:23:46.690 [2024-07-25 10:12:31.647587] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:46.690 [2024-07-25 10:12:31.647608] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d06840, cid 3, qid 0
00:23:46.690 [2024-07-25 10:12:31.647736] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:23:46.690 [2024-07-25 10:12:31.647762] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:23:46.690 [2024-07-25 10:12:31.647768] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:23:46.690 [2024-07-25 10:12:31.647774] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d06840) on tqpair=0x1ca6540
00:23:46.690 [2024-07-25 10:12:31.647790] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:23:46.690 [2024-07-25 10:12:31.647799] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:23:46.690 [2024-07-25 10:12:31.647805] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ca6540)
00:23:46.690 [2024-07-25 10:12:31.647815] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:46.690 [2024-07-25 10:12:31.647834] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d06840, cid 3, qid 0
00:23:46.690 [2024-07-25 10:12:31.647938] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:23:46.690 [2024-07-25 10:12:31.647952] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:23:46.690 [2024-07-25 10:12:31.647959] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:23:46.690 [2024-07-25 10:12:31.647965] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d06840) on tqpair=0x1ca6540
00:23:46.690 [2024-07-25 10:12:31.647980] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:23:46.690 [2024-07-25 10:12:31.647989] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:23:46.690 [2024-07-25 10:12:31.647995] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ca6540)
00:23:46.690 [2024-07-25 10:12:31.648005] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:46.690 [2024-07-25 10:12:31.648029] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d06840, cid 3, qid 0
00:23:46.690 [2024-07-25 10:12:31.648141] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:23:46.690 [2024-07-25 10:12:31.648153] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:23:46.690 [2024-07-25 10:12:31.648159] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:23:46.690 [2024-07-25 10:12:31.648165] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d06840) on tqpair=0x1ca6540
00:23:46.690 [2024-07-25 10:12:31.648180] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:23:46.690 [2024-07-25 10:12:31.648189] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:23:46.690 [2024-07-25 10:12:31.648195] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ca6540)
00:23:46.690 [2024-07-25 10:12:31.648205] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:46.690 [2024-07-25 10:12:31.648225] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d06840, cid 3, qid 0
00:23:46.690 [2024-07-25 10:12:31.648342] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:23:46.690 [2024-07-25 10:12:31.648354] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:23:46.690 [2024-07-25 10:12:31.648360] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:23:46.690 [2024-07-25 10:12:31.648366] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d06840) on tqpair=0x1ca6540
00:23:46.690 [2024-07-25 10:12:31.648381] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:23:46.690 [2024-07-25 10:12:31.648390] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:23:46.690 [2024-07-25 10:12:31.648396] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ca6540)
00:23:46.690 [2024-07-25 10:12:31.648406] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:46.690 [2024-07-25 10:12:31.648451] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d06840, cid 3, qid 0
00:23:46.690 [2024-07-25 10:12:31.648576] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:23:46.690 [2024-07-25 10:12:31.648591] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:23:46.690 [2024-07-25 10:12:31.648597] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:23:46.690 [2024-07-25 10:12:31.648604] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d06840) on tqpair=0x1ca6540
00:23:46.690 [2024-07-25 10:12:31.648621] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:23:46.690 [2024-07-25 10:12:31.648629] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:23:46.690 [2024-07-25 10:12:31.648636] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ca6540)
00:23:46.690 [2024-07-25 10:12:31.648646] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:46.690 [2024-07-25 10:12:31.648667] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d06840, cid 3, qid 0
00:23:46.690 [2024-07-25 10:12:31.648809] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:23:46.690 [2024-07-25 10:12:31.648820] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:23:46.690 [2024-07-25 10:12:31.648827] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:23:46.690 [2024-07-25 10:12:31.648833] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d06840) on tqpair=0x1ca6540
00:23:46.690 [2024-07-25 10:12:31.648849] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:23:46.690 [2024-07-25 10:12:31.648857] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:23:46.690 [2024-07-25 10:12:31.648863] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ca6540)
00:23:46.690 [2024-07-25 10:12:31.648873] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:46.690 [2024-07-25 10:12:31.648892] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d06840, cid 3, qid 0
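The FABRIC PROPERTY SET that opens this block (following "RTD3E = 0 us" and "shutdown timeout = 10000 ms") writes CC.SHN to request a normal shutdown of the discovery controller; every FABRIC PROPERTY GET after it is the host polling CSTS until SHST reports completion. A spec-level sketch of that handshake follows, using the register layouts from spdk/nvme_spec.h; prop_read/prop_write are hypothetical stand-ins for the fabrics property-get/set commands, not a real SPDK API.

#include <stddef.h>
#include <stdint.h>
#include "spdk/nvme_spec.h"

/* prop_read/prop_write: hypothetical hooks standing in for the
 * PROPERTY GET / PROPERTY SET capsules seen in the log. */
static void
shutdown_controller(uint32_t (*prop_read)(uint32_t ofst),
                    void (*prop_write)(uint32_t ofst, uint32_t val))
{
    union spdk_nvme_cc_register cc;
    union spdk_nvme_csts_register csts;

    cc.raw = prop_read(offsetof(struct spdk_nvme_registers, cc.raw));
    cc.bits.shn = SPDK_NVME_SHN_NORMAL; /* request an orderly shutdown */
    prop_write(offsetof(struct spdk_nvme_registers, cc.raw), cc.raw);

    /* The repeated PROPERTY GET exchanges in the log are this poll loop. */
    do {
        csts.raw = prop_read(offsetof(struct spdk_nvme_registers, csts.raw));
    } while (csts.bits.shst != SPDK_NVME_SHST_COMPLETE);
}

In the log the loop terminates a few lines further down, when nvme_ctrlr_shutdown_poll_async reports "shutdown complete in 8 milliseconds".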
00:23:46.690 [2024-07-25 10:12:31.648998] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:23:46.690 [2024-07-25 10:12:31.649012] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:23:46.690 [2024-07-25 10:12:31.649019] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:23:46.690 [2024-07-25 10:12:31.649025] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d06840) on tqpair=0x1ca6540
00:23:46.690 [2024-07-25 10:12:31.649041] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:23:46.690 [2024-07-25 10:12:31.649050] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:23:46.690 [2024-07-25 10:12:31.649056] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ca6540)
00:23:46.690 [2024-07-25 10:12:31.649065] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:46.690 [2024-07-25 10:12:31.649085] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d06840, cid 3, qid 0
00:23:46.690 [2024-07-25 10:12:31.649192] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:23:46.691 [2024-07-25 10:12:31.649206] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:23:46.691 [2024-07-25 10:12:31.649213] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:23:46.691 [2024-07-25 10:12:31.649219] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d06840) on tqpair=0x1ca6540
00:23:46.691 [2024-07-25 10:12:31.649235] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:23:46.691 [2024-07-25 10:12:31.649243] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:23:46.691 [2024-07-25 10:12:31.649249] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ca6540)
00:23:46.691 [2024-07-25 10:12:31.649259] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:46.691 [2024-07-25 10:12:31.649279] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d06840, cid 3, qid 0
00:23:46.691 [2024-07-25 10:12:31.649379] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:23:46.691 [2024-07-25 10:12:31.649394] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:23:46.691 [2024-07-25 10:12:31.649401] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:23:46.691 [2024-07-25 10:12:31.649407] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d06840) on tqpair=0x1ca6540
00:23:46.691 [2024-07-25 10:12:31.649446] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:23:46.691 [2024-07-25 10:12:31.649457] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:23:46.691 [2024-07-25 10:12:31.649463] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ca6540)
00:23:46.691 [2024-07-25 10:12:31.649473] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:46.691 [2024-07-25 10:12:31.649494] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d06840, cid 3, qid 0
00:23:46.691 [2024-07-25 10:12:31.649600] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:23:46.691 [2024-07-25 10:12:31.649615] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:23:46.691 [2024-07-25 10:12:31.649621] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:23:46.691 [2024-07-25 10:12:31.649628] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d06840) on tqpair=0x1ca6540
00:23:46.691 [2024-07-25 10:12:31.649644] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:23:46.691 [2024-07-25 10:12:31.649653] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:23:46.691 [2024-07-25 10:12:31.649659] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ca6540)
00:23:46.691 [2024-07-25 10:12:31.649669] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:46.691 [2024-07-25 10:12:31.649690] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d06840, cid 3, qid 0
00:23:46.691 [2024-07-25 10:12:31.649808] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:23:46.691 [2024-07-25 10:12:31.649822] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:23:46.691 [2024-07-25 10:12:31.649829] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:23:46.691 [2024-07-25 10:12:31.649835] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d06840) on tqpair=0x1ca6540
00:23:46.691 [2024-07-25 10:12:31.649851] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:23:46.691 [2024-07-25 10:12:31.649860] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:23:46.691 [2024-07-25 10:12:31.649866] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ca6540)
00:23:46.691 [2024-07-25 10:12:31.649876] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:46.691 [2024-07-25 10:12:31.649896] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d06840, cid 3, qid 0
00:23:46.691 [2024-07-25 10:12:31.650007] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:23:46.691 [2024-07-25 10:12:31.650019] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:23:46.691 [2024-07-25 10:12:31.650025] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:23:46.691 [2024-07-25 10:12:31.650031] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d06840) on tqpair=0x1ca6540
00:23:46.691 [2024-07-25 10:12:31.650046] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:23:46.691 [2024-07-25 10:12:31.650055] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:23:46.691 [2024-07-25 10:12:31.650061] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ca6540)
00:23:46.691 [2024-07-25 10:12:31.650071] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:46.691 [2024-07-25 10:12:31.650090] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d06840, cid 3, qid 0
00:23:46.691 [2024-07-25 10:12:31.650189] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:23:46.691 [2024-07-25 10:12:31.650203] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:23:46.691 [2024-07-25 10:12:31.650209] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:23:46.691 [2024-07-25 10:12:31.650216] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d06840) on tqpair=0x1ca6540
00:23:46.691 [2024-07-25 10:12:31.650231] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:23:46.691 [2024-07-25 10:12:31.650240] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:23:46.691 [2024-07-25 10:12:31.650246] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ca6540)
00:23:46.691 [2024-07-25 10:12:31.650256] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:46.691 [2024-07-25 10:12:31.650276] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d06840, cid 3, qid 0
00:23:46.691 [2024-07-25 10:12:31.650386] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:23:46.691 [2024-07-25 10:12:31.650397] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:23:46.691 [2024-07-25 10:12:31.650403] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:23:46.691 [2024-07-25 10:12:31.650425] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d06840) on tqpair=0x1ca6540
00:23:46.691 [2024-07-25 10:12:31.650451] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:23:46.691 [2024-07-25 10:12:31.650460] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:23:46.691 [2024-07-25 10:12:31.650466] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ca6540)
00:23:46.691 [2024-07-25 10:12:31.650476] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:46.691 [2024-07-25 10:12:31.650497] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d06840, cid 3, qid 0
00:23:46.691 [2024-07-25 10:12:31.650621] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:23:46.691 [2024-07-25 10:12:31.650636] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:23:46.691 [2024-07-25 10:12:31.650646] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:23:46.691 [2024-07-25 10:12:31.650654] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d06840) on tqpair=0x1ca6540
00:23:46.691 [2024-07-25 10:12:31.650671] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:23:46.691 [2024-07-25 10:12:31.650679] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:23:46.691 [2024-07-25 10:12:31.650686] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ca6540)
00:23:46.691 [2024-07-25 10:12:31.650696] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:46.691 [2024-07-25 10:12:31.650731] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d06840, cid 3, qid 0
00:23:46.691 [2024-07-25 10:12:31.650835] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:23:46.691 [2024-07-25 10:12:31.650849] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:23:46.691 [2024-07-25 10:12:31.650856] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:23:46.691 [2024-07-25 10:12:31.650863] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d06840) on tqpair=0x1ca6540
00:23:46.691 [2024-07-25 10:12:31.650878] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:23:46.691 [2024-07-25 10:12:31.650887] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:23:46.691 [2024-07-25 10:12:31.650893] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ca6540)
00:23:46.691 [2024-07-25 10:12:31.650903] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:46.691 [2024-07-25 10:12:31.650923] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d06840, cid 3, qid 0
00:23:46.691 [2024-07-25 10:12:31.651033] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:23:46.691 [2024-07-25 10:12:31.651045] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:23:46.691 [2024-07-25 10:12:31.651051] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:23:46.691 [2024-07-25 10:12:31.651058] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d06840) on tqpair=0x1ca6540
00:23:46.691 [2024-07-25 10:12:31.651073] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:23:46.691 [2024-07-25 10:12:31.651081] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:23:46.691 [2024-07-25 10:12:31.651087] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ca6540)
00:23:46.691 [2024-07-25 10:12:31.651097] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:46.691 [2024-07-25 10:12:31.651117] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d06840, cid 3, qid 0
00:23:46.691 [2024-07-25 10:12:31.651219] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:23:46.691 [2024-07-25 10:12:31.651233] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:23:46.691 [2024-07-25 10:12:31.651240] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:23:46.691 [2024-07-25 10:12:31.651246] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d06840) on tqpair=0x1ca6540
00:23:46.691 [2024-07-25 10:12:31.651262] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:23:46.691 [2024-07-25 10:12:31.651271] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:23:46.691 [2024-07-25 10:12:31.651277] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ca6540)
00:23:46.691 [2024-07-25 10:12:31.651287] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:46.691 [2024-07-25 10:12:31.651307] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d06840, cid 3, qid 0
00:23:46.691 [2024-07-25 10:12:31.651422] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:23:46.691 [2024-07-25 10:12:31.655450] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:23:46.691 [2024-07-25 10:12:31.655459] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:23:46.691 [2024-07-25 10:12:31.655470] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d06840) on tqpair=0x1ca6540
00:23:46.691 [2024-07-25 10:12:31.655489] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:23:46.692 [2024-07-25 10:12:31.655498] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:23:46.692 [2024-07-25 10:12:31.655505] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1ca6540)
00:23:46.692 [2024-07-25 10:12:31.655515] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:46.692 [2024-07-25 10:12:31.655537] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1d06840, cid 3, qid 0
00:23:46.692 [2024-07-25 10:12:31.655652] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:23:46.692 [2024-07-25 10:12:31.655668] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:23:46.692 [2024-07-25 10:12:31.655674] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:23:46.692 [2024-07-25 10:12:31.655681] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1d06840) on tqpair=0x1ca6540
00:23:46.692 [2024-07-25 10:12:31.655694] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 8 milliseconds
00:23:46.692
00:23:46.692 10:12:31 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all
00:23:46.692 [2024-07-25 10:12:31.694547] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization...
00:23:46.692 [2024-07-25 10:12:31.694606] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid498350 ]
00:23:46.692 EAL: No free 2048 kB hugepages reported on node 1
00:23:46.692 [2024-07-25 10:12:31.733126] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout)
00:23:46.692 [2024-07-25 10:12:31.733173] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2
00:23:46.692 [2024-07-25 10:12:31.733183] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420
00:23:46.692 [2024-07-25 10:12:31.733199] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null)
00:23:46.692 [2024-07-25 10:12:31.733211] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix
00:23:46.692 [2024-07-25 10:12:31.733495] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout)
00:23:46.692 [2024-07-25 10:12:31.733532] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1c6a540 0
00:23:46.692 [2024-07-25 10:12:31.740440] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1
00:23:46.692 [2024-07-25 10:12:31.740463] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1
00:23:46.692 [2024-07-25 10:12:31.740472] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0
00:23:46.692 [2024-07-25 10:12:31.740478] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0
00:23:46.692 [2024-07-25 10:12:31.740515] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:23:46.692 [2024-07-25 10:12:31.740526] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:23:46.692 [2024-07-25 10:12:31.740532] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c6a540)
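Here the discovery controller finishes shutting down ("shutdown complete in 8 milliseconds") and the test script launches spdk_nvme_identify against the NVM subsystem advertised in Discovery Log Entry 1. The tool turns the -r string into a transport ID and connects; the "pdu type = 1" above is the ICResp answering the host's ICReq, with header and data digests disabled on both sides. A sketch of that front end using the public API follows; it is an illustration, not the tool's source.

#include <string.h>
#include "spdk/nvme.h"

static struct spdk_nvme_ctrlr *
connect_from_trid_string(void)
{
    struct spdk_nvme_transport_id trid;

    memset(&trid, 0, sizeof(trid));
    /* Same string as the -r argument on the command line above. */
    if (spdk_nvme_transport_id_parse(&trid,
            "trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 "
            "subnqn:nqn.2016-06.io.spdk:cnode1") != 0) {
        return NULL;
    }
    /* Kicks off the admin-queue connect: TCP socket, ICReq/ICResp, then
     * the FABRIC CONNECT capsule printed next in the log. */
    return spdk_nvme_connect(&trid, NULL, 0);
}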
00:23:46.692 [2024-07-25 10:12:31.740547] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400
00:23:46.692 [2024-07-25 10:12:31.740572] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cca3c0, cid 0, qid 0
00:23:46.692 [2024-07-25 10:12:31.747441] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:23:46.692 [2024-07-25 10:12:31.747459] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:23:46.692 [2024-07-25 10:12:31.747466] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:23:46.692 [2024-07-25 10:12:31.747473] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cca3c0) on tqpair=0x1c6a540
00:23:46.692 [2024-07-25 10:12:31.747487] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001
00:23:46.692 [2024-07-25 10:12:31.747497] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout)
00:23:46.692 [2024-07-25 10:12:31.747506] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout)
00:23:46.692 [2024-07-25 10:12:31.747524] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:23:46.692 [2024-07-25 10:12:31.747533] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:23:46.692 [2024-07-25 10:12:31.747539] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c6a540)
00:23:46.692 [2024-07-25 10:12:31.747550] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:46.692 [2024-07-25 10:12:31.747574] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cca3c0, cid 0, qid 0
00:23:46.692 [2024-07-25 10:12:31.747740] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:23:46.692 [2024-07-25 10:12:31.747752] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:23:46.692 [2024-07-25 10:12:31.747759] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:23:46.692 [2024-07-25 10:12:31.747765] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cca3c0) on tqpair=0x1c6a540
00:23:46.692 [2024-07-25 10:12:31.747776] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout)
00:23:46.692 [2024-07-25 10:12:31.747790] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout)
00:23:46.692 [2024-07-25 10:12:31.747801] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:23:46.692 [2024-07-25 10:12:31.747808] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:23:46.692 [2024-07-25 10:12:31.747814] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c6a540)
00:23:46.692 [2024-07-25 10:12:31.747824] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:46.692 [2024-07-25 10:12:31.747845] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cca3c0, cid 0, qid 0
00:23:46.692 [2024-07-25 10:12:31.747963] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:23:46.692 [2024-07-25 10:12:31.747978] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:23:46.692 [2024-07-25 10:12:31.747984] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:23:46.692 [2024-07-25 10:12:31.747990] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cca3c0) on tqpair=0x1c6a540
00:23:46.692 [2024-07-25 10:12:31.747998] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout)
00:23:46.692 [2024-07-25 10:12:31.748011] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms)
00:23:46.692 [2024-07-25 10:12:31.748022] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:23:46.692 [2024-07-25 10:12:31.748029] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:23:46.692 [2024-07-25 10:12:31.748035] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c6a540)
00:23:46.692 [2024-07-25 10:12:31.748045] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:46.692 [2024-07-25 10:12:31.748065] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cca3c0, cid 0, qid 0
00:23:46.692 [2024-07-25 10:12:31.748166] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:23:46.692 [2024-07-25 10:12:31.748181] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:23:46.692 [2024-07-25 10:12:31.748187] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:23:46.692 [2024-07-25 10:12:31.748193] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cca3c0) on tqpair=0x1c6a540
00:23:46.692 [2024-07-25 10:12:31.748201] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms)
00:23:46.692 [2024-07-25 10:12:31.748217] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:23:46.692 [2024-07-25 10:12:31.748225] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:23:46.692 [2024-07-25 10:12:31.748231] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c6a540)
00:23:46.692 [2024-07-25 10:12:31.748241] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:46.692 [2024-07-25 10:12:31.748261] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cca3c0, cid 0, qid 0
00:23:46.692 [2024-07-25 10:12:31.748372] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:23:46.692 [2024-07-25 10:12:31.748384] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:23:46.692 [2024-07-25 10:12:31.748390] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:23:46.692 [2024-07-25 10:12:31.748396] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cca3c0) on tqpair=0x1c6a540
00:23:46.692 [2024-07-25 10:12:31.748403] nvme_ctrlr.c:3873:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0
00:23:46.692 [2024-07-25 10:12:31.748426] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms)
00:23:46.692 [2024-07-25 10:12:31.748450] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms)
00:23:46.692 [2024-07-25 10:12:31.748560] nvme_ctrlr.c:4066:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1
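The state transitions above are the generic controller-enable sequence from the NVMe spec, driven here over fabrics property commands: read CC ("check en"), wait for CSTS.RDY = 0 while the controller is disabled, write CC.EN = 1 ("Setting CC.EN = 1"), then wait for CSTS.RDY = 1 before IDENTIFY proceeds. Written out against the same register definitions, and the same hypothetical prop_read/prop_write hooks, as the shutdown sketch earlier:

#include <stddef.h>
#include <stdint.h>
#include "spdk/nvme_spec.h"

static void
enable_controller(uint32_t (*prop_read)(uint32_t ofst),
                  void (*prop_write)(uint32_t ofst, uint32_t val))
{
    union spdk_nvme_cc_register cc;
    union spdk_nvme_csts_register csts;

    cc.raw = prop_read(offsetof(struct spdk_nvme_registers, cc.raw));
    if (cc.bits.en) { /* "check en": a live controller is disabled first */
        cc.bits.en = 0;
        prop_write(offsetof(struct spdk_nvme_registers, cc.raw), cc.raw);
    }
    /* "disable and wait for CSTS.RDY = 0" (already true in the log above) */
    do {
        csts.raw = prop_read(offsetof(struct spdk_nvme_registers, csts.raw));
    } while (csts.bits.rdy != 0);

    cc.bits.en = 1; /* "Setting CC.EN = 1" */
    prop_write(offsetof(struct spdk_nvme_registers, cc.raw), cc.raw);
    /* "wait for CSTS.RDY = 1"; IDENTIFY and AER setup follow. */
    do {
        csts.raw = prop_read(offsetof(struct spdk_nvme_registers, csts.raw));
    } while (csts.bits.rdy != 1);
}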
00:23:46.692 [2024-07-25 10:12:31.748566] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms)
00:23:46.692 [2024-07-25 10:12:31.748578] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:23:46.692 [2024-07-25 10:12:31.748586] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:23:46.692 [2024-07-25 10:12:31.748592] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c6a540)
00:23:46.692 [2024-07-25 10:12:31.748602] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:46.692 [2024-07-25 10:12:31.748624] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cca3c0, cid 0, qid 0
00:23:46.692 [2024-07-25 10:12:31.748753] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:23:46.692 [2024-07-25 10:12:31.748768] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:23:46.692 [2024-07-25 10:12:31.748775] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:23:46.692 [2024-07-25 10:12:31.748781] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cca3c0) on tqpair=0x1c6a540
00:23:46.692 [2024-07-25 10:12:31.748788] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms)
00:23:46.692 [2024-07-25 10:12:31.748805] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:23:46.692 [2024-07-25 10:12:31.748813] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:23:46.692 [2024-07-25 10:12:31.748819] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c6a540)
00:23:46.692 [2024-07-25 10:12:31.748829] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:46.693 [2024-07-25 10:12:31.748849] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cca3c0, cid 0, qid 0
00:23:46.693 [2024-07-25 10:12:31.748963] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:23:46.693 [2024-07-25 10:12:31.748975] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:23:46.693 [2024-07-25 10:12:31.748981] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:23:46.693 [2024-07-25 10:12:31.748988] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cca3c0) on tqpair=0x1c6a540
00:23:46.693 [2024-07-25 10:12:31.748995] nvme_ctrlr.c:3908:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready
00:23:46.693 [2024-07-25 10:12:31.749002] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms)
00:23:46.693 [2024-07-25 10:12:31.749015] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout)
00:23:46.693 [2024-07-25 10:12:31.749028] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms)
00:23:46.693 [2024-07-25 10:12:31.749040] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:23:46.693 [2024-07-25 10:12:31.749048] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c6a540)
00:23:46.693 [2024-07-25 10:12:31.749058] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:46.693 [2024-07-25 10:12:31.749078] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cca3c0, cid 0, qid 0
00:23:46.693 [2024-07-25 10:12:31.749214] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:23:46.693 [2024-07-25 10:12:31.749229] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:23:46.693 [2024-07-25 10:12:31.749235] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:23:46.693 [2024-07-25 10:12:31.749241] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1c6a540): datao=0, datal=4096, cccid=0
00:23:46.693 [2024-07-25 10:12:31.749248] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1cca3c0) on tqpair(0x1c6a540): expected_datao=0, payload_size=4096
00:23:46.693 [2024-07-25 10:12:31.749255] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:23:46.693 [2024-07-25 10:12:31.749272] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:23:46.693 [2024-07-25 10:12:31.749280] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:23:46.693 [2024-07-25 10:12:31.793441] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:23:46.693 [2024-07-25 10:12:31.793459] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:23:46.693 [2024-07-25 10:12:31.793468] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:23:46.693 [2024-07-25 10:12:31.793475] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cca3c0) on tqpair=0x1c6a540
00:23:46.693 [2024-07-25 10:12:31.793485] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295
00:23:46.693 [2024-07-25 10:12:31.793494] nvme_ctrlr.c:2061:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072
00:23:46.693 [2024-07-25 10:12:31.793501] nvme_ctrlr.c:2064:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001
00:23:46.693 [2024-07-25 10:12:31.793508] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16
00:23:46.693 [2024-07-25 10:12:31.793515] nvme_ctrlr.c:2103:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1
00:23:46.693 [2024-07-25 10:12:31.793523] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms)
00:23:46.693 [2024-07-25 10:12:31.793537] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms)
00:23:46.693 [2024-07-25 10:12:31.793554] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:23:46.693 [2024-07-25 10:12:31.793562] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:23:46.693 [2024-07-25 10:12:31.793572] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c6a540)
00:23:46.693 [2024-07-25 10:12:31.793583] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0
00:23:46.693 [2024-07-25 10:12:31.793607] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cca3c0, cid 
0, qid 0 00:23:46.693 [2024-07-25 10:12:31.793739] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:46.693 [2024-07-25 10:12:31.793751] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:46.693 [2024-07-25 10:12:31.793758] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:46.693 [2024-07-25 10:12:31.793764] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cca3c0) on tqpair=0x1c6a540 00:23:46.693 [2024-07-25 10:12:31.793773] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:46.693 [2024-07-25 10:12:31.793780] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:46.693 [2024-07-25 10:12:31.793786] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1c6a540) 00:23:46.693 [2024-07-25 10:12:31.793796] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:46.693 [2024-07-25 10:12:31.793805] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:46.693 [2024-07-25 10:12:31.793811] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:46.693 [2024-07-25 10:12:31.793817] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1c6a540) 00:23:46.693 [2024-07-25 10:12:31.793825] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:46.693 [2024-07-25 10:12:31.793834] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:46.693 [2024-07-25 10:12:31.793841] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:46.693 [2024-07-25 10:12:31.793847] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1c6a540) 00:23:46.693 [2024-07-25 10:12:31.793855] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:46.693 [2024-07-25 10:12:31.793864] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:46.693 [2024-07-25 10:12:31.793870] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:46.693 [2024-07-25 10:12:31.793876] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c6a540) 00:23:46.693 [2024-07-25 10:12:31.793884] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:46.693 [2024-07-25 10:12:31.793892] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:23:46.693 [2024-07-25 10:12:31.793910] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:23:46.693 [2024-07-25 10:12:31.793922] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:46.693 [2024-07-25 10:12:31.793929] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1c6a540) 00:23:46.693 [2024-07-25 10:12:31.793938] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.693 [2024-07-25 10:12:31.793961] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cca3c0, cid 0, qid 0 00:23:46.693 [2024-07-25 10:12:31.793971] 
nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cca540, cid 1, qid 0 00:23:46.693 [2024-07-25 10:12:31.793979] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cca6c0, cid 2, qid 0 00:23:46.693 [2024-07-25 10:12:31.793986] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cca840, cid 3, qid 0 00:23:46.693 [2024-07-25 10:12:31.793993] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cca9c0, cid 4, qid 0 00:23:46.693 [2024-07-25 10:12:31.794159] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:46.693 [2024-07-25 10:12:31.794174] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:46.693 [2024-07-25 10:12:31.794180] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:46.693 [2024-07-25 10:12:31.794186] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cca9c0) on tqpair=0x1c6a540 00:23:46.693 [2024-07-25 10:12:31.794194] nvme_ctrlr.c:3026:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:23:46.693 [2024-07-25 10:12:31.794202] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:23:46.693 [2024-07-25 10:12:31.794219] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:23:46.693 [2024-07-25 10:12:31.794230] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:23:46.694 [2024-07-25 10:12:31.794240] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:46.694 [2024-07-25 10:12:31.794247] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:46.694 [2024-07-25 10:12:31.794253] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1c6a540) 00:23:46.694 [2024-07-25 10:12:31.794263] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:23:46.694 [2024-07-25 10:12:31.794283] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cca9c0, cid 4, qid 0 00:23:46.694 [2024-07-25 10:12:31.794387] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:46.694 [2024-07-25 10:12:31.794401] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:46.694 [2024-07-25 10:12:31.794422] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:46.694 [2024-07-25 10:12:31.794437] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cca9c0) on tqpair=0x1c6a540 00:23:46.694 [2024-07-25 10:12:31.794504] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:23:46.694 [2024-07-25 10:12:31.794524] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:23:46.694 [2024-07-25 10:12:31.794538] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:46.694 [2024-07-25 10:12:31.794545] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1c6a540) 00:23:46.694 [2024-07-25 10:12:31.794556] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 
cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.694 [2024-07-25 10:12:31.794578] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cca9c0, cid 4, qid 0 00:23:46.694 [2024-07-25 10:12:31.794743] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:46.694 [2024-07-25 10:12:31.794755] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:46.694 [2024-07-25 10:12:31.794761] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:46.694 [2024-07-25 10:12:31.794767] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1c6a540): datao=0, datal=4096, cccid=4 00:23:46.694 [2024-07-25 10:12:31.794774] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1cca9c0) on tqpair(0x1c6a540): expected_datao=0, payload_size=4096 00:23:46.694 [2024-07-25 10:12:31.794781] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:46.694 [2024-07-25 10:12:31.794791] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:46.694 [2024-07-25 10:12:31.794798] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:46.694 [2024-07-25 10:12:31.794826] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:46.694 [2024-07-25 10:12:31.794836] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:46.694 [2024-07-25 10:12:31.794842] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:46.694 [2024-07-25 10:12:31.794852] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cca9c0) on tqpair=0x1c6a540 00:23:46.694 [2024-07-25 10:12:31.794872] nvme_ctrlr.c:4697:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:23:46.694 [2024-07-25 10:12:31.794886] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:23:46.694 [2024-07-25 10:12:31.794903] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:23:46.694 [2024-07-25 10:12:31.794915] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:46.694 [2024-07-25 10:12:31.794922] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1c6a540) 00:23:46.694 [2024-07-25 10:12:31.794932] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.694 [2024-07-25 10:12:31.794953] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cca9c0, cid 4, qid 0 00:23:46.694 [2024-07-25 10:12:31.795088] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:46.694 [2024-07-25 10:12:31.795103] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:46.694 [2024-07-25 10:12:31.795109] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:46.694 [2024-07-25 10:12:31.795115] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1c6a540): datao=0, datal=4096, cccid=4 00:23:46.694 [2024-07-25 10:12:31.795122] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1cca9c0) on tqpair(0x1c6a540): expected_datao=0, payload_size=4096 00:23:46.694 [2024-07-25 10:12:31.795129] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:46.694 [2024-07-25 10:12:31.795138] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: 
*DEBUG*: enter 00:23:46.694 [2024-07-25 10:12:31.795146] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:46.694 [2024-07-25 10:12:31.795165] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:46.694 [2024-07-25 10:12:31.795175] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:46.694 [2024-07-25 10:12:31.795182] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:46.694 [2024-07-25 10:12:31.795188] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cca9c0) on tqpair=0x1c6a540 00:23:46.694 [2024-07-25 10:12:31.795208] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:23:46.694 [2024-07-25 10:12:31.795226] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:23:46.694 [2024-07-25 10:12:31.795239] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:46.694 [2024-07-25 10:12:31.795246] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1c6a540) 00:23:46.694 [2024-07-25 10:12:31.795256] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.694 [2024-07-25 10:12:31.795277] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cca9c0, cid 4, qid 0 00:23:46.694 [2024-07-25 10:12:31.795395] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:46.694 [2024-07-25 10:12:31.795424] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:46.694 [2024-07-25 10:12:31.795441] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:46.694 [2024-07-25 10:12:31.795448] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1c6a540): datao=0, datal=4096, cccid=4 00:23:46.694 [2024-07-25 10:12:31.795455] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1cca9c0) on tqpair(0x1c6a540): expected_datao=0, payload_size=4096 00:23:46.694 [2024-07-25 10:12:31.795463] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:46.694 [2024-07-25 10:12:31.795473] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:46.694 [2024-07-25 10:12:31.795480] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:46.694 [2024-07-25 10:12:31.795510] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:46.694 [2024-07-25 10:12:31.795521] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:46.694 [2024-07-25 10:12:31.795527] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:46.694 [2024-07-25 10:12:31.795534] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cca9c0) on tqpair=0x1c6a540 00:23:46.694 [2024-07-25 10:12:31.795546] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:23:46.694 [2024-07-25 10:12:31.795560] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:23:46.694 [2024-07-25 10:12:31.795574] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:23:46.694 [2024-07-25 
10:12:31.795586] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:23:46.694 [2024-07-25 10:12:31.795595] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:23:46.694 [2024-07-25 10:12:31.795603] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:23:46.694 [2024-07-25 10:12:31.795611] nvme_ctrlr.c:3114:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:23:46.694 [2024-07-25 10:12:31.795619] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:23:46.694 [2024-07-25 10:12:31.795627] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:23:46.694 [2024-07-25 10:12:31.795645] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:46.694 [2024-07-25 10:12:31.795653] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1c6a540) 00:23:46.694 [2024-07-25 10:12:31.795663] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.694 [2024-07-25 10:12:31.795674] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:46.694 [2024-07-25 10:12:31.795681] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:46.694 [2024-07-25 10:12:31.795687] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1c6a540) 00:23:46.694 [2024-07-25 10:12:31.795696] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:23:46.694 [2024-07-25 10:12:31.795736] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cca9c0, cid 4, qid 0 00:23:46.694 [2024-07-25 10:12:31.795747] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ccab40, cid 5, qid 0 00:23:46.694 [2024-07-25 10:12:31.795921] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:46.694 [2024-07-25 10:12:31.795932] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:46.694 [2024-07-25 10:12:31.795939] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:46.694 [2024-07-25 10:12:31.795945] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cca9c0) on tqpair=0x1c6a540 00:23:46.694 [2024-07-25 10:12:31.795954] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:46.694 [2024-07-25 10:12:31.795963] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:46.694 [2024-07-25 10:12:31.795968] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:46.694 [2024-07-25 10:12:31.795974] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ccab40) on tqpair=0x1c6a540 00:23:46.694 [2024-07-25 10:12:31.795989] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:46.694 [2024-07-25 10:12:31.795997] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1c6a540) 00:23:46.694 [2024-07-25 10:12:31.796010] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 
cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.694 [2024-07-25 10:12:31.796030] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ccab40, cid 5, qid 0 00:23:46.694 [2024-07-25 10:12:31.796144] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:46.694 [2024-07-25 10:12:31.796158] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:46.694 [2024-07-25 10:12:31.796165] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:46.695 [2024-07-25 10:12:31.796171] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ccab40) on tqpair=0x1c6a540 00:23:46.695 [2024-07-25 10:12:31.796187] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:46.695 [2024-07-25 10:12:31.796195] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1c6a540) 00:23:46.695 [2024-07-25 10:12:31.796205] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.695 [2024-07-25 10:12:31.796225] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ccab40, cid 5, qid 0 00:23:46.695 [2024-07-25 10:12:31.796325] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:46.695 [2024-07-25 10:12:31.796339] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:46.695 [2024-07-25 10:12:31.796346] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:46.695 [2024-07-25 10:12:31.796352] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ccab40) on tqpair=0x1c6a540 00:23:46.695 [2024-07-25 10:12:31.796368] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:46.695 [2024-07-25 10:12:31.796376] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1c6a540) 00:23:46.695 [2024-07-25 10:12:31.796386] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.695 [2024-07-25 10:12:31.796419] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ccab40, cid 5, qid 0 00:23:46.695 [2024-07-25 10:12:31.796557] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:46.695 [2024-07-25 10:12:31.796570] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:46.695 [2024-07-25 10:12:31.796576] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:46.695 [2024-07-25 10:12:31.796583] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ccab40) on tqpair=0x1c6a540 00:23:46.695 [2024-07-25 10:12:31.796606] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:46.695 [2024-07-25 10:12:31.796617] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1c6a540) 00:23:46.695 [2024-07-25 10:12:31.796627] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.695 [2024-07-25 10:12:31.796639] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:46.695 [2024-07-25 10:12:31.796646] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1c6a540) 00:23:46.695 [2024-07-25 10:12:31.796656] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 
nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.695 [2024-07-25 10:12:31.796667] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:46.695 [2024-07-25 10:12:31.796674] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x1c6a540) 00:23:46.695 [2024-07-25 10:12:31.796683] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.695 [2024-07-25 10:12:31.796695] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:46.695 [2024-07-25 10:12:31.796702] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1c6a540) 00:23:46.695 [2024-07-25 10:12:31.796711] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.695 [2024-07-25 10:12:31.796738] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ccab40, cid 5, qid 0 00:23:46.695 [2024-07-25 10:12:31.796750] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cca9c0, cid 4, qid 0 00:23:46.695 [2024-07-25 10:12:31.796772] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ccacc0, cid 6, qid 0 00:23:46.695 [2024-07-25 10:12:31.796780] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ccae40, cid 7, qid 0 00:23:46.695 [2024-07-25 10:12:31.796980] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:46.695 [2024-07-25 10:12:31.796995] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:46.695 [2024-07-25 10:12:31.797001] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:46.695 [2024-07-25 10:12:31.797007] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1c6a540): datao=0, datal=8192, cccid=5 00:23:46.695 [2024-07-25 10:12:31.797014] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1ccab40) on tqpair(0x1c6a540): expected_datao=0, payload_size=8192 00:23:46.695 [2024-07-25 10:12:31.797021] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:46.695 [2024-07-25 10:12:31.797110] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:46.695 [2024-07-25 10:12:31.797120] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:46.695 [2024-07-25 10:12:31.797129] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:46.695 [2024-07-25 10:12:31.797137] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:46.695 [2024-07-25 10:12:31.797143] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:46.695 [2024-07-25 10:12:31.797149] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1c6a540): datao=0, datal=512, cccid=4 00:23:46.695 [2024-07-25 10:12:31.797156] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1cca9c0) on tqpair(0x1c6a540): expected_datao=0, payload_size=512 00:23:46.695 [2024-07-25 10:12:31.797163] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:46.695 [2024-07-25 10:12:31.797171] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:46.695 [2024-07-25 10:12:31.797178] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:46.695 [2024-07-25 10:12:31.797186] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: 
*DEBUG*: pdu type = 7 00:23:46.695 [2024-07-25 10:12:31.797194] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:46.695 [2024-07-25 10:12:31.797200] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:46.695 [2024-07-25 10:12:31.797206] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1c6a540): datao=0, datal=512, cccid=6 00:23:46.695 [2024-07-25 10:12:31.797213] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1ccacc0) on tqpair(0x1c6a540): expected_datao=0, payload_size=512 00:23:46.695 [2024-07-25 10:12:31.797219] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:46.695 [2024-07-25 10:12:31.797228] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:46.695 [2024-07-25 10:12:31.797235] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:46.695 [2024-07-25 10:12:31.797243] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:23:46.695 [2024-07-25 10:12:31.797251] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:23:46.695 [2024-07-25 10:12:31.797257] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:23:46.695 [2024-07-25 10:12:31.797262] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1c6a540): datao=0, datal=4096, cccid=7 00:23:46.695 [2024-07-25 10:12:31.797269] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1ccae40) on tqpair(0x1c6a540): expected_datao=0, payload_size=4096 00:23:46.695 [2024-07-25 10:12:31.797276] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:46.695 [2024-07-25 10:12:31.797285] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:23:46.695 [2024-07-25 10:12:31.797292] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:23:46.695 [2024-07-25 10:12:31.797303] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:46.695 [2024-07-25 10:12:31.797315] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:46.695 [2024-07-25 10:12:31.797321] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:46.695 [2024-07-25 10:12:31.797327] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ccab40) on tqpair=0x1c6a540 00:23:46.695 [2024-07-25 10:12:31.797344] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:46.695 [2024-07-25 10:12:31.797354] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:46.695 [2024-07-25 10:12:31.797360] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:46.695 [2024-07-25 10:12:31.797366] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cca9c0) on tqpair=0x1c6a540 00:23:46.695 [2024-07-25 10:12:31.797380] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:46.695 [2024-07-25 10:12:31.797389] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:46.695 [2024-07-25 10:12:31.797395] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:46.695 [2024-07-25 10:12:31.797402] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ccacc0) on tqpair=0x1c6a540 00:23:46.695 [2024-07-25 10:12:31.801435] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:46.695 [2024-07-25 10:12:31.801452] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:46.695 [2024-07-25 10:12:31.801459] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
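The records above trace the host driver's complete NVMe-oF initialization state machine: a Fabrics PROPERTY SET writes CC.EN = 1, PROPERTY GETs poll until CSTS.RDY = 1, and the admin queue then carries IDENTIFY controller (CNS 01h), SET FEATURES ASYNC EVENT CONFIGURATION plus four ASYNC EVENT REQUESTs, keep-alive negotiation (one KEEP ALIVE every 5,000,000 us against the target's 10,000 ms granularity), SET FEATURES NUMBER OF QUEUES, the namespace scan (CNS 02h, 00h, 03h), and the four GET LOG PAGE probes, ending in the ready state. As a rough sketch (not part of this test, and with error handling elided), that whole exchange is what a single spdk_nvme_connect() call performs against this log's target:

    #include <stdio.h>

    #include "spdk/env.h"
    #include "spdk/nvme.h"

    int main(void)
    {
        struct spdk_env_opts env_opts;
        struct spdk_nvme_transport_id trid = {0};
        struct spdk_nvme_ctrlr *ctrlr;
        const struct spdk_nvme_ctrlr_data *cdata;

        spdk_env_opts_init(&env_opts);
        if (spdk_env_init(&env_opts) < 0) {
            return 1;
        }

        /* Same target this log connects to. */
        spdk_nvme_transport_id_parse(&trid,
            "trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 "
            "subnqn:nqn.2016-06.io.spdk:cnode1");

        /*
         * spdk_nvme_connect() drives the state machine traced above: it
         * writes CC.EN = 1 and polls CSTS.RDY via Fabrics property
         * commands, issues IDENTIFY (CNS 01h), arms the four AERs,
         * negotiates the keep-alive timer and queue counts, and scans
         * the active namespace list.
         */
        ctrlr = spdk_nvme_connect(&trid, NULL, 0);
        if (ctrlr == NULL) {
            return 1;
        }

        /* The limits in the trace (CNTLID 0x0001, 128 KiB MDTS) live here. */
        cdata = spdk_nvme_ctrlr_get_data(ctrlr);
        printf("cntlid 0x%04x, max xfer %u bytes\n",
               cdata->cntlid, spdk_nvme_ctrlr_get_max_xfer_size(ctrlr));

        spdk_nvme_detach(ctrlr);
        return 0;
    }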
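The "Namespace 1 was added" record corresponds to the IDENTIFY active-namespace-list (CNS 02h) response, and the per-namespace IDENTIFY (CNS 00h) and descriptor-list (CNS 03h) commands that follow populate the NGUID, EUI64, and UUID printed in the dump below. A hedged sketch of walking that cached state after connect (the helper name is illustrative, not part of the test):

    #include <inttypes.h>
    #include <stdio.h>

    #include "spdk/nvme.h"

    /* Illustrative helper: iterates the active namespaces discovered by
     * the CNS 02h scan traced above (a single NSID 1 on this target). */
    static void walk_active_namespaces(struct spdk_nvme_ctrlr *ctrlr)
    {
        uint32_t nsid;

        for (nsid = spdk_nvme_ctrlr_get_first_active_ns(ctrlr); nsid != 0;
             nsid = spdk_nvme_ctrlr_get_next_active_ns(ctrlr, nsid)) {
            struct spdk_nvme_ns *ns = spdk_nvme_ctrlr_get_ns(ctrlr, nsid);

            /* 131072 LBAs of 512 bytes in the identify dump below. */
            printf("nsid %u: %" PRIu64 " LBAs, %u-byte sectors\n", nsid,
                   spdk_nvme_ns_get_num_sectors(ns),
                   spdk_nvme_ns_get_sector_size(ns));
        }
    }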
00:23:46.695 [2024-07-25 10:12:31.801465] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ccae40) on tqpair=0x1c6a540 00:23:46.695 ===================================================== 00:23:46.695 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:46.695 ===================================================== 00:23:46.695 Controller Capabilities/Features 00:23:46.695 ================================ 00:23:46.695 Vendor ID: 8086 00:23:46.695 Subsystem Vendor ID: 8086 00:23:46.695 Serial Number: SPDK00000000000001 00:23:46.695 Model Number: SPDK bdev Controller 00:23:46.695 Firmware Version: 24.09 00:23:46.695 Recommended Arb Burst: 6 00:23:46.695 IEEE OUI Identifier: e4 d2 5c 00:23:46.695 Multi-path I/O 00:23:46.695 May have multiple subsystem ports: Yes 00:23:46.695 May have multiple controllers: Yes 00:23:46.695 Associated with SR-IOV VF: No 00:23:46.695 Max Data Transfer Size: 131072 00:23:46.695 Max Number of Namespaces: 32 00:23:46.695 Max Number of I/O Queues: 127 00:23:46.695 NVMe Specification Version (VS): 1.3 00:23:46.695 NVMe Specification Version (Identify): 1.3 00:23:46.695 Maximum Queue Entries: 128 00:23:46.695 Contiguous Queues Required: Yes 00:23:46.695 Arbitration Mechanisms Supported 00:23:46.695 Weighted Round Robin: Not Supported 00:23:46.695 Vendor Specific: Not Supported 00:23:46.695 Reset Timeout: 15000 ms 00:23:46.695 Doorbell Stride: 4 bytes 00:23:46.695 NVM Subsystem Reset: Not Supported 00:23:46.695 Command Sets Supported 00:23:46.695 NVM Command Set: Supported 00:23:46.695 Boot Partition: Not Supported 00:23:46.695 Memory Page Size Minimum: 4096 bytes 00:23:46.695 Memory Page Size Maximum: 4096 bytes 00:23:46.695 Persistent Memory Region: Not Supported 00:23:46.695 Optional Asynchronous Events Supported 00:23:46.695 Namespace Attribute Notices: Supported 00:23:46.695 Firmware Activation Notices: Not Supported 00:23:46.695 ANA Change Notices: Not Supported 00:23:46.695 PLE Aggregate Log Change Notices: Not Supported 00:23:46.695 LBA Status Info Alert Notices: Not Supported 00:23:46.695 EGE Aggregate Log Change Notices: Not Supported 00:23:46.696 Normal NVM Subsystem Shutdown event: Not Supported 00:23:46.696 Zone Descriptor Change Notices: Not Supported 00:23:46.696 Discovery Log Change Notices: Not Supported 00:23:46.696 Controller Attributes 00:23:46.696 128-bit Host Identifier: Supported 00:23:46.696 Non-Operational Permissive Mode: Not Supported 00:23:46.696 NVM Sets: Not Supported 00:23:46.696 Read Recovery Levels: Not Supported 00:23:46.696 Endurance Groups: Not Supported 00:23:46.696 Predictable Latency Mode: Not Supported 00:23:46.696 Traffic Based Keep ALive: Not Supported 00:23:46.696 Namespace Granularity: Not Supported 00:23:46.696 SQ Associations: Not Supported 00:23:46.696 UUID List: Not Supported 00:23:46.696 Multi-Domain Subsystem: Not Supported 00:23:46.696 Fixed Capacity Management: Not Supported 00:23:46.696 Variable Capacity Management: Not Supported 00:23:46.696 Delete Endurance Group: Not Supported 00:23:46.696 Delete NVM Set: Not Supported 00:23:46.696 Extended LBA Formats Supported: Not Supported 00:23:46.696 Flexible Data Placement Supported: Not Supported 00:23:46.696 00:23:46.696 Controller Memory Buffer Support 00:23:46.696 ================================ 00:23:46.696 Supported: No 00:23:46.696 00:23:46.696 Persistent Memory Region Support 00:23:46.696 ================================ 00:23:46.696 Supported: No 00:23:46.696 00:23:46.696 Admin Command Set Attributes 00:23:46.696 
============================ 00:23:46.696 Security Send/Receive: Not Supported 00:23:46.696 Format NVM: Not Supported 00:23:46.696 Firmware Activate/Download: Not Supported 00:23:46.696 Namespace Management: Not Supported 00:23:46.696 Device Self-Test: Not Supported 00:23:46.696 Directives: Not Supported 00:23:46.696 NVMe-MI: Not Supported 00:23:46.696 Virtualization Management: Not Supported 00:23:46.696 Doorbell Buffer Config: Not Supported 00:23:46.696 Get LBA Status Capability: Not Supported 00:23:46.696 Command & Feature Lockdown Capability: Not Supported 00:23:46.696 Abort Command Limit: 4 00:23:46.696 Async Event Request Limit: 4 00:23:46.696 Number of Firmware Slots: N/A 00:23:46.696 Firmware Slot 1 Read-Only: N/A 00:23:46.696 Firmware Activation Without Reset: N/A 00:23:46.696 Multiple Update Detection Support: N/A 00:23:46.696 Firmware Update Granularity: No Information Provided 00:23:46.696 Per-Namespace SMART Log: No 00:23:46.696 Asymmetric Namespace Access Log Page: Not Supported 00:23:46.696 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:23:46.696 Command Effects Log Page: Supported 00:23:46.696 Get Log Page Extended Data: Supported 00:23:46.696 Telemetry Log Pages: Not Supported 00:23:46.696 Persistent Event Log Pages: Not Supported 00:23:46.696 Supported Log Pages Log Page: May Support 00:23:46.696 Commands Supported & Effects Log Page: Not Supported 00:23:46.696 Feature Identifiers & Effects Log Page:May Support 00:23:46.696 NVMe-MI Commands & Effects Log Page: May Support 00:23:46.696 Data Area 4 for Telemetry Log: Not Supported 00:23:46.696 Error Log Page Entries Supported: 128 00:23:46.696 Keep Alive: Supported 00:23:46.696 Keep Alive Granularity: 10000 ms 00:23:46.696 00:23:46.696 NVM Command Set Attributes 00:23:46.696 ========================== 00:23:46.696 Submission Queue Entry Size 00:23:46.696 Max: 64 00:23:46.696 Min: 64 00:23:46.696 Completion Queue Entry Size 00:23:46.696 Max: 16 00:23:46.696 Min: 16 00:23:46.696 Number of Namespaces: 32 00:23:46.696 Compare Command: Supported 00:23:46.696 Write Uncorrectable Command: Not Supported 00:23:46.696 Dataset Management Command: Supported 00:23:46.696 Write Zeroes Command: Supported 00:23:46.696 Set Features Save Field: Not Supported 00:23:46.696 Reservations: Supported 00:23:46.696 Timestamp: Not Supported 00:23:46.696 Copy: Supported 00:23:46.696 Volatile Write Cache: Present 00:23:46.696 Atomic Write Unit (Normal): 1 00:23:46.696 Atomic Write Unit (PFail): 1 00:23:46.696 Atomic Compare & Write Unit: 1 00:23:46.696 Fused Compare & Write: Supported 00:23:46.696 Scatter-Gather List 00:23:46.696 SGL Command Set: Supported 00:23:46.696 SGL Keyed: Supported 00:23:46.696 SGL Bit Bucket Descriptor: Not Supported 00:23:46.696 SGL Metadata Pointer: Not Supported 00:23:46.696 Oversized SGL: Not Supported 00:23:46.696 SGL Metadata Address: Not Supported 00:23:46.696 SGL Offset: Supported 00:23:46.696 Transport SGL Data Block: Not Supported 00:23:46.696 Replay Protected Memory Block: Not Supported 00:23:46.696 00:23:46.696 Firmware Slot Information 00:23:46.696 ========================= 00:23:46.696 Active slot: 1 00:23:46.696 Slot 1 Firmware Revision: 24.09 00:23:46.696 00:23:46.696 00:23:46.696 Commands Supported and Effects 00:23:46.696 ============================== 00:23:46.696 Admin Commands 00:23:46.696 -------------- 00:23:46.696 Get Log Page (02h): Supported 00:23:46.696 Identify (06h): Supported 00:23:46.696 Abort (08h): Supported 00:23:46.696 Set Features (09h): Supported 00:23:46.696 Get Features (0Ah): Supported 
00:23:46.696 Asynchronous Event Request (0Ch): Supported 00:23:46.696 Keep Alive (18h): Supported 00:23:46.696 I/O Commands 00:23:46.696 ------------ 00:23:46.696 Flush (00h): Supported LBA-Change 00:23:46.696 Write (01h): Supported LBA-Change 00:23:46.696 Read (02h): Supported 00:23:46.696 Compare (05h): Supported 00:23:46.696 Write Zeroes (08h): Supported LBA-Change 00:23:46.696 Dataset Management (09h): Supported LBA-Change 00:23:46.696 Copy (19h): Supported LBA-Change 00:23:46.696 00:23:46.696 Error Log 00:23:46.696 ========= 00:23:46.696 00:23:46.696 Arbitration 00:23:46.696 =========== 00:23:46.696 Arbitration Burst: 1 00:23:46.696 00:23:46.696 Power Management 00:23:46.696 ================ 00:23:46.696 Number of Power States: 1 00:23:46.696 Current Power State: Power State #0 00:23:46.696 Power State #0: 00:23:46.696 Max Power: 0.00 W 00:23:46.696 Non-Operational State: Operational 00:23:46.696 Entry Latency: Not Reported 00:23:46.696 Exit Latency: Not Reported 00:23:46.696 Relative Read Throughput: 0 00:23:46.696 Relative Read Latency: 0 00:23:46.696 Relative Write Throughput: 0 00:23:46.696 Relative Write Latency: 0 00:23:46.696 Idle Power: Not Reported 00:23:46.696 Active Power: Not Reported 00:23:46.696 Non-Operational Permissive Mode: Not Supported 00:23:46.696 00:23:46.696 Health Information 00:23:46.696 ================== 00:23:46.696 Critical Warnings: 00:23:46.696 Available Spare Space: OK 00:23:46.696 Temperature: OK 00:23:46.696 Device Reliability: OK 00:23:46.696 Read Only: No 00:23:46.696 Volatile Memory Backup: OK 00:23:46.696 Current Temperature: 0 Kelvin (-273 Celsius) 00:23:46.696 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:23:46.696 Available Spare: 0% 00:23:46.696 Available Spare Threshold: 0% 00:23:46.696 Life Percentage Used:[2024-07-25 10:12:31.801578] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:46.696 [2024-07-25 10:12:31.801589] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1c6a540) 00:23:46.696 [2024-07-25 10:12:31.801600] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.696 [2024-07-25 10:12:31.801623] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1ccae40, cid 7, qid 0 00:23:46.696 [2024-07-25 10:12:31.801800] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:46.696 [2024-07-25 10:12:31.801815] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:46.696 [2024-07-25 10:12:31.801821] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:46.696 [2024-07-25 10:12:31.801827] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1ccae40) on tqpair=0x1c6a540 00:23:46.696 [2024-07-25 10:12:31.801868] nvme_ctrlr.c:4361:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:23:46.696 [2024-07-25 10:12:31.801886] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cca3c0) on tqpair=0x1c6a540 00:23:46.696 [2024-07-25 10:12:31.801896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.696 [2024-07-25 10:12:31.801904] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cca540) on tqpair=0x1c6a540 00:23:46.696 [2024-07-25 10:12:31.801911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:23:46.696 [2024-07-25 10:12:31.801918] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cca6c0) on tqpair=0x1c6a540 00:23:46.696 [2024-07-25 10:12:31.801925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.696 [2024-07-25 10:12:31.801933] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cca840) on tqpair=0x1c6a540 00:23:46.696 [2024-07-25 10:12:31.801940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:46.696 [2024-07-25 10:12:31.801952] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:46.697 [2024-07-25 10:12:31.801959] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:46.697 [2024-07-25 10:12:31.801965] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c6a540) 00:23:46.697 [2024-07-25 10:12:31.801978] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.697 [2024-07-25 10:12:31.802001] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cca840, cid 3, qid 0 00:23:46.697 [2024-07-25 10:12:31.802138] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:46.697 [2024-07-25 10:12:31.802153] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:46.697 [2024-07-25 10:12:31.802159] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:46.697 [2024-07-25 10:12:31.802165] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cca840) on tqpair=0x1c6a540 00:23:46.697 [2024-07-25 10:12:31.802176] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:46.697 [2024-07-25 10:12:31.802183] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:46.697 [2024-07-25 10:12:31.802189] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c6a540) 00:23:46.697 [2024-07-25 10:12:31.802198] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.697 [2024-07-25 10:12:31.802223] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cca840, cid 3, qid 0 00:23:46.697 [2024-07-25 10:12:31.802354] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:46.697 [2024-07-25 10:12:31.802365] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:46.697 [2024-07-25 10:12:31.802371] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:46.697 [2024-07-25 10:12:31.802377] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cca840) on tqpair=0x1c6a540 00:23:46.697 [2024-07-25 10:12:31.802384] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:23:46.697 [2024-07-25 10:12:31.802392] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:23:46.697 [2024-07-25 10:12:31.802406] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:46.697 [2024-07-25 10:12:31.802438] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:46.697 [2024-07-25 10:12:31.802446] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c6a540) 00:23:46.697 [2024-07-25 
10:12:31.802456] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.697 [2024-07-25 10:12:31.802478] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cca840, cid 3, qid 0 00:23:46.697 [2024-07-25 10:12:31.802588] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:46.697 [2024-07-25 10:12:31.802603] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:46.697 [2024-07-25 10:12:31.802610] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:46.697 [2024-07-25 10:12:31.802616] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cca840) on tqpair=0x1c6a540 00:23:46.697 [2024-07-25 10:12:31.802633] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:46.697 [2024-07-25 10:12:31.802641] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:46.697 [2024-07-25 10:12:31.802647] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c6a540) 00:23:46.697 [2024-07-25 10:12:31.802658] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.697 [2024-07-25 10:12:31.802684] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cca840, cid 3, qid 0 00:23:46.697 [2024-07-25 10:12:31.802834] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:46.697 [2024-07-25 10:12:31.802846] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:46.697 [2024-07-25 10:12:31.802852] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:46.697 [2024-07-25 10:12:31.802858] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cca840) on tqpair=0x1c6a540 00:23:46.697 [2024-07-25 10:12:31.802873] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:46.697 [2024-07-25 10:12:31.802881] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:46.697 [2024-07-25 10:12:31.802891] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c6a540) 00:23:46.697 [2024-07-25 10:12:31.802901] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.697 [2024-07-25 10:12:31.802921] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cca840, cid 3, qid 0 00:23:46.697 [2024-07-25 10:12:31.803021] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:46.697 [2024-07-25 10:12:31.803035] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:46.697 [2024-07-25 10:12:31.803042] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:46.697 [2024-07-25 10:12:31.803048] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cca840) on tqpair=0x1c6a540 00:23:46.697 [2024-07-25 10:12:31.803063] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:46.697 [2024-07-25 10:12:31.803072] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:46.697 [2024-07-25 10:12:31.803078] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c6a540) 00:23:46.697 [2024-07-25 10:12:31.803088] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.697 [2024-07-25 10:12:31.803107] nvme_tcp.c: 
941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cca840, cid 3, qid 0 00:23:46.697 [2024-07-25 10:12:31.803220] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:46.697 [2024-07-25 10:12:31.803232] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:46.697 [2024-07-25 10:12:31.803238] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:46.697 [2024-07-25 10:12:31.803244] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cca840) on tqpair=0x1c6a540 00:23:46.697 [2024-07-25 10:12:31.803259] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:46.697 [2024-07-25 10:12:31.803267] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:46.697 [2024-07-25 10:12:31.803273] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c6a540) 00:23:46.697 [2024-07-25 10:12:31.803283] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.697 [2024-07-25 10:12:31.803302] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cca840, cid 3, qid 0 00:23:46.697 [2024-07-25 10:12:31.803423] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:46.697 [2024-07-25 10:12:31.803447] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:46.697 [2024-07-25 10:12:31.803455] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:46.697 [2024-07-25 10:12:31.803461] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cca840) on tqpair=0x1c6a540 00:23:46.697 [2024-07-25 10:12:31.803478] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:46.697 [2024-07-25 10:12:31.803487] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:46.697 [2024-07-25 10:12:31.803493] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c6a540) 00:23:46.697 [2024-07-25 10:12:31.803503] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.697 [2024-07-25 10:12:31.803525] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cca840, cid 3, qid 0 00:23:46.697 [2024-07-25 10:12:31.803648] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:46.697 [2024-07-25 10:12:31.803663] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:46.697 [2024-07-25 10:12:31.803669] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:46.697 [2024-07-25 10:12:31.803676] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cca840) on tqpair=0x1c6a540 00:23:46.697 [2024-07-25 10:12:31.803692] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:46.697 [2024-07-25 10:12:31.803700] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:46.697 [2024-07-25 10:12:31.803707] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c6a540) 00:23:46.697 [2024-07-25 10:12:31.803720] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.697 [2024-07-25 10:12:31.803741] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cca840, cid 3, qid 0 00:23:46.697 [2024-07-25 10:12:31.803877] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:46.697 [2024-07-25 
10:12:31.803889] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:46.697 [2024-07-25 10:12:31.803895] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:46.697 [2024-07-25 10:12:31.803901] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cca840) on tqpair=0x1c6a540 00:23:46.697 [2024-07-25 10:12:31.803917] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:46.697 [2024-07-25 10:12:31.803939] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:46.697 [2024-07-25 10:12:31.803946] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c6a540) 00:23:46.697 [2024-07-25 10:12:31.803956] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.697 [2024-07-25 10:12:31.803977] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cca840, cid 3, qid 0 00:23:46.697 [2024-07-25 10:12:31.804082] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:46.697 [2024-07-25 10:12:31.804097] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:46.697 [2024-07-25 10:12:31.804104] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:46.697 [2024-07-25 10:12:31.804110] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cca840) on tqpair=0x1c6a540 00:23:46.697 [2024-07-25 10:12:31.804126] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:46.697 [2024-07-25 10:12:31.804135] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:46.697 [2024-07-25 10:12:31.804141] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c6a540) 00:23:46.697 [2024-07-25 10:12:31.804151] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.697 [2024-07-25 10:12:31.804171] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cca840, cid 3, qid 0 00:23:46.697 [2024-07-25 10:12:31.804299] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:46.697 [2024-07-25 10:12:31.804311] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:46.697 [2024-07-25 10:12:31.804317] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:46.697 [2024-07-25 10:12:31.804323] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cca840) on tqpair=0x1c6a540 00:23:46.697 [2024-07-25 10:12:31.804338] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:46.697 [2024-07-25 10:12:31.804346] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:46.697 [2024-07-25 10:12:31.804352] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c6a540) 00:23:46.698 [2024-07-25 10:12:31.804362] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.698 [2024-07-25 10:12:31.804382] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cca840, cid 3, qid 0 00:23:46.698 [2024-07-25 10:12:31.804525] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:46.698 [2024-07-25 10:12:31.804538] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:46.698 [2024-07-25 10:12:31.804545] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:46.698 
[2024-07-25 10:12:31.804551] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cca840) on tqpair=0x1c6a540 00:23:46.698 [2024-07-25 10:12:31.804567] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:46.698 [2024-07-25 10:12:31.804575] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:46.698 [2024-07-25 10:12:31.804581] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c6a540) 00:23:46.698 [2024-07-25 10:12:31.804591] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.698 [2024-07-25 10:12:31.804616] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cca840, cid 3, qid 0 00:23:46.698 [2024-07-25 10:12:31.804727] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:46.698 [2024-07-25 10:12:31.804739] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:46.698 [2024-07-25 10:12:31.804745] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:46.698 [2024-07-25 10:12:31.804751] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cca840) on tqpair=0x1c6a540 00:23:46.698 [2024-07-25 10:12:31.804766] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:46.698 [2024-07-25 10:12:31.804774] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:46.698 [2024-07-25 10:12:31.804780] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c6a540) 00:23:46.698 [2024-07-25 10:12:31.804790] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.698 [2024-07-25 10:12:31.804810] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cca840, cid 3, qid 0 00:23:46.698 [2024-07-25 10:12:31.804914] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:46.698 [2024-07-25 10:12:31.804928] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:46.698 [2024-07-25 10:12:31.804934] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:46.698 [2024-07-25 10:12:31.804940] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cca840) on tqpair=0x1c6a540 00:23:46.698 [2024-07-25 10:12:31.804956] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:46.698 [2024-07-25 10:12:31.804964] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:46.698 [2024-07-25 10:12:31.804970] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c6a540) 00:23:46.698 [2024-07-25 10:12:31.804980] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.698 [2024-07-25 10:12:31.805000] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cca840, cid 3, qid 0 00:23:46.698 [2024-07-25 10:12:31.805102] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:46.698 [2024-07-25 10:12:31.805116] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:46.698 [2024-07-25 10:12:31.805122] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:46.698 [2024-07-25 10:12:31.805128] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cca840) on tqpair=0x1c6a540 00:23:46.698 [2024-07-25 10:12:31.805144] nvme_tcp.c: 
790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:46.698 [2024-07-25 10:12:31.805152] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:46.698 [2024-07-25 10:12:31.805158] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c6a540) 00:23:46.698 [2024-07-25 10:12:31.805168] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.698 [2024-07-25 10:12:31.805188] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cca840, cid 3, qid 0 00:23:46.698 [2024-07-25 10:12:31.805299] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:46.698 [2024-07-25 10:12:31.805311] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:46.698 [2024-07-25 10:12:31.805317] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:46.698 [2024-07-25 10:12:31.805323] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cca840) on tqpair=0x1c6a540 00:23:46.698 [2024-07-25 10:12:31.805338] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:46.698 [2024-07-25 10:12:31.805346] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:46.698 [2024-07-25 10:12:31.805352] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c6a540) 00:23:46.698 [2024-07-25 10:12:31.805362] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.698 [2024-07-25 10:12:31.805385] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cca840, cid 3, qid 0 00:23:46.698 [2024-07-25 10:12:31.809462] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:46.698 [2024-07-25 10:12:31.809479] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:46.698 [2024-07-25 10:12:31.809486] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:46.698 [2024-07-25 10:12:31.809492] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cca840) on tqpair=0x1c6a540 00:23:46.698 [2024-07-25 10:12:31.809510] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:23:46.698 [2024-07-25 10:12:31.809519] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:23:46.698 [2024-07-25 10:12:31.809525] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1c6a540) 00:23:46.698 [2024-07-25 10:12:31.809535] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:46.698 [2024-07-25 10:12:31.809558] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1cca840, cid 3, qid 0 00:23:46.698 [2024-07-25 10:12:31.809708] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:23:46.698 [2024-07-25 10:12:31.809723] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:23:46.698 [2024-07-25 10:12:31.809729] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:23:46.698 [2024-07-25 10:12:31.809751] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1cca840) on tqpair=0x1c6a540 00:23:46.698 [2024-07-25 10:12:31.809765] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 7 milliseconds 00:23:46.698 0% 00:23:46.698 Data Units Read: 0 00:23:46.698 Data Units Written: 0 00:23:46.698 
Host Read Commands: 0 00:23:46.698 Host Write Commands: 0 00:23:46.698 Controller Busy Time: 0 minutes 00:23:46.698 Power Cycles: 0 00:23:46.698 Power On Hours: 0 hours 00:23:46.698 Unsafe Shutdowns: 0 00:23:46.698 Unrecoverable Media Errors: 0 00:23:46.698 Lifetime Error Log Entries: 0 00:23:46.698 Warning Temperature Time: 0 minutes 00:23:46.698 Critical Temperature Time: 0 minutes 00:23:46.698 00:23:46.698 Number of Queues 00:23:46.698 ================ 00:23:46.698 Number of I/O Submission Queues: 127 00:23:46.698 Number of I/O Completion Queues: 127 00:23:46.698 00:23:46.698 Active Namespaces 00:23:46.698 ================= 00:23:46.698 Namespace ID:1 00:23:46.698 Error Recovery Timeout: Unlimited 00:23:46.698 Command Set Identifier: NVM (00h) 00:23:46.698 Deallocate: Supported 00:23:46.698 Deallocated/Unwritten Error: Not Supported 00:23:46.698 Deallocated Read Value: Unknown 00:23:46.698 Deallocate in Write Zeroes: Not Supported 00:23:46.698 Deallocated Guard Field: 0xFFFF 00:23:46.698 Flush: Supported 00:23:46.698 Reservation: Supported 00:23:46.698 Namespace Sharing Capabilities: Multiple Controllers 00:23:46.698 Size (in LBAs): 131072 (0GiB) 00:23:46.698 Capacity (in LBAs): 131072 (0GiB) 00:23:46.698 Utilization (in LBAs): 131072 (0GiB) 00:23:46.698 NGUID: ABCDEF0123456789ABCDEF0123456789 00:23:46.698 EUI64: ABCDEF0123456789 00:23:46.698 UUID: ad191506-f798-47ab-977f-5d47afd23598 00:23:46.698 Thin Provisioning: Not Supported 00:23:46.698 Per-NS Atomic Units: Yes 00:23:46.698 Atomic Boundary Size (Normal): 0 00:23:46.698 Atomic Boundary Size (PFail): 0 00:23:46.698 Atomic Boundary Offset: 0 00:23:46.698 Maximum Single Source Range Length: 65535 00:23:46.698 Maximum Copy Length: 65535 00:23:46.698 Maximum Source Range Count: 1 00:23:46.698 NGUID/EUI64 Never Reused: No 00:23:46.698 Namespace Write Protected: No 00:23:46.698 Number of LBA Formats: 1 00:23:46.698 Current LBA Format: LBA Format #00 00:23:46.698 LBA Format #00: Data Size: 512 Metadata Size: 0 00:23:46.699 00:23:46.699 10:12:31 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:23:46.699 10:12:31 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:46.699 10:12:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:46.699 10:12:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:46.699 10:12:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:46.699 10:12:31 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:23:46.699 10:12:31 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:23:46.699 10:12:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:46.699 10:12:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:23:46.699 10:12:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:46.699 10:12:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:23:46.699 10:12:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:46.699 10:12:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:46.699 rmmod nvme_tcp 00:23:46.959 rmmod nvme_fabrics 00:23:46.959 rmmod nvme_keyring 00:23:46.959 10:12:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:46.959 10:12:31 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:23:46.959 10:12:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:23:46.959 10:12:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 498312 ']' 00:23:46.959 10:12:31 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 498312 00:23:46.959 10:12:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@950 -- # '[' -z 498312 ']' 00:23:46.959 10:12:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # kill -0 498312 00:23:46.959 10:12:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@955 -- # uname 00:23:46.959 10:12:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:46.959 10:12:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 498312 00:23:46.959 10:12:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:46.959 10:12:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:23:46.959 10:12:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@968 -- # echo 'killing process with pid 498312' 00:23:46.959 killing process with pid 498312 00:23:46.959 10:12:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@969 -- # kill 498312 00:23:46.959 10:12:31 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@974 -- # wait 498312 00:23:47.249 10:12:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:47.249 10:12:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:47.249 10:12:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:47.249 10:12:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:47.249 10:12:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:47.249 10:12:32 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:47.249 10:12:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:47.249 10:12:32 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:49.152 10:12:34 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:49.152 00:23:49.152 real 0m6.051s 00:23:49.152 user 0m4.855s 00:23:49.152 sys 0m2.320s 00:23:49.152 10:12:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:49.152 10:12:34 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:23:49.152 ************************************ 00:23:49.152 END TEST nvmf_identify 00:23:49.152 ************************************ 00:23:49.152 10:12:34 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:23:49.152 10:12:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:23:49.152 10:12:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:49.152 10:12:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:49.411 ************************************ 00:23:49.411 START TEST nvmf_perf 00:23:49.411 ************************************ 
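The identify test above exits through the suite's standard fini path; stripped of xtrace noise it is roughly the sketch below (the real logic lives in test/nvmf/common.sh and autotest_common.sh — rpc_cmd is the common.sh wrapper around scripts/rpc.py — and nvmf_perf repeats the same pattern at its end):

    rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1   # drop the test subsystem first
    modprobe -v -r nvme-tcp                                    # then unload host-side kernel modules
    modprobe -v -r nvme-fabrics
    kill "$nvmfpid" && wait "$nvmfpid"   # stop the nvmf_tgt pid recorded at startup (498312 here)
    ip -4 addr flush cvl_0_1             # finish by clearing the initiator-side address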
00:23:49.411 10:12:34 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:23:49.411 * Looking for test storage... 00:23:49.411 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:49.411 10:12:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:49.411 10:12:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:23:49.411 10:12:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:49.411 10:12:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:49.411 10:12:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:49.411 10:12:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:49.411 10:12:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:49.411 10:12:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:49.411 10:12:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:49.411 10:12:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:49.411 10:12:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:49.411 10:12:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:49.411 10:12:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:23:49.411 10:12:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:23:49.411 10:12:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:49.411 10:12:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:49.411 10:12:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:49.411 10:12:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:49.411 10:12:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:49.411 10:12:34 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:49.411 10:12:34 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:49.411 10:12:34 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:49.411 10:12:34 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:49.411 10:12:34 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:49.411 10:12:34 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:49.411 10:12:34 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:23:49.411 10:12:34 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:49.411 10:12:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:23:49.411 10:12:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:49.411 10:12:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:49.411 10:12:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:49.411 10:12:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:49.411 10:12:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:49.411 10:12:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:49.412 10:12:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:49.412 10:12:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:49.412 10:12:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:23:49.412 10:12:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:23:49.412 10:12:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:49.412 10:12:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:23:49.412 10:12:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:49.412 10:12:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 
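Behind the xtrace above, the perf test's preamble reduces to a handful of lines; a sketch with the values this run used (the exact code is test/nvmf/host/perf.sh plus the shared test/nvmf/common.sh):

    MALLOC_BDEV_SIZE=64        # MiB; malloc bdev exported later as a namespace
    MALLOC_BLOCK_SIZE=512      # bytes per block
    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    nvmftestinit                            # detect NICs and build the target netns (traced below)
    trap nvmftestfini SIGINT SIGTERM EXIT   # guarantee teardown even if a step fails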
00:23:49.412 10:12:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:49.412 10:12:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:49.412 10:12:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:49.412 10:12:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:49.412 10:12:34 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:49.412 10:12:34 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:49.412 10:12:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:49.412 10:12:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:49.412 10:12:34 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@285 -- # xtrace_disable 00:23:49.412 10:12:34 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:23:51.943 10:12:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:51.943 10:12:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # pci_devs=() 00:23:51.943 10:12:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:51.943 10:12:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:51.943 10:12:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:51.943 10:12:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:51.943 10:12:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:51.943 10:12:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@295 -- # net_devs=() 00:23:51.943 10:12:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:51.943 10:12:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@296 -- # e810=() 00:23:51.943 10:12:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@296 -- # local -ga e810 00:23:51.943 10:12:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # x722=() 00:23:51.943 10:12:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # local -ga x722 00:23:51.943 10:12:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # mlx=() 00:23:51.943 10:12:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # local -ga mlx 00:23:51.943 10:12:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:51.943 10:12:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:51.943 10:12:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:51.943 10:12:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:51.943 10:12:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:51.943 10:12:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:51.943 10:12:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:51.943 10:12:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:51.943 10:12:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:51.943 
10:12:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:51.943 10:12:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:51.943 10:12:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:51.943 10:12:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:51.943 10:12:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:51.943 10:12:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:51.943 10:12:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:51.943 10:12:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:51.943 10:12:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:51.943 10:12:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:23:51.943 Found 0000:84:00.0 (0x8086 - 0x159b) 00:23:51.943 10:12:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:51.943 10:12:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:51.943 10:12:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:51.943 10:12:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:51.943 10:12:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:51.943 10:12:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:51.943 10:12:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:23:51.943 Found 0000:84:00.1 (0x8086 - 0x159b) 00:23:51.943 10:12:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:51.943 10:12:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:51.943 10:12:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:51.943 10:12:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:51.943 10:12:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:51.943 10:12:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:51.943 10:12:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:51.943 10:12:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:51.943 10:12:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:51.943 10:12:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:51.943 10:12:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:51.943 10:12:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:51.943 10:12:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:51.943 10:12:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:51.943 10:12:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:51.943 10:12:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 
00:23:51.943 Found net devices under 0000:84:00.0: cvl_0_0 00:23:51.943 10:12:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:51.943 10:12:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:51.943 10:12:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:51.943 10:12:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:51.943 10:12:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:51.943 10:12:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:51.943 10:12:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:51.944 10:12:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:51.944 10:12:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:23:51.944 Found net devices under 0000:84:00.1: cvl_0_1 00:23:51.944 10:12:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:51.944 10:12:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:51.944 10:12:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # is_hw=yes 00:23:51.944 10:12:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:51.944 10:12:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:51.944 10:12:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:51.944 10:12:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:51.944 10:12:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:51.944 10:12:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:51.944 10:12:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:51.944 10:12:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:51.944 10:12:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:51.944 10:12:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:51.944 10:12:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:51.944 10:12:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:51.944 10:12:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:51.944 10:12:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:51.944 10:12:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:51.944 10:12:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:51.944 10:12:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:51.944 10:12:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:51.944 10:12:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:51.944 10:12:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@260 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:51.944 10:12:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:51.944 10:12:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:51.944 10:12:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:51.944 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:51.944 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.288 ms 00:23:51.944 00:23:51.944 --- 10.0.0.2 ping statistics --- 00:23:51.944 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:51.944 rtt min/avg/max/mdev = 0.288/0.288/0.288/0.000 ms 00:23:51.944 10:12:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:51.944 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:51.944 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.170 ms 00:23:51.944 00:23:51.944 --- 10.0.0.1 ping statistics --- 00:23:51.944 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:51.944 rtt min/avg/max/mdev = 0.170/0.170/0.170/0.000 ms 00:23:51.944 10:12:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:51.944 10:12:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # return 0 00:23:51.944 10:12:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:51.944 10:12:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:51.944 10:12:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:51.944 10:12:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:51.944 10:12:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:51.944 10:12:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:51.944 10:12:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:51.944 10:12:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:23:51.944 10:12:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:51.944 10:12:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:51.944 10:12:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:23:51.944 10:12:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=500411 00:23:51.944 10:12:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:51.944 10:12:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 500411 00:23:51.944 10:12:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@831 -- # '[' -z 500411 ']' 00:23:51.944 10:12:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:51.944 10:12:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:51.944 10:12:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:51.944 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
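The target-side network setup traced above, collected in one place: the first detected port (cvl_0_0) moves into a private namespace and becomes the target at 10.0.0.2, while its sibling (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1. A sketch of the commands this run executed, followed by the target launch and the RPC bring-up that the next stretch of the log traces:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                 # target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                       # initiator address
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
    ping -c 1 10.0.0.2                                        # reachability, root ns -> target ns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1          # and back
    # nvmfappstart then runs the target inside the namespace (the helper backgrounds it):
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
    # Once it answers on /var/tmp/spdk.sock, perf.sh builds the subsystem over RPC
    # (Nvme0n1 comes from gen_nvme.sh | rpc.py load_subsystem_config, traced below):
    $rpc_py nvmf_create_transport -t tcp -o
    $rpc_py bdev_malloc_create 64 512                         # -> Malloc0
    $rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
    $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc_py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420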
00:23:51.944 10:12:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:51.944 10:12:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:23:51.944 [2024-07-25 10:12:36.974630] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:23:51.944 [2024-07-25 10:12:36.974743] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:51.944 EAL: No free 2048 kB hugepages reported on node 1 00:23:51.944 [2024-07-25 10:12:37.057797] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:52.202 [2024-07-25 10:12:37.181380] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:52.202 [2024-07-25 10:12:37.181453] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:52.202 [2024-07-25 10:12:37.181472] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:52.203 [2024-07-25 10:12:37.181485] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:52.203 [2024-07-25 10:12:37.181496] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:52.203 [2024-07-25 10:12:37.181581] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:52.203 [2024-07-25 10:12:37.181640] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:52.203 [2024-07-25 10:12:37.181698] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:52.203 [2024-07-25 10:12:37.181694] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:23:52.203 10:12:37 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:52.203 10:12:37 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # return 0 00:23:52.203 10:12:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:52.203 10:12:37 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:52.203 10:12:37 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:23:52.203 10:12:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:52.203 10:12:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:23:52.203 10:12:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:23:55.481 10:12:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:23:55.481 10:12:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:23:56.046 10:12:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:82:00.0 00:23:56.046 10:12:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:23:56.611 10:12:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:23:56.611 10:12:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 
-- # '[' -n 0000:82:00.0 ']' 00:23:56.611 10:12:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:23:56.611 10:12:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:23:56.611 10:12:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:56.868 [2024-07-25 10:12:41.837395] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:56.868 10:12:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:57.125 10:12:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:23:57.125 10:12:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:57.383 10:12:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:23:57.383 10:12:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:23:58.314 10:12:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:58.314 [2024-07-25 10:12:43.403137] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:58.314 10:12:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:23:58.877 10:12:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:82:00.0 ']' 00:23:58.877 10:12:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:82:00.0' 00:23:58.877 10:12:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:23:58.877 10:12:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:82:00.0' 00:23:59.809 Initializing NVMe Controllers 00:23:59.809 Attached to NVMe Controller at 0000:82:00.0 [8086:0a54] 00:23:59.809 Associating PCIE (0000:82:00.0) NSID 1 with lcore 0 00:23:59.809 Initialization complete. Launching workers. 
00:23:59.809 ======================================================== 00:23:59.809 Latency(us) 00:23:59.809 Device Information : IOPS MiB/s Average min max 00:23:59.809 PCIE (0000:82:00.0) NSID 1 from core 0: 85067.39 332.29 375.47 23.00 4317.58 00:23:59.809 ======================================================== 00:23:59.809 Total : 85067.39 332.29 375.47 23.00 4317.58 00:23:59.809 00:24:00.066 10:12:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:00.066 EAL: No free 2048 kB hugepages reported on node 1 00:24:01.437 Initializing NVMe Controllers 00:24:01.437 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:01.437 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:01.437 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:01.437 Initialization complete. Launching workers. 00:24:01.437 ======================================================== 00:24:01.437 Latency(us) 00:24:01.437 Device Information : IOPS MiB/s Average min max 00:24:01.437 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 101.95 0.40 10152.96 166.67 45438.11 00:24:01.437 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 69.96 0.27 14862.64 7941.38 50880.07 00:24:01.437 ======================================================== 00:24:01.437 Total : 171.91 0.67 12069.69 166.67 50880.07 00:24:01.437 00:24:01.437 10:12:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:01.437 EAL: No free 2048 kB hugepages reported on node 1 00:24:02.370 Initializing NVMe Controllers 00:24:02.370 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:02.370 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:02.370 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:02.370 Initialization complete. Launching workers. 
00:24:02.370 ======================================================== 00:24:02.370 Latency(us) 00:24:02.370 Device Information : IOPS MiB/s Average min max 00:24:02.370 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8495.00 33.18 3776.50 636.30 8148.62 00:24:02.370 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3889.00 15.19 8265.29 5026.79 16429.19 00:24:02.370 ======================================================== 00:24:02.370 Total : 12384.00 48.38 5186.13 636.30 16429.19 00:24:02.370 00:24:02.628 10:12:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:24:02.628 10:12:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:24:02.628 10:12:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:02.628 EAL: No free 2048 kB hugepages reported on node 1 00:24:05.155 Initializing NVMe Controllers 00:24:05.155 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:05.155 Controller IO queue size 128, less than required. 00:24:05.155 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:05.155 Controller IO queue size 128, less than required. 00:24:05.155 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:05.155 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:05.155 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:05.155 Initialization complete. Launching workers. 00:24:05.155 ======================================================== 00:24:05.155 Latency(us) 00:24:05.155 Device Information : IOPS MiB/s Average min max 00:24:05.155 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 996.06 249.01 132975.49 78451.23 190938.58 00:24:05.155 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 609.73 152.43 218176.44 70572.46 325957.03 00:24:05.155 ======================================================== 00:24:05.155 Total : 1605.79 401.45 165326.93 70572.46 325957.03 00:24:05.155 00:24:05.155 10:12:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:24:05.155 EAL: No free 2048 kB hugepages reported on node 1 00:24:05.155 No valid NVMe controllers or AIO or URING devices found 00:24:05.155 Initializing NVMe Controllers 00:24:05.155 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:05.155 Controller IO queue size 128, less than required. 00:24:05.155 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:05.155 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:24:05.156 Controller IO queue size 128, less than required. 00:24:05.156 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:05.156 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. 
Removing this ns from test 00:24:05.156 WARNING: Some requested NVMe devices were skipped 00:24:05.156 10:12:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:24:05.156 EAL: No free 2048 kB hugepages reported on node 1 00:24:07.721 Initializing NVMe Controllers 00:24:07.721 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:07.721 Controller IO queue size 128, less than required. 00:24:07.721 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:07.721 Controller IO queue size 128, less than required. 00:24:07.721 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:07.721 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:07.721 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:24:07.721 Initialization complete. Launching workers. 00:24:07.721 00:24:07.721 ==================== 00:24:07.721 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:24:07.721 TCP transport: 00:24:07.721 polls: 13187 00:24:07.721 idle_polls: 6424 00:24:07.721 sock_completions: 6763 00:24:07.721 nvme_completions: 4957 00:24:07.721 submitted_requests: 7432 00:24:07.721 queued_requests: 1 00:24:07.721 00:24:07.721 ==================== 00:24:07.721 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:24:07.721 TCP transport: 00:24:07.721 polls: 11843 00:24:07.721 idle_polls: 5875 00:24:07.721 sock_completions: 5968 00:24:07.721 nvme_completions: 4945 00:24:07.721 submitted_requests: 7386 00:24:07.721 queued_requests: 1 00:24:07.721 ======================================================== 00:24:07.721 Latency(us) 00:24:07.721 Device Information : IOPS MiB/s Average min max 00:24:07.721 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1238.40 309.60 107249.05 60646.55 168388.46 00:24:07.721 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1235.40 308.85 104928.49 48340.22 164910.74 00:24:07.721 ======================================================== 00:24:07.721 Total : 2473.80 618.45 106090.18 48340.22 168388.46 00:24:07.721 00:24:07.721 10:12:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:24:07.721 10:12:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:07.979 10:12:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:24:07.979 10:12:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:24:07.979 10:12:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:24:07.979 10:12:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:07.979 10:12:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:24:07.979 10:12:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:07.979 10:12:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:24:07.979 10:12:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:07.979 10:12:52 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:07.979 rmmod nvme_tcp 00:24:07.979 rmmod nvme_fabrics 00:24:07.979 rmmod nvme_keyring 00:24:07.979 10:12:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:07.979 10:12:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:24:07.979 10:12:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:24:07.979 10:12:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 500411 ']' 00:24:07.979 10:12:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 500411 00:24:07.979 10:12:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@950 -- # '[' -z 500411 ']' 00:24:07.979 10:12:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # kill -0 500411 00:24:07.979 10:12:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@955 -- # uname 00:24:07.979 10:12:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:07.979 10:12:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 500411 00:24:07.979 10:12:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:07.979 10:12:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:07.979 10:12:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 500411' 00:24:07.979 killing process with pid 500411 00:24:07.979 10:12:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@969 -- # kill 500411 00:24:07.979 10:12:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@974 -- # wait 500411 00:24:09.875 10:12:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:09.875 10:12:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:09.875 10:12:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:09.875 10:12:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:09.875 10:12:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:09.875 10:12:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:09.875 10:12:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:09.875 10:12:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:11.776 10:12:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:11.776 00:24:11.776 real 0m22.452s 00:24:11.776 user 1m10.240s 00:24:11.776 sys 0m5.934s 00:24:11.776 10:12:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:11.776 10:12:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:24:11.776 ************************************ 00:24:11.776 END TEST nvmf_perf 00:24:11.776 ************************************ 00:24:11.776 10:12:56 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:24:11.776 10:12:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:24:11.776 10:12:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:11.776 10:12:56 nvmf_tcp.nvmf_host -- 
common/autotest_common.sh@10 -- # set +x 00:24:11.776 ************************************ 00:24:11.776 START TEST nvmf_fio_host 00:24:11.776 ************************************ 00:24:11.776 10:12:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:24:11.776 * Looking for test storage... 00:24:11.776 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:11.776 10:12:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:11.776 10:12:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:11.776 10:12:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:11.776 10:12:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:11.776 10:12:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:11.776 10:12:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:11.776 10:12:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:11.776 10:12:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:24:11.776 10:12:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:11.776 10:12:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:11.776 10:12:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:24:11.776 10:12:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:11.776 10:12:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:11.776 10:12:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:11.776 10:12:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:11.776 10:12:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:11.776 10:12:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:11.776 10:12:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:11.776 10:12:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:11.776 10:12:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:11.776 10:12:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:11.776 10:12:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:24:11.776 10:12:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:24:11.776 10:12:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:11.776 10:12:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:11.776 10:12:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:11.776 10:12:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:11.776 10:12:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:11.776 10:12:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:11.776 10:12:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:11.776 10:12:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:11.776 10:12:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:11.776 10:12:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:11.777 10:12:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:11.777 10:12:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:24:11.777 10:12:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:11.777 10:12:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:24:11.777 10:12:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:11.777 10:12:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:11.777 10:12:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:11.777 10:12:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:11.777 10:12:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- 
# NVMF_APP+=("${NO_HUGE[@]}") 00:24:11.777 10:12:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:11.777 10:12:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:11.777 10:12:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:11.777 10:12:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:11.777 10:12:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:24:11.777 10:12:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:11.777 10:12:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:11.777 10:12:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:11.777 10:12:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:11.777 10:12:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:11.777 10:12:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:11.777 10:12:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:11.777 10:12:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:11.777 10:12:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:11.777 10:12:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:11.777 10:12:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@285 -- # xtrace_disable 00:24:11.777 10:12:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:14.309 10:12:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:14.309 10:12:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # pci_devs=() 00:24:14.309 10:12:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:14.309 10:12:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:14.309 10:12:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:14.309 10:12:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:14.309 10:12:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:14.309 10:12:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@295 -- # net_devs=() 00:24:14.309 10:12:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:14.309 10:12:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@296 -- # e810=() 00:24:14.309 10:12:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@296 -- # local -ga e810 00:24:14.309 10:12:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # x722=() 00:24:14.309 10:12:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # local -ga x722 00:24:14.309 10:12:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # mlx=() 00:24:14.309 10:12:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # local -ga mlx 00:24:14.309 10:12:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:14.309 10:12:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:14.309 10:12:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:14.309 10:12:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:14.309 10:12:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:14.309 10:12:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:14.309 10:12:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:14.309 10:12:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:14.309 10:12:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:14.309 10:12:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:14.309 10:12:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:14.309 10:12:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:14.309 10:12:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:14.309 10:12:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:14.309 10:12:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:14.309 10:12:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:14.309 10:12:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:14.309 10:12:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:14.309 10:12:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:24:14.309 Found 0000:84:00.0 (0x8086 - 0x159b) 00:24:14.309 10:12:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:14.309 10:12:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:14.309 10:12:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:14.309 10:12:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:14.309 10:12:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:14.309 10:12:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:14.309 10:12:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:24:14.309 Found 0000:84:00.1 (0x8086 - 0x159b) 00:24:14.309 10:12:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:14.310 10:12:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:14.310 10:12:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:14.310 10:12:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:14.310 10:12:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:14.310 10:12:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:14.310 10:12:59 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:14.310 10:12:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:14.310 10:12:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:14.310 10:12:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:14.310 10:12:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:14.310 10:12:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:14.310 10:12:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:14.310 10:12:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:14.310 10:12:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:14.310 10:12:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:24:14.310 Found net devices under 0000:84:00.0: cvl_0_0 00:24:14.310 10:12:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:14.310 10:12:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:14.310 10:12:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:14.310 10:12:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:14.310 10:12:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:14.310 10:12:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:14.310 10:12:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:14.310 10:12:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:14.310 10:12:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:24:14.310 Found net devices under 0000:84:00.1: cvl_0_1 00:24:14.310 10:12:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:14.310 10:12:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:14.310 10:12:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # is_hw=yes 00:24:14.310 10:12:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:14.310 10:12:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:14.310 10:12:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:14.310 10:12:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:14.310 10:12:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:14.310 10:12:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:14.310 10:12:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:14.310 10:12:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:14.310 10:12:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:14.310 10:12:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:14.310 10:12:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:14.310 10:12:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:14.310 10:12:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:14.310 10:12:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:14.310 10:12:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:14.310 10:12:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:14.310 10:12:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:14.310 10:12:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:14.310 10:12:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:14.310 10:12:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:14.310 10:12:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:14.310 10:12:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:14.310 10:12:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:14.310 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:14.310 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.201 ms 00:24:14.310 00:24:14.310 --- 10.0.0.2 ping statistics --- 00:24:14.310 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:14.310 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms 00:24:14.310 10:12:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:14.310 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:14.310 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.089 ms 00:24:14.310 00:24:14.310 --- 10.0.0.1 ping statistics --- 00:24:14.310 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:14.310 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:24:14.310 10:12:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:14.310 10:12:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # return 0 00:24:14.310 10:12:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:14.310 10:12:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:14.310 10:12:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:14.310 10:12:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:14.310 10:12:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:14.310 10:12:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:14.310 10:12:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:14.310 10:12:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:24:14.310 10:12:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:24:14.310 10:12:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:14.310 10:12:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:14.310 10:12:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=504390 00:24:14.310 10:12:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:14.310 10:12:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:14.310 10:12:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 504390 00:24:14.310 10:12:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@831 -- # '[' -z 504390 ']' 00:24:14.310 10:12:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:14.310 10:12:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:14.310 10:12:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:14.310 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:14.310 10:12:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:14.310 10:12:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:14.310 [2024-07-25 10:12:59.355828] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
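At this point nvmf_tcp_init has finished building the test bed: one port of the dual-port E810 NIC (cvl_0_0) was moved into a fresh network namespace, addresses were assigned on both sides, and reachability was verified with a ping in each direction before the target application is started inside the namespace. A condensed sketch of the sequence, reconstructed directly from the trace above (interface names, addresses, and the nvmf_tgt flags are as logged; the relative binary path is an assumption):

    ip netns add cvl_0_0_ns_spdk                       # target-side namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # move one NIC port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side stays on the host
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP port
    ping -c 1 10.0.0.2                                 # host to namespace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # namespace to host
    modprobe nvme-tcp                                  # kernel initiator support
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &

The DPDK EAL output that follows is the target process (pid 504390) initializing on the four cores selected by -m 0xF.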
00:24:14.310 [2024-07-25 10:12:59.356003] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:14.310 EAL: No free 2048 kB hugepages reported on node 1 00:24:14.310 [2024-07-25 10:12:59.463498] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:14.568 [2024-07-25 10:12:59.590860] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:14.568 [2024-07-25 10:12:59.590927] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:14.568 [2024-07-25 10:12:59.590944] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:14.568 [2024-07-25 10:12:59.590958] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:14.569 [2024-07-25 10:12:59.590977] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:14.569 [2024-07-25 10:12:59.591072] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:14.569 [2024-07-25 10:12:59.591128] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:14.569 [2024-07-25 10:12:59.591181] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:24:14.569 [2024-07-25 10:12:59.591184] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:14.569 10:12:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:14.569 10:12:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # return 0 00:24:14.569 10:12:59 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:24:15.134 [2024-07-25 10:13:00.158584] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:15.134 10:13:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:24:15.134 10:13:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:15.134 10:13:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:15.134 10:13:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:24:15.391 Malloc1 00:24:15.391 10:13:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:15.956 10:13:00 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:24:16.214 10:13:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:16.779 [2024-07-25 10:13:01.755353] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:16.779 10:13:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:24:17.344 
10:13:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:24:17.344 10:13:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:17.344 10:13:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:17.344 10:13:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:24:17.344 10:13:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:24:17.344 10:13:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:24:17.344 10:13:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:17.344 10:13:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:24:17.344 10:13:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:24:17.344 10:13:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:24:17.344 10:13:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:17.344 10:13:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:24:17.344 10:13:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:24:17.344 10:13:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:24:17.344 10:13:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:24:17.344 10:13:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:24:17.344 10:13:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:17.344 10:13:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:24:17.344 10:13:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:24:17.344 10:13:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:24:17.344 10:13:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:24:17.344 10:13:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:24:17.344 10:13:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:24:17.344 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:24:17.344 fio-3.35 00:24:17.344 Starting 
1 thread 00:24:17.344 EAL: No free 2048 kB hugepages reported on node 1 00:24:19.871 00:24:19.871 test: (groupid=0, jobs=1): err= 0: pid=504881: Thu Jul 25 10:13:04 2024 00:24:19.871 read: IOPS=9202, BW=35.9MiB/s (37.7MB/s)(72.1MiB/2006msec) 00:24:19.871 slat (usec): min=2, max=165, avg= 3.63, stdev= 1.83 00:24:19.871 clat (usec): min=3289, max=12961, avg=7642.20, stdev=540.28 00:24:19.871 lat (usec): min=3313, max=12965, avg=7645.83, stdev=540.17 00:24:19.871 clat percentiles (usec): 00:24:19.871 | 1.00th=[ 6456], 5.00th=[ 6849], 10.00th=[ 6980], 20.00th=[ 7242], 00:24:19.871 | 30.00th=[ 7373], 40.00th=[ 7504], 50.00th=[ 7635], 60.00th=[ 7767], 00:24:19.871 | 70.00th=[ 7898], 80.00th=[ 8029], 90.00th=[ 8291], 95.00th=[ 8455], 00:24:19.871 | 99.00th=[ 8848], 99.50th=[ 9110], 99.90th=[10814], 99.95th=[11863], 00:24:19.871 | 99.99th=[12911] 00:24:19.871 bw ( KiB/s): min=35808, max=37368, per=99.89%, avg=36770.00, stdev=684.53, samples=4 00:24:19.871 iops : min= 8952, max= 9342, avg=9192.50, stdev=171.13, samples=4 00:24:19.871 write: IOPS=9207, BW=36.0MiB/s (37.7MB/s)(72.1MiB/2006msec); 0 zone resets 00:24:19.871 slat (usec): min=2, max=161, avg= 3.77, stdev= 1.60 00:24:19.871 clat (usec): min=1613, max=11230, avg=6211.13, stdev=470.68 00:24:19.871 lat (usec): min=1638, max=11233, avg=6214.90, stdev=470.60 00:24:19.871 clat percentiles (usec): 00:24:19.871 | 1.00th=[ 5145], 5.00th=[ 5538], 10.00th=[ 5669], 20.00th=[ 5866], 00:24:19.871 | 30.00th=[ 5997], 40.00th=[ 6128], 50.00th=[ 6194], 60.00th=[ 6325], 00:24:19.871 | 70.00th=[ 6456], 80.00th=[ 6587], 90.00th=[ 6783], 95.00th=[ 6915], 00:24:19.871 | 99.00th=[ 7242], 99.50th=[ 7373], 99.90th=[ 9503], 99.95th=[10421], 00:24:19.871 | 99.99th=[11076] 00:24:19.871 bw ( KiB/s): min=36672, max=37072, per=100.00%, avg=36836.00, stdev=191.72, samples=4 00:24:19.871 iops : min= 9168, max= 9268, avg=9209.00, stdev=47.93, samples=4 00:24:19.871 lat (msec) : 2=0.02%, 4=0.12%, 10=99.74%, 20=0.12% 00:24:19.871 cpu : usr=64.69%, sys=30.22%, ctx=128, majf=0, minf=39 00:24:19.871 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:24:19.871 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:19.871 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:19.871 issued rwts: total=18460,18470,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:19.871 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:19.871 00:24:19.871 Run status group 0 (all jobs): 00:24:19.871 READ: bw=35.9MiB/s (37.7MB/s), 35.9MiB/s-35.9MiB/s (37.7MB/s-37.7MB/s), io=72.1MiB (75.6MB), run=2006-2006msec 00:24:19.871 WRITE: bw=36.0MiB/s (37.7MB/s), 36.0MiB/s-36.0MiB/s (37.7MB/s-37.7MB/s), io=72.1MiB (75.7MB), run=2006-2006msec 00:24:19.871 10:13:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:24:19.871 10:13:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:24:19.871 10:13:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:24:19.871 10:13:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 
'libclang_rt.asan') 00:24:19.871 10:13:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:24:19.871 10:13:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:19.871 10:13:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:24:19.871 10:13:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:24:19.871 10:13:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:24:19.871 10:13:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:19.871 10:13:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:24:19.871 10:13:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:24:19.871 10:13:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:24:19.871 10:13:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:24:19.871 10:13:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:24:19.871 10:13:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:24:19.871 10:13:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:24:19.871 10:13:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:24:19.871 10:13:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:24:19.871 10:13:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:24:19.872 10:13:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:24:19.872 10:13:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:24:19.872 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:24:19.872 fio-3.35 00:24:19.872 Starting 1 thread 00:24:19.872 EAL: No free 2048 kB hugepages reported on node 1 00:24:22.401 00:24:22.401 test: (groupid=0, jobs=1): err= 0: pid=505210: Thu Jul 25 10:13:07 2024 00:24:22.401 read: IOPS=6800, BW=106MiB/s (111MB/s)(213MiB/2005msec) 00:24:22.401 slat (usec): min=3, max=179, avg= 5.77, stdev= 3.68 00:24:22.401 clat (usec): min=1512, max=22264, avg=11253.01, stdev=3045.81 00:24:22.401 lat (usec): min=1517, max=22270, avg=11258.78, stdev=3046.39 00:24:22.401 clat percentiles (usec): 00:24:22.401 | 1.00th=[ 5604], 5.00th=[ 6915], 10.00th=[ 7635], 20.00th=[ 8586], 00:24:22.401 | 30.00th=[ 9372], 40.00th=[10159], 50.00th=[11076], 60.00th=[11731], 00:24:22.401 | 70.00th=[12780], 80.00th=[13698], 90.00th=[15401], 95.00th=[16712], 00:24:22.401 | 99.00th=[19530], 99.50th=[20317], 99.90th=[21890], 99.95th=[22152], 00:24:22.401 | 99.99th=[22152] 00:24:22.401 bw ( KiB/s): min=48928, max=61280, per=49.80%, avg=54184.00, stdev=5748.63, samples=4 
00:24:22.401 iops : min= 3058, max= 3830, avg=3386.50, stdev=359.29, samples=4 00:24:22.401 write: IOPS=3819, BW=59.7MiB/s (62.6MB/s)(111MiB/1854msec); 0 zone resets 00:24:22.401 slat (usec): min=34, max=978, avg=49.63, stdev=16.81 00:24:22.401 clat (usec): min=3213, max=28245, avg=13486.53, stdev=2744.66 00:24:22.401 lat (usec): min=3257, max=28287, avg=13536.16, stdev=2748.25 00:24:22.401 clat percentiles (usec): 00:24:22.401 | 1.00th=[ 7635], 5.00th=[ 9634], 10.00th=[10421], 20.00th=[11338], 00:24:22.401 | 30.00th=[11994], 40.00th=[12518], 50.00th=[13173], 60.00th=[13829], 00:24:22.401 | 70.00th=[14615], 80.00th=[15533], 90.00th=[17171], 95.00th=[18482], 00:24:22.401 | 99.00th=[21365], 99.50th=[22152], 99.90th=[23462], 99.95th=[23725], 00:24:22.401 | 99.99th=[28181] 00:24:22.401 bw ( KiB/s): min=50528, max=64768, per=91.97%, avg=56200.00, stdev=6391.59, samples=4 00:24:22.401 iops : min= 3158, max= 4048, avg=3512.50, stdev=399.47, samples=4 00:24:22.401 lat (msec) : 2=0.03%, 4=0.18%, 10=26.64%, 20=72.01%, 50=1.14% 00:24:22.401 cpu : usr=80.60%, sys=16.76%, ctx=21, majf=0, minf=53 00:24:22.401 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:24:22.401 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:22.401 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:22.401 issued rwts: total=13635,7081,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:22.401 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:22.401 00:24:22.401 Run status group 0 (all jobs): 00:24:22.401 READ: bw=106MiB/s (111MB/s), 106MiB/s-106MiB/s (111MB/s-111MB/s), io=213MiB (223MB), run=2005-2005msec 00:24:22.401 WRITE: bw=59.7MiB/s (62.6MB/s), 59.7MiB/s-59.7MiB/s (62.6MB/s-62.6MB/s), io=111MiB (116MB), run=1854-1854msec 00:24:22.401 10:13:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:22.967 10:13:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:24:22.967 10:13:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:24:22.967 10:13:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:24:22.967 10:13:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:24:22.967 10:13:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:22.967 10:13:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:24:22.967 10:13:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:22.967 10:13:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:24:22.967 10:13:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:22.967 10:13:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:22.967 rmmod nvme_tcp 00:24:22.967 rmmod nvme_fabrics 00:24:22.967 rmmod nvme_keyring 00:24:22.967 10:13:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:22.967 10:13:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:24:22.967 10:13:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:24:22.967 10:13:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 504390 ']' 00:24:22.967 10:13:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 
504390 00:24:22.967 10:13:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@950 -- # '[' -z 504390 ']' 00:24:22.967 10:13:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # kill -0 504390 00:24:22.967 10:13:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@955 -- # uname 00:24:22.967 10:13:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:22.967 10:13:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 504390 00:24:22.967 10:13:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:22.967 10:13:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:22.967 10:13:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@968 -- # echo 'killing process with pid 504390' 00:24:22.967 killing process with pid 504390 00:24:22.967 10:13:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@969 -- # kill 504390 00:24:22.967 10:13:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@974 -- # wait 504390 00:24:23.225 10:13:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:23.225 10:13:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:23.225 10:13:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:23.225 10:13:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:23.225 10:13:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:23.225 10:13:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:23.225 10:13:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:23.225 10:13:08 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:25.756 10:13:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:25.756 00:24:25.756 real 0m13.563s 00:24:25.756 user 0m41.597s 00:24:25.756 sys 0m4.291s 00:24:25.756 10:13:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:25.756 10:13:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:24:25.756 ************************************ 00:24:25.756 END TEST nvmf_fio_host 00:24:25.756 ************************************ 00:24:25.756 10:13:10 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:24:25.756 10:13:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:24:25.756 10:13:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:25.756 10:13:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:25.756 ************************************ 00:24:25.756 START TEST nvmf_failover 00:24:25.756 ************************************ 00:24:25.756 10:13:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:24:25.756 * Looking for test storage... 
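The nvmf_fio_host teardown above follows the usual autotest pattern: the EXIT trap is cleared, killprocess verifies the pid still exists and is not a sudo wrapper before sending the kill and reaping it with wait, and _remove_spdk_ns (run with xtrace redirected to fd 15 to keep the log readable) deletes the cvl_0_0_ns_spdk namespace before the leftover initiator address is flushed. An approximate shape of that helper, inferred from the trace rather than quoted from autotest_common.sh:

    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1            # nothing to kill
        kill -0 "$pid" || return 0           # already exited
        # refuse to kill a sudo wrapper; the trace shows comm=reactor_0 here
        [ "$(ps --no-headers -o comm= "$pid")" = sudo ] && return 1
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                          # reap and propagate the exit status
    }

With the first test finished (real 0m13.563s), run_test immediately launches the next one, nvmf_failover, against the same tcp transport.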
00:24:25.756 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:25.757 10:13:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:25.757 10:13:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:24:25.757 10:13:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:25.757 10:13:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:25.757 10:13:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:25.757 10:13:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:25.757 10:13:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:25.757 10:13:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:25.757 10:13:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:25.757 10:13:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:25.757 10:13:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:25.757 10:13:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:25.757 10:13:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:24:25.757 10:13:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:24:25.757 10:13:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:25.757 10:13:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:25.757 10:13:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:25.757 10:13:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:25.757 10:13:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:25.757 10:13:10 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:25.757 10:13:10 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:25.757 10:13:10 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:25.757 10:13:10 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:25.757 10:13:10 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:25.757 10:13:10 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:25.757 10:13:10 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:24:25.757 10:13:10 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:25.757 10:13:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:24:25.757 10:13:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:25.757 10:13:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:25.757 10:13:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:25.757 10:13:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:25.757 10:13:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:25.757 10:13:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:25.757 10:13:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:25.757 10:13:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:25.757 10:13:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:25.757 10:13:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:25.757 10:13:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:25.757 10:13:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:25.757 10:13:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 
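nvmftestinit now repeats for the failover test what it did for the fio host test: with NET_TYPE=phy it scans the PCI bus for NICs whose vendor/device IDs appear in its allow-lists (Intel E810 0x1592/0x159b, X722 0x37d2, and a range of Mellanox ConnectX parts), picks the matching ice-driven ports, and rebuilds the namespace test bed. A simplified sketch of that discovery step; the real gather_supported_nvmf_pci_devs consults a prebuilt pci_bus_cache instead of walking sysfs directly:

    intel=0x8086
    for pci in /sys/bus/pci/devices/*; do
        vendor=$(< "$pci/vendor") device=$(< "$pci/device")
        # match one of the supported IDs; 0x159b is the E810 port seen in this log
        if [[ $vendor == "$intel" && $device == 0x159b ]]; then
            echo "Found ${pci##*/} ($vendor - $device)"
            ls "$pci/net"    # the attached netdev name, e.g. cvl_0_0
        fi
    done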
00:24:25.757 10:13:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:25.757 10:13:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:25.757 10:13:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:25.757 10:13:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:25.757 10:13:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:25.757 10:13:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:25.757 10:13:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:25.757 10:13:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:25.757 10:13:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:25.757 10:13:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:25.757 10:13:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@285 -- # xtrace_disable 00:24:25.757 10:13:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:28.312 10:13:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:28.312 10:13:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # pci_devs=() 00:24:28.312 10:13:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:28.312 10:13:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:28.312 10:13:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:28.312 10:13:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:28.312 10:13:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:28.312 10:13:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@295 -- # net_devs=() 00:24:28.312 10:13:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:28.312 10:13:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@296 -- # e810=() 00:24:28.312 10:13:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@296 -- # local -ga e810 00:24:28.312 10:13:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # x722=() 00:24:28.312 10:13:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # local -ga x722 00:24:28.312 10:13:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # mlx=() 00:24:28.312 10:13:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # local -ga mlx 00:24:28.312 10:13:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:28.312 10:13:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:28.312 10:13:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:28.312 10:13:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:28.312 10:13:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:28.312 10:13:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:28.312 10:13:12 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:28.312 10:13:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:28.312 10:13:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:28.312 10:13:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:28.312 10:13:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:28.312 10:13:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:28.312 10:13:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:28.312 10:13:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:28.312 10:13:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:28.312 10:13:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:28.312 10:13:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:28.312 10:13:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:28.312 10:13:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:24:28.312 Found 0000:84:00.0 (0x8086 - 0x159b) 00:24:28.312 10:13:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:28.312 10:13:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:28.312 10:13:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:28.312 10:13:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:28.312 10:13:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:28.312 10:13:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:28.312 10:13:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:24:28.312 Found 0000:84:00.1 (0x8086 - 0x159b) 00:24:28.312 10:13:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:28.313 10:13:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:28.313 10:13:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:28.313 10:13:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:28.313 10:13:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:28.313 10:13:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:28.313 10:13:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:28.313 10:13:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:28.313 10:13:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:28.313 10:13:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:28.313 10:13:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:28.313 10:13:12 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:28.313 10:13:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:28.313 10:13:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:28.313 10:13:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:28.313 10:13:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:24:28.313 Found net devices under 0000:84:00.0: cvl_0_0 00:24:28.313 10:13:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:28.313 10:13:12 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:28.313 10:13:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:28.313 10:13:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:28.313 10:13:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:28.313 10:13:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:28.313 10:13:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:28.313 10:13:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:28.313 10:13:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:24:28.313 Found net devices under 0000:84:00.1: cvl_0_1 00:24:28.313 10:13:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:28.313 10:13:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:28.313 10:13:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # is_hw=yes 00:24:28.313 10:13:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:28.313 10:13:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:28.313 10:13:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:28.313 10:13:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:28.313 10:13:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:28.313 10:13:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:28.313 10:13:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:28.313 10:13:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:28.313 10:13:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:28.313 10:13:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:28.313 10:13:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:28.313 10:13:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:28.313 10:13:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:28.313 10:13:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:28.313 10:13:13 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:28.313 10:13:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:28.313 10:13:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:28.313 10:13:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:28.313 10:13:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:28.313 10:13:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:28.313 10:13:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:28.313 10:13:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:28.313 10:13:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:28.313 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:28.313 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.137 ms 00:24:28.313 00:24:28.313 --- 10.0.0.2 ping statistics --- 00:24:28.313 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:28.313 rtt min/avg/max/mdev = 0.137/0.137/0.137/0.000 ms 00:24:28.313 10:13:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:28.313 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:28.313 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.125 ms 00:24:28.313 00:24:28.313 --- 10.0.0.1 ping statistics --- 00:24:28.313 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:28.313 rtt min/avg/max/mdev = 0.125/0.125/0.125/0.000 ms 00:24:28.313 10:13:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:28.313 10:13:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # return 0 00:24:28.313 10:13:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:28.313 10:13:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:28.313 10:13:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:28.313 10:13:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:28.313 10:13:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:28.313 10:13:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:28.313 10:13:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:28.313 10:13:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:24:28.313 10:13:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:28.313 10:13:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:28.313 10:13:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:28.313 10:13:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=507550 00:24:28.313 10:13:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:24:28.313 10:13:13 
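Stripped of the xtrace noise, the nvmf_tcp_init sequence just traced reduces to the commands below; a minimal sketch, assuming root privileges and the two ice ports named cvl_0_0 (target side) and cvl_0_1 (initiator side) exactly as in this run:

    ip netns add cvl_0_0_ns_spdk                                        # isolate the target port
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # open the NVMe/TCP port
    ping -c 1 10.0.0.2                                                  # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target -> initiator

Both pings succeeding, as they do above, confirms the two ports are cabled back-to-back and the namespace split works before the target is even started.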
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 507550 00:24:28.313 10:13:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 507550 ']' 00:24:28.313 10:13:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:28.313 10:13:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:28.313 10:13:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:28.313 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:28.313 10:13:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:28.313 10:13:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:28.313 [2024-07-25 10:13:13.251507] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:24:28.313 [2024-07-25 10:13:13.251607] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:28.313 EAL: No free 2048 kB hugepages reported on node 1 00:24:28.313 [2024-07-25 10:13:13.333375] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:24:28.313 [2024-07-25 10:13:13.459137] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:28.313 [2024-07-25 10:13:13.459208] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:28.313 [2024-07-25 10:13:13.459225] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:28.313 [2024-07-25 10:13:13.459238] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:28.313 [2024-07-25 10:13:13.459250] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
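The app_setup_trace notices above describe SPDK's built-in tracepoint capture; following the hint exactly as printed (shm instance id 0, group mask 0xFFFF already enabled by the -e flag on the nvmf_tgt command line) would look like:

    spdk_trace -s nvmf -i 0            # snapshot nvmf tracepoints from the running target
    cp /dev/shm/nvmf_trace.0 /tmp/     # or copy the raw trace file for offline analysis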
00:24:28.313 [2024-07-25 10:13:13.459339] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:28.313 [2024-07-25 10:13:13.459393] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:24:28.313 [2024-07-25 10:13:13.459396] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:28.570 10:13:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:28.570 10:13:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:24:28.570 10:13:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:28.570 10:13:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:28.570 10:13:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:28.570 10:13:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:28.570 10:13:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:24:28.827 [2024-07-25 10:13:13.936256] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:28.827 10:13:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:24:29.391 Malloc0 00:24:29.391 10:13:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:29.648 10:13:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:29.904 10:13:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:30.162 [2024-07-25 10:13:15.308071] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:30.419 10:13:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:30.676 [2024-07-25 10:13:15.604932] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:30.677 10:13:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:24:30.935 [2024-07-25 10:13:15.901916] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:24:30.935 10:13:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=507959 00:24:30.935 10:13:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:30.935 10:13:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 507959 /var/tmp/bdevperf.sock 00:24:30.935 10:13:15 nvmf_tcp.nvmf_host.nvmf_failover 
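Condensed, the target provisioning that the rpc.py calls above perform is the following, with the full /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py path shortened to rpc.py:

    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512 -b Malloc0
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    for port in 4420 4421 4422; do                      # three listeners, so paths can be failed over
        rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s "$port"
    done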
-- common/autotest_common.sh@831 -- # '[' -z 507959 ']' 00:24:30.935 10:13:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:30.935 10:13:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:30.935 10:13:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:24:30.935 10:13:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:30.935 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:30.935 10:13:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:30.935 10:13:15 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:31.192 10:13:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:31.192 10:13:16 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:24:31.192 10:13:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:31.757 NVMe0n1 00:24:31.757 10:13:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:32.014 00:24:32.014 10:13:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=508094 00:24:32.014 10:13:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:24:32.014 10:13:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:33.388 10:13:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:33.388 [2024-07-25 10:13:18.515316] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6e3420 is same with the state(5) to be set 00:24:33.388 [2024-07-25 10:13:18.515424] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6e3420 is same with the state(5) to be set 00:24:33.388 [2024-07-25 10:13:18.515463] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6e3420 is same with the state(5) to be set 00:24:33.388 [2024-07-25 10:13:18.515478] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6e3420 is same with the state(5) to be set 00:24:33.388 [2024-07-25 10:13:18.515501] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6e3420 is same with the state(5) to be set 00:24:33.388 [2024-07-25 10:13:18.515514] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6e3420 is same with the state(5) to be set 00:24:33.388 [2024-07-25 10:13:18.515525] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x6e3420 is same with the state(5) to be set 00:24:33.388 [this *ERROR* line repeats ~125 more times with consecutive timestamps up to 2024-07-25 10:13:18.516868] [2024-07-25 10:13:18.516879]
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6e3420 is same with the state(5) to be set 00:24:33.390 [2024-07-25 10:13:18.516890] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x6e3420 is same with the state(5) to be set 00:24:33.390 10:13:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:24:36.671 10:13:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:36.929 00:24:36.929 10:13:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:37.494 10:13:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:24:40.772 10:13:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:40.772 [2024-07-25 10:13:25.723073] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:40.772 10:13:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:24:41.704 10:13:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:24:41.962 10:13:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 508094 00:24:47.221 0 00:24:47.221 10:13:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 507959 00:24:47.221 10:13:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 507959 ']' 00:24:47.221 10:13:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 507959 00:24:47.221 10:13:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:24:47.221 10:13:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:47.221 10:13:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 507959 00:24:47.221 10:13:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:47.222 10:13:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:47.222 10:13:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 507959' 00:24:47.222 killing process with pid 507959 00:24:47.222 10:13:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 507959 00:24:47.222 10:13:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 507959 00:24:47.490 10:13:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:47.490 [2024-07-25 10:13:15.971354] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
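The failover exercise that produced the try.txt dump below reduces to this sequence (rpc.py shortened as before; bdevperf.py perform_tests runs in the background as pid 508094 while the listeners change under it):

    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &    # 15 s verify workload
    rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420    # drop the active path
    sleep 3
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
    sleep 3
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    sleep 1
    rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
    wait 508094    # the workload survives every transition and exits 0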
00:24:47.490 [2024-07-25 10:13:15.971472] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid507959 ] 00:24:47.490 EAL: No free 2048 kB hugepages reported on node 1 00:24:47.490 [2024-07-25 10:13:16.036681] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:47.490 [2024-07-25 10:13:16.146912] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:47.490 Running I/O for 15 seconds... 00:24:47.490 [2024-07-25 10:13:18.517706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:87536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.490 [2024-07-25 10:13:18.517762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.490 [2024-07-25 10:13:18.517792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:87544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.490 [2024-07-25 10:13:18.517808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.490 [2024-07-25 10:13:18.517825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:87552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.490 [2024-07-25 10:13:18.517838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.490 [2024-07-25 10:13:18.517854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:87560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.490 [2024-07-25 10:13:18.517867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.490 [2024-07-25 10:13:18.517882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:87568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.490 [2024-07-25 10:13:18.517896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.490 [2024-07-25 10:13:18.517910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:87576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.490 [2024-07-25 10:13:18.517924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.490 [2024-07-25 10:13:18.517939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:87584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.490 [2024-07-25 10:13:18.517952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.490 [2024-07-25 10:13:18.517967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:87592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.490 [2024-07-25 10:13:18.517980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.490 [2024-07-25 10:13:18.517995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 
lba:87600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.490 [2024-07-25 10:13:18.518008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.490 [the identical READ/ABORTED pair repeats for every queued lba from 87608 through 88040, then matching WRITE commands (SGL DATA BLOCK OFFSET 0x0 len:0x1000) for lba 88048 through 88160, each also completing ABORTED - SQ DELETION (00/08)] [2024-07-25 10:13:18.520083] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:88168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.492 [2024-07-25 10:13:18.520096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.492 [2024-07-25 10:13:18.520110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:88176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.492 [2024-07-25 10:13:18.520123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.492 [2024-07-25 10:13:18.520137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:88184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.492 [2024-07-25 10:13:18.520149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.492 [2024-07-25 10:13:18.520163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:88192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.492 [2024-07-25 10:13:18.520175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.492 [2024-07-25 10:13:18.520189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:88200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.492 [2024-07-25 10:13:18.520201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.492 [2024-07-25 10:13:18.520215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:88208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.492 [2024-07-25 10:13:18.520227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.492 [2024-07-25 10:13:18.520241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:88216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.492 [2024-07-25 10:13:18.520253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.492 [2024-07-25 10:13:18.520267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:88224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.492 [2024-07-25 10:13:18.520283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.492 [2024-07-25 10:13:18.520298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:88232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.492 [2024-07-25 10:13:18.520310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.492 [2024-07-25 10:13:18.520324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:88240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.492 [2024-07-25 10:13:18.520337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.493 [2024-07-25 10:13:18.520350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:27 nsid:1 lba:88248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.493 [2024-07-25 10:13:18.520363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.493 [2024-07-25 10:13:18.520376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:88256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.493 [2024-07-25 10:13:18.520389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.493 [2024-07-25 10:13:18.520402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:88264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.493 [2024-07-25 10:13:18.520414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.493 [2024-07-25 10:13:18.520435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:88272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.493 [2024-07-25 10:13:18.520465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.493 [2024-07-25 10:13:18.520485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:88280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.493 [2024-07-25 10:13:18.520499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.493 [2024-07-25 10:13:18.520515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:88288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.493 [2024-07-25 10:13:18.520528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.493 [2024-07-25 10:13:18.520542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:88296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.493 [2024-07-25 10:13:18.520554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.493 [2024-07-25 10:13:18.520569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:88304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.493 [2024-07-25 10:13:18.520582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.493 [2024-07-25 10:13:18.520597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:88312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.493 [2024-07-25 10:13:18.520610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.493 [2024-07-25 10:13:18.520624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:88320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.493 [2024-07-25 10:13:18.520637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.493 [2024-07-25 10:13:18.520656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:88328 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:24:47.493 [2024-07-25 10:13:18.520670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.493 [2024-07-25 10:13:18.520684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:88336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.493 [2024-07-25 10:13:18.520697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.493 [2024-07-25 10:13:18.520711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:88344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.493 [2024-07-25 10:13:18.520724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.493 [2024-07-25 10:13:18.520754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:88352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.493 [2024-07-25 10:13:18.520766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.493 [2024-07-25 10:13:18.520780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:88360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.493 [2024-07-25 10:13:18.520793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.493 [2024-07-25 10:13:18.520807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:88368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.493 [2024-07-25 10:13:18.520819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.493 [2024-07-25 10:13:18.520833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:88376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.493 [2024-07-25 10:13:18.520845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.493 [2024-07-25 10:13:18.520859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:88384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.493 [2024-07-25 10:13:18.520871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.493 [2024-07-25 10:13:18.520885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:88392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.493 [2024-07-25 10:13:18.520898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.493 [2024-07-25 10:13:18.520911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:88400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.493 [2024-07-25 10:13:18.520923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.493 [2024-07-25 10:13:18.520943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:88408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.493 [2024-07-25 
10:13:18.520956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.493 [2024-07-25 10:13:18.520970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:88416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.493 [2024-07-25 10:13:18.520983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.493 [2024-07-25 10:13:18.520996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:88424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.493 [2024-07-25 10:13:18.521012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.493 [2024-07-25 10:13:18.521043] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:47.493 [2024-07-25 10:13:18.521059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88432 len:8 PRP1 0x0 PRP2 0x0 00:24:47.493 [2024-07-25 10:13:18.521071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.493 [2024-07-25 10:13:18.521087] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:47.493 [2024-07-25 10:13:18.521098] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:47.493 [2024-07-25 10:13:18.521108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88440 len:8 PRP1 0x0 PRP2 0x0 00:24:47.493 [2024-07-25 10:13:18.521120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.493 [2024-07-25 10:13:18.521132] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:47.493 [2024-07-25 10:13:18.521142] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:47.493 [2024-07-25 10:13:18.521152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88448 len:8 PRP1 0x0 PRP2 0x0 00:24:47.493 [2024-07-25 10:13:18.521164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.493 [2024-07-25 10:13:18.521176] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:47.493 [2024-07-25 10:13:18.521186] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:47.493 [2024-07-25 10:13:18.521196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88456 len:8 PRP1 0x0 PRP2 0x0 00:24:47.493 [2024-07-25 10:13:18.521208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.493 [2024-07-25 10:13:18.521220] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:47.493 [2024-07-25 10:13:18.521230] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:47.493 [2024-07-25 10:13:18.521240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88464 len:8 PRP1 0x0 PRP2 0x0 00:24:47.493 [2024-07-25 10:13:18.521252] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.493 [2024-07-25 10:13:18.521263] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:47.493 [2024-07-25 10:13:18.521273] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:47.493 [2024-07-25 10:13:18.521283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88472 len:8 PRP1 0x0 PRP2 0x0 00:24:47.493 [2024-07-25 10:13:18.521295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.493 [2024-07-25 10:13:18.521307] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:47.493 [2024-07-25 10:13:18.521317] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:47.493 [2024-07-25 10:13:18.521327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88480 len:8 PRP1 0x0 PRP2 0x0 00:24:47.493 [2024-07-25 10:13:18.521347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.493 [2024-07-25 10:13:18.521360] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:47.493 [2024-07-25 10:13:18.521370] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:47.493 [2024-07-25 10:13:18.521381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88488 len:8 PRP1 0x0 PRP2 0x0 00:24:47.493 [2024-07-25 10:13:18.521396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.493 [2024-07-25 10:13:18.521425] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:47.493 [2024-07-25 10:13:18.521443] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:47.493 [2024-07-25 10:13:18.521455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88496 len:8 PRP1 0x0 PRP2 0x0 00:24:47.493 [2024-07-25 10:13:18.521468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.493 [2024-07-25 10:13:18.521480] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:47.493 [2024-07-25 10:13:18.521490] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:47.493 [2024-07-25 10:13:18.521501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88504 len:8 PRP1 0x0 PRP2 0x0 00:24:47.493 [2024-07-25 10:13:18.521513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.494 [2024-07-25 10:13:18.521526] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:47.494 [2024-07-25 10:13:18.521536] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:47.494 [2024-07-25 10:13:18.521547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88512 len:8 PRP1 0x0 PRP2 0x0 00:24:47.494 [2024-07-25 10:13:18.521559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.494 [2024-07-25 10:13:18.521571] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:47.494 [2024-07-25 10:13:18.521581] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:47.494 [2024-07-25 10:13:18.521592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88520 len:8 PRP1 0x0 PRP2 0x0 00:24:47.494 [2024-07-25 10:13:18.521604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.494 [2024-07-25 10:13:18.521616] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:47.494 [2024-07-25 10:13:18.521626] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:47.494 [2024-07-25 10:13:18.521637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88528 len:8 PRP1 0x0 PRP2 0x0 00:24:47.494 [2024-07-25 10:13:18.521649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.494 [2024-07-25 10:13:18.521661] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:47.494 [2024-07-25 10:13:18.521672] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:47.494 [2024-07-25 10:13:18.521682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88536 len:8 PRP1 0x0 PRP2 0x0 00:24:47.494 [2024-07-25 10:13:18.521694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.494 [2024-07-25 10:13:18.521706] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:47.494 [2024-07-25 10:13:18.521716] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:47.494 [2024-07-25 10:13:18.521727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88544 len:8 PRP1 0x0 PRP2 0x0 00:24:47.494 [2024-07-25 10:13:18.521760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.494 [2024-07-25 10:13:18.521773] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:47.494 [2024-07-25 10:13:18.521783] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:47.494 [2024-07-25 10:13:18.521797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:88552 len:8 PRP1 0x0 PRP2 0x0 00:24:47.494 [2024-07-25 10:13:18.521809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.494 [2024-07-25 10:13:18.521865] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1ac6ba0 was disconnected and freed. reset controller. 
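Every queued command above completes with status (00/08), i.e. status code type 0x0 (generic) and status code 0x08 (ABORTED - SQ DELETION), because the TCP qpair was torn down underneath it; dnr:0 indicates the commands are retryable. A minimal sketch of how an application-side completion callback could recognize this status and flag the I/O for resubmission after the reset; the io_ctx struct and callback are hypothetical, while the types and status constants come from SPDK's public spdk/nvme.h headers:

    #include <stdbool.h>
    #include "spdk/nvme.h"

    /* Hypothetical per-I/O context; only the SPDK types and constants are real. */
    struct io_ctx {
            struct spdk_nvme_qpair *qpair;
            bool retry;
    };

    static void
    io_complete_cb(void *arg, const struct spdk_nvme_cpl *cpl)
    {
            struct io_ctx *ctx = arg;

            if (!spdk_nvme_cpl_is_error(cpl)) {
                    return; /* normal completion */
            }

            /* "(00/08)" in the log: SCT 0x0 (generic), SC 0x08 (SQ deletion).
             * dnr:0 means the command may be retried after the controller
             * has been reset or failed over. */
            if (cpl->status.sct == SPDK_NVME_SCT_GENERIC &&
                cpl->status.sc == SPDK_NVME_SC_ABORTED_SQ_DELETION &&
                !cpl->status.dnr) {
                    ctx->retry = true; /* resubmit once the reset completes */
            }
    }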
00:24:47.494 [2024-07-25 10:13:18.521882] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
00:24:47.494 [2024-07-25 10:13:18.521920] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:24:47.494 [2024-07-25 10:13:18.521937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:47.494 [2024-07-25 10:13:18.521951] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:24:47.494 [2024-07-25 10:13:18.521964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:47.494 [2024-07-25 10:13:18.521977] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:24:47.494 [2024-07-25 10:13:18.521989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:47.494 [2024-07-25 10:13:18.522002] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:24:47.494 [2024-07-25 10:13:18.522014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:47.494 [2024-07-25 10:13:18.522033] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:47.494 [2024-07-25 10:13:18.525324] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:47.494 [2024-07-25 10:13:18.525361] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aa0790 (9): Bad file descriptor
00:24:47.494 [2024-07-25 10:13:18.634328] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
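After the disconnect, the bdev_nvme layer fails over from the first listener (10.0.0.2:4420) to the second (10.0.0.2:4421), aborts the outstanding admin-queue async event requests, and resets the controller; the log reports the reset as successful roughly 110 ms later. At the raw driver level the same move can be expressed with SPDK's public transport-ID APIs; this is a sketch under the assumption that the application owns the ctrlr handle and that it is in the failed/disconnected state (the function name is hypothetical, error handling trimmed):

    #include "spdk/nvme.h"

    /* Sketch: point an existing controller at the alternate listener seen in
     * the log (10.0.0.2:4421) and reset it, approximating what
     * bdev_nvme_failover_trid and the subsequent reset do internally. */
    static int
    failover_to_4421(struct spdk_nvme_ctrlr *ctrlr)
    {
            struct spdk_nvme_transport_id trid = {};
            int rc;

            rc = spdk_nvme_transport_id_parse(&trid,
                    "trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4421 "
                    "subnqn:nqn.2016-06.io.spdk:cnode1");
            if (rc != 0) {
                    return rc;
            }

            /* Updating the trid is only valid while the controller is in the
             * failed state, as it is at this point in the log. */
            rc = spdk_nvme_ctrlr_set_trid(ctrlr, &trid);
            if (rc != 0) {
                    return rc;
            }

            return spdk_nvme_ctrlr_reset(ctrlr);
    }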
00:24:47.494 [2024-07-25 10:13:22.347676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:121512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:47.494 [2024-07-25 10:13:22.347749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
... [a second wave of queued WRITE (lba:121520 through lba:122024) and READ (lba:121016 through lba:121368) commands, each printed with an identical ABORTED - SQ DELETION (00/08) completion, elided] ...
00:24:47.497 [2024-07-25 10:13:22.351252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:121376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:47.497 [2024-07-25 10:13:22.351267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0
m:0 dnr:0 00:24:47.497 [2024-07-25 10:13:22.351282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:122032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.497 [2024-07-25 10:13:22.351295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.497 [2024-07-25 10:13:22.351310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:121384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.497 [2024-07-25 10:13:22.351323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.497 [2024-07-25 10:13:22.351345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:121392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.497 [2024-07-25 10:13:22.351359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.497 [2024-07-25 10:13:22.351373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:121400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.497 [2024-07-25 10:13:22.351387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.497 [2024-07-25 10:13:22.351401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:121408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.497 [2024-07-25 10:13:22.351415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.497 [2024-07-25 10:13:22.351437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:121416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.497 [2024-07-25 10:13:22.351453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.497 [2024-07-25 10:13:22.351468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:121424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.497 [2024-07-25 10:13:22.351482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.497 [2024-07-25 10:13:22.351496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:121432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.497 [2024-07-25 10:13:22.351514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.497 [2024-07-25 10:13:22.351530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:121440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.497 [2024-07-25 10:13:22.351543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.497 [2024-07-25 10:13:22.351558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:121448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.497 [2024-07-25 10:13:22.351571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.497 [2024-07-25 
10:13:22.351586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:121456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.497 [2024-07-25 10:13:22.351599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.497 [2024-07-25 10:13:22.351613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:121464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.497 [2024-07-25 10:13:22.351626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.497 [2024-07-25 10:13:22.351641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:121472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.497 [2024-07-25 10:13:22.351654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.497 [2024-07-25 10:13:22.351669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:121480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.497 [2024-07-25 10:13:22.351681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.497 [2024-07-25 10:13:22.351696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:121488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.497 [2024-07-25 10:13:22.351709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.497 [2024-07-25 10:13:22.351724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:121496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.497 [2024-07-25 10:13:22.351744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.497 [2024-07-25 10:13:22.351758] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6d80 is same with the state(5) to be set 00:24:47.497 [2024-07-25 10:13:22.351775] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:47.497 [2024-07-25 10:13:22.351786] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:47.497 [2024-07-25 10:13:22.351797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:121504 len:8 PRP1 0x0 PRP2 0x0 00:24:47.497 [2024-07-25 10:13:22.351815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.497 [2024-07-25 10:13:22.351879] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1ac6d80 was disconnected and freed. reset controller. 
00:24:47.497 [2024-07-25 10:13:22.351897] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:24:47.497 [2024-07-25 10:13:22.351933] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:47.497 [2024-07-25 10:13:22.351951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.497 [2024-07-25 10:13:22.351970] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:47.497 [2024-07-25 10:13:22.351984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.497 [2024-07-25 10:13:22.351997] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:47.497 [2024-07-25 10:13:22.352009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.498 [2024-07-25 10:13:22.352023] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:47.498 [2024-07-25 10:13:22.352035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.498 [2024-07-25 10:13:22.352048] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:47.498 [2024-07-25 10:13:22.352111] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aa0790 (9): Bad file descriptor 00:24:47.498 [2024-07-25 10:13:22.355476] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:47.498 [2024-07-25 10:13:22.515694] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
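The sequence above is one complete failover cycle: once the TCP qpair to 10.0.0.2:4421 is torn down, every command still queued on it is completed with ABORTED - SQ DELETION (00/08), the qpair is freed, and bdev_nvme_failover_trid moves the controller to the next configured path (10.0.0.2:4422) before the reset succeeds. A minimal standalone C sketch of that round-robin path rotation follows; the path list, struct, and function names here are illustrative only, not SPDK's internal structures.

#include <stdio.h>

/* Illustrative path list matching the ports seen in this log
 * (4420, 4421, 4422 on 10.0.0.2); not SPDK's internal types. */
struct path { const char *addr; const char *svcid; };

static const struct path g_paths[] = {
	{ "10.0.0.2", "4420" },
	{ "10.0.0.2", "4421" },
	{ "10.0.0.2", "4422" },
};
#define NUM_PATHS (sizeof(g_paths) / sizeof(g_paths[0]))

static size_t g_active = 1; /* start on 4421, as this cycle does */

/* On qpair disconnect, advance to the next configured path. */
static const struct path *failover_next(void)
{
	size_t from = g_active;

	g_active = (g_active + 1) % NUM_PATHS;
	printf("Start failover from %s:%s to %s:%s\n",
	       g_paths[from].addr, g_paths[from].svcid,
	       g_paths[g_active].addr, g_paths[g_active].svcid);
	return &g_paths[g_active];
}

int main(void)
{
	failover_next(); /* 4421 -> 4422, as at 10:13:22 above */
	failover_next(); /* 4422 -> 4420, as at 10:13:27 below */
	return 0;
}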
00:24:47.498 [2024-07-25 10:13:27.028801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:97640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.498 [2024-07-25 10:13:27.028858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.498 [2024-07-25 10:13:27.028905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:97648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.498 [2024-07-25 10:13:27.028921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.498 [2024-07-25 10:13:27.028938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:97656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.498 [2024-07-25 10:13:27.028952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.498 [2024-07-25 10:13:27.028966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:97664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.498 [2024-07-25 10:13:27.028979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.498 [2024-07-25 10:13:27.028994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:96872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.498 [2024-07-25 10:13:27.029007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.498 [2024-07-25 10:13:27.029022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:96880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.498 [2024-07-25 10:13:27.029035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.498 [2024-07-25 10:13:27.029050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:96888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.498 [2024-07-25 10:13:27.029063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.498 [2024-07-25 10:13:27.029078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:96896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.498 [2024-07-25 10:13:27.029091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.498 [2024-07-25 10:13:27.029118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:96904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.498 [2024-07-25 10:13:27.029132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.498 [2024-07-25 10:13:27.029147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:96912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.498 [2024-07-25 10:13:27.029160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.498 [2024-07-25 10:13:27.029175] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:96920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.498 [2024-07-25 10:13:27.029188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.498 [2024-07-25 10:13:27.029203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:96928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.498 [2024-07-25 10:13:27.029216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.498 [2024-07-25 10:13:27.029232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:96936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.498 [2024-07-25 10:13:27.029246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.498 [2024-07-25 10:13:27.029260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:96944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.498 [2024-07-25 10:13:27.029273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.498 [2024-07-25 10:13:27.029287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:96952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.498 [2024-07-25 10:13:27.029300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.498 [2024-07-25 10:13:27.029315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:96960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.498 [2024-07-25 10:13:27.029327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.498 [2024-07-25 10:13:27.029342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:96968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.498 [2024-07-25 10:13:27.029355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.498 [2024-07-25 10:13:27.029369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:96976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.498 [2024-07-25 10:13:27.029382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.498 [2024-07-25 10:13:27.029397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:96984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.498 [2024-07-25 10:13:27.029425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.498 [2024-07-25 10:13:27.029451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:96992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.498 [2024-07-25 10:13:27.029466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.498 [2024-07-25 10:13:27.029481] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:97672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.498 [2024-07-25 10:13:27.029495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.498 [2024-07-25 10:13:27.029514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:97680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.498 [2024-07-25 10:13:27.029528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.498 [2024-07-25 10:13:27.029543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:97688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.498 [2024-07-25 10:13:27.029556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.498 [2024-07-25 10:13:27.029571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:97696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.498 [2024-07-25 10:13:27.029584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.498 [2024-07-25 10:13:27.029599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:97704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.498 [2024-07-25 10:13:27.029613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.498 [2024-07-25 10:13:27.029627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:97712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.498 [2024-07-25 10:13:27.029641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.498 [2024-07-25 10:13:27.029656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:97720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.498 [2024-07-25 10:13:27.029670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.498 [2024-07-25 10:13:27.029685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:97728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.498 [2024-07-25 10:13:27.029699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.498 [2024-07-25 10:13:27.029713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:97736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.498 [2024-07-25 10:13:27.029727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.498 [2024-07-25 10:13:27.029757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:97744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.498 [2024-07-25 10:13:27.029770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.498 [2024-07-25 10:13:27.029785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:50 nsid:1 lba:97752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.498 [2024-07-25 10:13:27.029798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.498 [2024-07-25 10:13:27.029812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:97760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.498 [2024-07-25 10:13:27.029825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.498 [2024-07-25 10:13:27.029839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:97768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.498 [2024-07-25 10:13:27.029852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.498 [2024-07-25 10:13:27.029867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:97776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.498 [2024-07-25 10:13:27.029883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.498 [2024-07-25 10:13:27.029898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:97784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.498 [2024-07-25 10:13:27.029911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.498 [2024-07-25 10:13:27.029926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:97792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.498 [2024-07-25 10:13:27.029939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.498 [2024-07-25 10:13:27.029954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:97800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.498 [2024-07-25 10:13:27.029966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.499 [2024-07-25 10:13:27.029981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:97808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.499 [2024-07-25 10:13:27.029995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.499 [2024-07-25 10:13:27.030009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:97816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.499 [2024-07-25 10:13:27.030022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.499 [2024-07-25 10:13:27.030036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:97824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.499 [2024-07-25 10:13:27.030049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.499 [2024-07-25 10:13:27.030064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:97832 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:24:47.499 [2024-07-25 10:13:27.030076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.499 [2024-07-25 10:13:27.030091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:97840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.499 [2024-07-25 10:13:27.030104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.499 [2024-07-25 10:13:27.030118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:97848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.499 [2024-07-25 10:13:27.030131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.499 [2024-07-25 10:13:27.030146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:97856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.499 [2024-07-25 10:13:27.030159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.499 [2024-07-25 10:13:27.030173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:97864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.499 [2024-07-25 10:13:27.030186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.499 [2024-07-25 10:13:27.030201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:97872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.499 [2024-07-25 10:13:27.030214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.499 [2024-07-25 10:13:27.030231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:97880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.499 [2024-07-25 10:13:27.030245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.499 [2024-07-25 10:13:27.030260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:97000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.499 [2024-07-25 10:13:27.030273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.499 [2024-07-25 10:13:27.030304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:97008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.499 [2024-07-25 10:13:27.030317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.499 [2024-07-25 10:13:27.030332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:97016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.499 [2024-07-25 10:13:27.030345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.499 [2024-07-25 10:13:27.030361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:97024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.499 [2024-07-25 
10:13:27.030374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.499 [2024-07-25 10:13:27.030389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:97032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.499 [2024-07-25 10:13:27.030403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.499 [2024-07-25 10:13:27.030417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:97040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.499 [2024-07-25 10:13:27.030437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.499 [2024-07-25 10:13:27.030454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:97048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.499 [2024-07-25 10:13:27.030468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.499 [2024-07-25 10:13:27.030482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:97056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.499 [2024-07-25 10:13:27.030496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.499 [2024-07-25 10:13:27.030511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:97064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.499 [2024-07-25 10:13:27.030524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.499 [2024-07-25 10:13:27.030540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:97072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.499 [2024-07-25 10:13:27.030554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.499 [2024-07-25 10:13:27.030568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:97080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.499 [2024-07-25 10:13:27.030582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.499 [2024-07-25 10:13:27.030596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:97088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.499 [2024-07-25 10:13:27.030614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.499 [2024-07-25 10:13:27.030630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:97096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.499 [2024-07-25 10:13:27.030644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.499 [2024-07-25 10:13:27.030659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:97104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.499 [2024-07-25 10:13:27.030672] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.499 [2024-07-25 10:13:27.030687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:97112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.499 [2024-07-25 10:13:27.030701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.499 [2024-07-25 10:13:27.030715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:97120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.499 [2024-07-25 10:13:27.030729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.499 [2024-07-25 10:13:27.030743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:97128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.499 [2024-07-25 10:13:27.030757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.499 [2024-07-25 10:13:27.030772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:97136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.499 [2024-07-25 10:13:27.030785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.499 [2024-07-25 10:13:27.030800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:97144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.499 [2024-07-25 10:13:27.030814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.499 [2024-07-25 10:13:27.030828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:97152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.499 [2024-07-25 10:13:27.030842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.499 [2024-07-25 10:13:27.030857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:97160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.499 [2024-07-25 10:13:27.030870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.499 [2024-07-25 10:13:27.030885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:97168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.499 [2024-07-25 10:13:27.030899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.499 [2024-07-25 10:13:27.030914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:97176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.499 [2024-07-25 10:13:27.030927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.499 [2024-07-25 10:13:27.030942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:97184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.500 [2024-07-25 10:13:27.030956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.500 [2024-07-25 10:13:27.030971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:97192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.500 [2024-07-25 10:13:27.030988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.500 [2024-07-25 10:13:27.031004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:97200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.500 [2024-07-25 10:13:27.031017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.500 [2024-07-25 10:13:27.031032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:97208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.500 [2024-07-25 10:13:27.031045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.500 [2024-07-25 10:13:27.031060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:97216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.500 [2024-07-25 10:13:27.031074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.500 [2024-07-25 10:13:27.031090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:97224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.500 [2024-07-25 10:13:27.031103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.500 [2024-07-25 10:13:27.031118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:97232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.500 [2024-07-25 10:13:27.031131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.500 [2024-07-25 10:13:27.031146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:97240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.500 [2024-07-25 10:13:27.031159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.500 [2024-07-25 10:13:27.031174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:97248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.500 [2024-07-25 10:13:27.031187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.500 [2024-07-25 10:13:27.031202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:97256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.500 [2024-07-25 10:13:27.031216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.500 [2024-07-25 10:13:27.031231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:97264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.500 [2024-07-25 10:13:27.031244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.500 [2024-07-25 10:13:27.031259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:97272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.500 [2024-07-25 10:13:27.031272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.500 [2024-07-25 10:13:27.031288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:97280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.500 [2024-07-25 10:13:27.031301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.500 [2024-07-25 10:13:27.031316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:97288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.500 [2024-07-25 10:13:27.031330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.500 [2024-07-25 10:13:27.031348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:97296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.500 [2024-07-25 10:13:27.031362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.500 [2024-07-25 10:13:27.031382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:97304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.500 [2024-07-25 10:13:27.031396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.500 [2024-07-25 10:13:27.031411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:97312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.500 [2024-07-25 10:13:27.031424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.500 [2024-07-25 10:13:27.031449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:97320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.500 [2024-07-25 10:13:27.031464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.500 [2024-07-25 10:13:27.031479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:97328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.500 [2024-07-25 10:13:27.031493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.500 [2024-07-25 10:13:27.031507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:97336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.500 [2024-07-25 10:13:27.031521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.500 [2024-07-25 10:13:27.031536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:97344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.500 [2024-07-25 10:13:27.031549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.500 
[2024-07-25 10:13:27.031564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:97352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.500 [2024-07-25 10:13:27.031577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.500 [2024-07-25 10:13:27.031592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:97360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.500 [2024-07-25 10:13:27.031606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.500 [2024-07-25 10:13:27.031621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:97368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.500 [2024-07-25 10:13:27.031634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.500 [2024-07-25 10:13:27.031649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:97376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.500 [2024-07-25 10:13:27.031662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.500 [2024-07-25 10:13:27.031677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:97384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.500 [2024-07-25 10:13:27.031691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.500 [2024-07-25 10:13:27.031706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:97392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.500 [2024-07-25 10:13:27.031723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.500 [2024-07-25 10:13:27.031739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:97400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.500 [2024-07-25 10:13:27.031752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.500 [2024-07-25 10:13:27.031778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:97408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.500 [2024-07-25 10:13:27.031792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.500 [2024-07-25 10:13:27.031807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:97416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.500 [2024-07-25 10:13:27.031821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.500 [2024-07-25 10:13:27.031836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:97424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.500 [2024-07-25 10:13:27.031849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.500 [2024-07-25 10:13:27.031865] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:97432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.500 [2024-07-25 10:13:27.031878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.500 [2024-07-25 10:13:27.031893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:97440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.500 [2024-07-25 10:13:27.031906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.500 [2024-07-25 10:13:27.031921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:97448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.500 [2024-07-25 10:13:27.031935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.500 [2024-07-25 10:13:27.031950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:97456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.500 [2024-07-25 10:13:27.031963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.500 [2024-07-25 10:13:27.031978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:97464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.500 [2024-07-25 10:13:27.031992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.500 [2024-07-25 10:13:27.032007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:97472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.500 [2024-07-25 10:13:27.032020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.500 [2024-07-25 10:13:27.032035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:97480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.500 [2024-07-25 10:13:27.032048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.500 [2024-07-25 10:13:27.032063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:97488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.500 [2024-07-25 10:13:27.032076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.500 [2024-07-25 10:13:27.032096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:97496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.500 [2024-07-25 10:13:27.032110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.500 [2024-07-25 10:13:27.032125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:97504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.501 [2024-07-25 10:13:27.032139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.501 [2024-07-25 10:13:27.032154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:5 nsid:1 lba:97512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.501 [2024-07-25 10:13:27.032167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.501 [2024-07-25 10:13:27.032182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:97520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.501 [2024-07-25 10:13:27.032196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.501 [2024-07-25 10:13:27.032211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:97528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.501 [2024-07-25 10:13:27.032225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.501 [2024-07-25 10:13:27.032240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:97536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.501 [2024-07-25 10:13:27.032254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.501 [2024-07-25 10:13:27.032269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:97544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.501 [2024-07-25 10:13:27.032282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.501 [2024-07-25 10:13:27.032297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:97552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.501 [2024-07-25 10:13:27.032310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.501 [2024-07-25 10:13:27.032325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:97560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.501 [2024-07-25 10:13:27.032338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.501 [2024-07-25 10:13:27.032353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:97568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.501 [2024-07-25 10:13:27.032367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.501 [2024-07-25 10:13:27.032381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:97888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:47.501 [2024-07-25 10:13:27.032394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.501 [2024-07-25 10:13:27.032409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:97576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.501 [2024-07-25 10:13:27.032422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.501 [2024-07-25 10:13:27.032445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:97584 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.501 [2024-07-25 10:13:27.032465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.501 [2024-07-25 10:13:27.032481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:97592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.501 [2024-07-25 10:13:27.032494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.501 [2024-07-25 10:13:27.032509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:97600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.501 [2024-07-25 10:13:27.032522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.501 [2024-07-25 10:13:27.032537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:97608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.501 [2024-07-25 10:13:27.032550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.501 [2024-07-25 10:13:27.032565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:97616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.501 [2024-07-25 10:13:27.032578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.501 [2024-07-25 10:13:27.032593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:97624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:47.501 [2024-07-25 10:13:27.032606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.501 [2024-07-25 10:13:27.032620] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac6d80 is same with the state(5) to be set 00:24:47.501 [2024-07-25 10:13:27.032636] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:47.501 [2024-07-25 10:13:27.032647] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:47.501 [2024-07-25 10:13:27.032659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:97632 len:8 PRP1 0x0 PRP2 0x0 00:24:47.501 [2024-07-25 10:13:27.032671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.501 [2024-07-25 10:13:27.032732] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1ac6d80 was disconnected and freed. reset controller. 
00:24:47.501 [2024-07-25 10:13:27.032751] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:24:47.501 [2024-07-25 10:13:27.032785] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:47.501 [2024-07-25 10:13:27.032803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.501 [2024-07-25 10:13:27.032818] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:47.501 [2024-07-25 10:13:27.032831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.501 [2024-07-25 10:13:27.032844] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:47.501 [2024-07-25 10:13:27.032857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.501 [2024-07-25 10:13:27.032870] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:47.501 [2024-07-25 10:13:27.032883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:47.501 [2024-07-25 10:13:27.032896] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:47.501 [2024-07-25 10:13:27.036180] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:47.501 [2024-07-25 10:13:27.036220] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1aa0790 (9): Bad file descriptor 00:24:47.501 [2024-07-25 10:13:27.185133] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:24:47.501 
00:24:47.501 Latency(us)
00:24:47.501 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:47.501 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:24:47.501 Verification LBA range: start 0x0 length 0x4000
00:24:47.501 NVMe0n1 : 15.01 8562.21 33.45 1094.28 0.00 13229.46 819.20 14854.83
00:24:47.501 ===================================================================================================================
00:24:47.501 Total : 8562.21 33.45 1094.28 0.00 13229.46 819.20 14854.83
00:24:47.501 Received shutdown signal, test time was about 15.000000 seconds
00:24:47.501 
00:24:47.501 Latency(us)
00:24:47.501 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:47.501 ===================================================================================================================
00:24:47.501 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:24:47.501 10:13:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:24:47.501 10:13:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3
00:24:47.501 10:13:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 ))
00:24:47.501 10:13:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=509811
00:24:47.501 10:13:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
00:24:47.501 10:13:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 509811 /var/tmp/bdevperf.sock
00:24:47.501 10:13:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 509811 ']'
00:24:47.501 10:13:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:24:47.501 10:13:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100
00:24:47.501 10:13:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:24:47.501 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:24:47.501 10:13:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:47.501 10:13:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:47.785 10:13:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:47.785 10:13:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:24:47.785 10:13:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:48.350 [2024-07-25 10:13:33.266623] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:48.350 10:13:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:24:48.606 [2024-07-25 10:13:33.559484] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:24:48.606 10:13:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:49.170 NVMe0n1 00:24:49.170 10:13:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:49.736 00:24:49.736 10:13:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:50.300 00:24:50.300 10:13:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:50.300 10:13:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:24:50.557 10:13:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:51.122 10:13:36 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:24:54.400 10:13:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:54.400 10:13:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:24:54.400 10:13:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=510610 00:24:54.400 10:13:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:54.400 10:13:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 510610 00:24:55.773 0 00:24:55.773 10:13:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:55.773 [2024-07-25 10:13:32.653752] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:24:55.773 [2024-07-25 10:13:32.653853] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid509811 ] 00:24:55.773 EAL: No free 2048 kB hugepages reported on node 1 00:24:55.773 [2024-07-25 10:13:32.719355] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:55.773 [2024-07-25 10:13:32.828304] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:55.773 [2024-07-25 10:13:36.141502] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:24:55.773 [2024-07-25 10:13:36.141597] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:55.773 [2024-07-25 10:13:36.141620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.773 [2024-07-25 10:13:36.141639] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:55.773 [2024-07-25 10:13:36.141653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.773 [2024-07-25 10:13:36.141667] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:55.773 [2024-07-25 10:13:36.141681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.773 [2024-07-25 10:13:36.141695] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:55.773 [2024-07-25 10:13:36.141709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:55.773 [2024-07-25 10:13:36.141739] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:55.773 [2024-07-25 10:13:36.141791] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:55.773 [2024-07-25 10:13:36.141823] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x188b790 (9): Bad file descriptor 00:24:55.773 [2024-07-25 10:13:36.187007] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:24:55.773 Running I/O for 1 seconds... 
00:24:55.773 
00:24:55.773 Latency(us)
00:24:55.773 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:55.773 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:24:55.773 Verification LBA range: start 0x0 length 0x4000
00:24:55.773 NVMe0n1 : 1.01 8838.63 34.53 0.00 0.00 14410.72 1747.63 12718.84
00:24:55.773 ===================================================================================================================
00:24:55.773 Total : 8838.63 34.53 0.00 0.00 14410.72 1747.63 12718.84
00:24:55.773 10:13:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:24:55.773 10:13:40 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0
00:24:56.031 10:13:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:24:56.289 10:13:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:24:56.289 10:13:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0
00:24:56.853 10:13:41 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:24:57.111 10:13:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3
00:25:00.385 10:13:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:25:00.385 10:13:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0
00:25:00.643 10:13:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 509811
00:25:00.643 10:13:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 509811 ']'
00:25:00.643 10:13:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 509811
00:25:00.643 10:13:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname
00:25:00.643 10:13:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:25:00.643 10:13:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 509811
00:25:00.643 10:13:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:25:00.643 10:13:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:25:00.643 10:13:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 509811'
00:25:00.643 killing process with pid 509811
00:25:00.643 10:13:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 509811
00:25:00.643 10:13:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 509811
00:25:00.901 10:13:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync
00:25:00.901 10:13:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- #
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:01.159 10:13:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:25:01.159 10:13:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:25:01.159 10:13:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:25:01.159 10:13:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:01.159 10:13:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:25:01.159 10:13:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:01.159 10:13:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:25:01.159 10:13:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:01.159 10:13:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:01.159 rmmod nvme_tcp 00:25:01.160 rmmod nvme_fabrics 00:25:01.160 rmmod nvme_keyring 00:25:01.160 10:13:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:01.160 10:13:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:25:01.160 10:13:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:25:01.160 10:13:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 507550 ']' 00:25:01.160 10:13:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 507550 00:25:01.160 10:13:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 507550 ']' 00:25:01.160 10:13:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 507550 00:25:01.160 10:13:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:25:01.160 10:13:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:01.160 10:13:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 507550 00:25:01.160 10:13:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:25:01.160 10:13:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:25:01.160 10:13:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 507550' 00:25:01.160 killing process with pid 507550 00:25:01.160 10:13:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 507550 00:25:01.160 10:13:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 507550 00:25:01.418 10:13:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:01.418 10:13:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:01.418 10:13:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:01.418 10:13:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:01.418 10:13:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:01.418 10:13:46 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:01.418 10:13:46 nvmf_tcp.nvmf_host.nvmf_failover -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:01.418 10:13:46 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:03.950 10:13:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:03.950 00:25:03.950 real 0m38.177s 00:25:03.950 user 2m15.375s 00:25:03.950 sys 0m6.994s 00:25:03.950 10:13:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:03.950 10:13:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:03.950 ************************************ 00:25:03.950 END TEST nvmf_failover 00:25:03.950 ************************************ 00:25:03.950 10:13:48 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:25:03.950 10:13:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:25:03.950 10:13:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:03.950 10:13:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:03.950 ************************************ 00:25:03.950 START TEST nvmf_host_discovery 00:25:03.950 ************************************ 00:25:03.950 10:13:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:25:03.950 * Looking for test storage... 00:25:03.950 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:03.950 10:13:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:03.950 10:13:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:25:03.950 10:13:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:03.950 10:13:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:03.950 10:13:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:03.950 10:13:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:03.950 10:13:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:03.950 10:13:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:03.950 10:13:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:03.950 10:13:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:03.950 10:13:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:03.950 10:13:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:03.950 10:13:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:25:03.950 10:13:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:25:03.950 10:13:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:03.950 10:13:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:03.950 10:13:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:03.950 10:13:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:03.950 10:13:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:03.950 10:13:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:03.950 10:13:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:03.950 10:13:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:03.950 10:13:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:03.950 10:13:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:03.950 10:13:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:03.950 10:13:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:25:03.950 10:13:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:03.950 10:13:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:25:03.950 10:13:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:03.950 10:13:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:03.950 10:13:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:03.950 10:13:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:03.950 10:13:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:03.950 10:13:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:03.950 10:13:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:03.950 10:13:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:03.950 10:13:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:25:03.950 10:13:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:25:03.950 10:13:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:25:03.950 10:13:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:25:03.950 10:13:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:25:03.950 10:13:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:25:03.950 10:13:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:25:03.950 10:13:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:03.950 10:13:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:03.950 10:13:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:03.950 10:13:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:03.950 10:13:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:03.950 10:13:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:03.950 10:13:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:03.950 10:13:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:03.950 10:13:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:03.950 10:13:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:03.950 10:13:48 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:25:03.950 10:13:48 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:05.878 10:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:05.878 10:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:25:05.878 10:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:05.878 10:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:05.878 10:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:05.878 10:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:05.878 10:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:05.878 10:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:25:05.878 10:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:05.878 10:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@296 -- # e810=() 00:25:05.878 10:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:25:05.878 10:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # x722=() 00:25:05.878 10:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:25:05.878 10:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # mlx=() 00:25:05.878 10:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:25:05.878 10:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:05.878 10:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:05.878 10:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:05.878 10:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:05.878 10:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:05.878 10:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:05.878 10:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:05.878 10:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:05.878 10:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:05.878 10:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:05.878 10:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:05.878 10:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:05.878 10:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:05.878 10:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:05.878 10:13:51 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:05.878 10:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:05.878 10:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:05.878 10:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:05.878 10:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:25:05.878 Found 0000:84:00.0 (0x8086 - 0x159b) 00:25:05.878 10:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:05.878 10:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:05.878 10:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:05.878 10:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:05.878 10:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:05.878 10:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:05.878 10:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:25:05.878 Found 0000:84:00.1 (0x8086 - 0x159b) 00:25:05.878 10:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:05.878 10:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:05.878 10:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:05.878 10:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:05.879 10:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:05.879 10:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:05.879 10:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:05.879 10:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:05.879 10:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:05.879 10:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:05.879 10:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:05.879 10:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:05.879 10:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:05.879 10:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:05.879 10:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:05.879 10:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:25:05.879 Found net devices under 0000:84:00.0: cvl_0_0 00:25:05.879 10:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:05.879 10:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci 
in "${pci_devs[@]}" 00:25:05.879 10:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:05.879 10:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:05.879 10:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:05.879 10:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:05.879 10:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:05.879 10:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:05.879 10:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:25:05.879 Found net devices under 0000:84:00.1: cvl_0_1 00:25:05.879 10:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:05.879 10:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:05.879 10:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:25:05.879 10:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:05.879 10:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:05.879 10:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:05.879 10:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:05.879 10:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:05.879 10:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:05.879 10:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:05.879 10:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:05.879 10:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:05.879 10:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:05.879 10:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:05.879 10:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:05.879 10:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:06.136 10:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:06.136 10:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:06.136 10:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:06.136 10:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:06.136 10:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:06.136 10:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:06.136 10:13:51 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:25:06.136 10:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:25:06.136 10:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:25:06.136 10:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:25:06.136 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:25:06.136 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.284 ms
00:25:06.136 
00:25:06.136 --- 10.0.0.2 ping statistics ---
00:25:06.136 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:25:06.136 rtt min/avg/max/mdev = 0.284/0.284/0.284/0.000 ms
00:25:06.136 10:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:25:06.136 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:25:06.136 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.182 ms
00:25:06.136 
00:25:06.136 --- 10.0.0.1 ping statistics ---
00:25:06.136 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:25:06.136 rtt min/avg/max/mdev = 0.182/0.182/0.182/0.000 ms
00:25:06.136 10:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:25:06.136 10:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # return 0
00:25:06.136 10:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:25:06.136 10:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:25:06.136 10:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:25:06.136 10:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:25:06.136 10:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:25:06.136 10:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:25:06.136 10:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:25:06.136 10:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2
00:25:06.136 10:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:25:06.136 10:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@724 -- # xtrace_disable
00:25:06.136 10:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:25:06.136 10:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=513354
00:25:06.136 10:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 513354
00:25:06.136 10:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@831 -- # '[' -z 513354 ']'
00:25:06.136 10:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:25:06.136 10:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2
00:25:06.136 10:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # local max_retries=100
00:25:06.136 10:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:06.136 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:06.136 10:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:06.136 10:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:06.136 [2024-07-25 10:13:51.283289] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:25:06.136 [2024-07-25 10:13:51.283398] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:06.394 EAL: No free 2048 kB hugepages reported on node 1 00:25:06.394 [2024-07-25 10:13:51.366015] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:06.394 [2024-07-25 10:13:51.487009] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:06.394 [2024-07-25 10:13:51.487080] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:06.394 [2024-07-25 10:13:51.487097] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:06.394 [2024-07-25 10:13:51.487110] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:06.394 [2024-07-25 10:13:51.487121] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:06.394 [2024-07-25 10:13:51.487153] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:06.652 10:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:06.652 10:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # return 0 00:25:06.652 10:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:06.652 10:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:06.652 10:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:06.652 10:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:06.652 10:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:06.652 10:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:06.652 10:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:06.652 [2024-07-25 10:13:51.639914] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:06.652 10:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:06.652 10:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:25:06.652 10:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:06.652 10:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # 
set +x 00:25:06.652 [2024-07-25 10:13:51.648125] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:25:06.652 10:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:06.652 10:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:25:06.652 10:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:06.652 10:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:06.652 null0 00:25:06.652 10:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:06.652 10:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:25:06.652 10:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:06.652 10:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:06.652 null1 00:25:06.652 10:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:06.652 10:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:25:06.652 10:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:06.652 10:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:06.652 10:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:06.652 10:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=513495 00:25:06.652 10:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 513495 /tmp/host.sock 00:25:06.652 10:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@831 -- # '[' -z 513495 ']' 00:25:06.652 10:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:25:06.652 10:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/tmp/host.sock 00:25:06.652 10:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:06.652 10:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:25:06.652 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:25:06.652 10:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:06.652 10:13:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:06.652 [2024-07-25 10:13:51.736268] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:25:06.652 [2024-07-25 10:13:51.736361] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid513495 ] 00:25:06.652 EAL: No free 2048 kB hugepages reported on node 1 00:25:06.652 [2024-07-25 10:13:51.804118] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:06.910 [2024-07-25 10:13:51.929174] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:06.910 10:13:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:06.910 10:13:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # return 0 00:25:06.910 10:13:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:06.910 10:13:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:25:06.910 10:13:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:06.910 10:13:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:06.910 10:13:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:06.910 10:13:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:25:06.910 10:13:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:06.910 10:13:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:06.910 10:13:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:06.910 10:13:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:25:06.910 10:13:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:25:07.168 10:13:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:07.168 10:13:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:07.168 10:13:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:07.168 10:13:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:07.168 10:13:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:07.168 10:13:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:07.168 10:13:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:07.168 10:13:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:25:07.168 10:13:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:25:07.168 10:13:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:07.168 10:13:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:07.168 10:13:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:07.168 10:13:52 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:07.168 10:13:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:07.168 10:13:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:07.168 10:13:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:07.168 10:13:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:25:07.168 10:13:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:25:07.168 10:13:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:07.168 10:13:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:07.168 10:13:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:07.168 10:13:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:25:07.168 10:13:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:07.168 10:13:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:07.168 10:13:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:07.168 10:13:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:07.168 10:13:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:07.168 10:13:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:07.168 10:13:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:07.168 10:13:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:25:07.168 10:13:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:25:07.168 10:13:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:07.168 10:13:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:07.168 10:13:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:07.168 10:13:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:07.168 10:13:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:07.168 10:13:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:07.168 10:13:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:07.426 10:13:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:25:07.426 10:13:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:25:07.427 10:13:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:07.427 10:13:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:07.427 10:13:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:07.427 10:13:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:25:07.427 10:13:52 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:07.427 10:13:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:07.427 10:13:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:07.427 10:13:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:07.427 10:13:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:07.427 10:13:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:07.427 10:13:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:07.427 10:13:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:25:07.427 10:13:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:25:07.427 10:13:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:07.427 10:13:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:07.427 10:13:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:07.427 10:13:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:07.427 10:13:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:07.427 10:13:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:07.427 10:13:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:07.427 10:13:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:25:07.427 10:13:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:07.427 10:13:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:07.427 10:13:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:07.427 [2024-07-25 10:13:52.486407] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:07.427 10:13:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:07.427 10:13:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:25:07.427 10:13:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:07.427 10:13:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:07.427 10:13:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:07.427 10:13:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:07.427 10:13:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:07.427 10:13:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:07.427 10:13:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:07.427 10:13:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:25:07.427 10:13:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # 
get_bdev_list 00:25:07.427 10:13:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:07.427 10:13:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:07.427 10:13:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:07.427 10:13:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:07.427 10:13:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:07.427 10:13:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:07.427 10:13:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:07.427 10:13:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:25:07.427 10:13:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:25:07.427 10:13:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:25:07.427 10:13:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:07.427 10:13:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:07.427 10:13:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:07.427 10:13:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:07.427 10:13:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:07.427 10:13:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:25:07.427 10:13:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:25:07.427 10:13:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:25:07.427 10:13:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:07.427 10:13:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:07.427 10:13:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:07.685 10:13:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:25:07.685 10:13:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:25:07.685 10:13:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:25:07.685 10:13:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:07.685 10:13:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:25:07.685 10:13:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:07.685 10:13:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:07.685 10:13:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:07.685 10:13:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:07.685 10:13:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:07.685 10:13:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:07.685 10:13:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:07.685 10:13:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:07.685 10:13:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:25:07.685 10:13:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:07.685 10:13:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:07.685 10:13:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:07.685 10:13:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:07.685 10:13:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:07.685 10:13:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:07.685 10:13:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:07.685 10:13:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == \n\v\m\e\0 ]] 00:25:07.685 10:13:52 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # sleep 1 00:25:07.943 [2024-07-25 10:13:53.077799] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:25:07.943 [2024-07-25 10:13:53.077834] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:25:07.943 [2024-07-25 10:13:53.077861] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:08.199 
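With the listener on 4420 and the host NQN now allowed via nvmf_subsystem_add_host, the discovery controller attaches asynchronously, so the test polls for the expected state rather than asserting it immediately. A paraphrase of the waitforcondition helper exercised at autotest_common.sh@914-920 above (the real implementation may differ in detail):

  # Poll an eval'd condition up to 10 times, one second apart.
  waitforcondition() {
      local cond=$1
      local max=10
      while (( max-- )); do
          eval "$cond" && return 0
          sleep 1
      done
      return 1
  }
  waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]'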
[2024-07-25 10:13:53.164130] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:25:08.199 [2024-07-25 10:13:53.350568] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:25:08.199 [2024-07-25 10:13:53.350597] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:25:08.763 10:13:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:08.763 10:13:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:08.763 10:13:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:25:08.763 10:13:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:08.763 10:13:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:08.763 10:13:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:08.763 10:13:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:08.763 10:13:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:08.763 10:13:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:08.763 10:13:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:08.763 10:13:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:08.763 10:13:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:08.763 10:13:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:25:08.763 10:13:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:25:08.763 10:13:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:08.763 10:13:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:08.763 10:13:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:25:08.763 10:13:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:25:08.763 10:13:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:08.763 10:13:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:08.763 10:13:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:08.763 10:13:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:08.763 10:13:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:08.763 10:13:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:08.763 10:13:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:08.763 10:13:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 
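Once the discovery log page names nqn.2016-06.io.spdk:cnode0, the host attaches it as controller nvme0 and surfaces its namespace as bdev nvme0n1, which is what the checks above converge on. The same state can be inspected directly, assuming scripts/rpc.py (expected output in comments):

  ./scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'   # nvme0
  ./scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name'              # nvme0n1
  # get_subsystem_paths: one path so far, on the first listener's port.
  ./scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 \
      | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs                         # 4420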
00:25:08.763 10:13:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:08.763 10:13:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:25:08.763 10:13:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:25:08.763 10:13:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:08.763 10:13:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:08.763 10:13:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:25:08.763 10:13:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:25:08.763 10:13:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:08.763 10:13:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:08.763 10:13:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:08.763 10:13:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:08.763 10:13:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:08.763 10:13:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:08.763 10:13:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:08.763 10:13:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 == \4\4\2\0 ]] 00:25:08.763 10:13:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:08.763 10:13:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:25:08.763 10:13:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:25:08.763 10:13:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:08.763 10:13:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:08.763 10:13:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:08.763 10:13:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:08.763 10:13:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:08.763 10:13:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:25:08.763 10:13:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:25:08.763 10:13:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:25:08.763 10:13:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:08.763 10:13:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:08.763 10:13:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:08.763 10:13:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:25:08.763 10:13:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:25:08.763 10:13:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:25:08.763 10:13:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:08.763 10:13:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:25:08.763 10:13:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:08.763 10:13:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:08.763 10:13:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:08.763 10:13:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:08.763 10:13:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:08.763 10:13:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:08.763 10:13:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:08.763 10:13:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:25:08.763 10:13:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:25:08.763 10:13:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:08.763 10:13:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:08.763 10:13:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:08.763 10:13:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:08.763 10:13:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:08.763 10:13:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:09.021 10:13:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:09.021 10:13:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:09.021 10:13:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:09.021 10:13:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:25:09.021 10:13:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:25:09.021 10:13:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:09.021 10:13:53 
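Each hot-added namespace shows up on the host as a new bdev and as one notification; notify_id advances 0 -> 1 -> 2 across the two namespace adds in this stretch of the test. A paraphrase of the get_notification_count helper whose assignments appear at host/discovery.sh@74-75 above (details assumed from the trace):

  # Count notifications newer than the last seen id, then advance the cursor.
  get_notification_count() {
      notification_count=$(rpc_cmd -s /tmp/host.sock \
          notify_get_notifications -i "$notify_id" | jq '. | length')
      notify_id=$((notify_id + notification_count))
  }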
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:09.021 10:13:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:09.021 10:13:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:09.021 10:13:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:09.021 10:13:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:25:09.021 10:13:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:25:09.021 10:13:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:25:09.021 10:13:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:09.022 10:13:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:09.022 10:13:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:09.022 10:13:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:25:09.022 10:13:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:25:09.022 10:13:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:25:09.022 10:13:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:09.022 10:13:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:25:09.022 10:13:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:09.022 10:13:53 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:09.022 [2024-07-25 10:13:54.002786] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:09.022 [2024-07-25 10:13:54.003459] bdev_nvme.c:6993:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:25:09.022 [2024-07-25 10:13:54.003497] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:09.022 10:13:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:09.022 10:13:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:09.022 10:13:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:09.022 10:13:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:09.022 10:13:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:09.022 10:13:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:09.022 10:13:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:25:09.022 10:13:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s 
/tmp/host.sock bdev_nvme_get_controllers 00:25:09.022 10:13:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:09.022 10:13:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:09.022 10:13:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:09.022 10:13:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:09.022 10:13:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:09.022 10:13:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:09.022 10:13:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:09.022 10:13:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:09.022 10:13:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:09.022 10:13:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:09.022 10:13:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:09.022 10:13:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:09.022 10:13:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:25:09.022 10:13:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:25:09.022 10:13:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:09.022 10:13:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:09.022 10:13:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:09.022 10:13:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:09.022 10:13:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:09.022 10:13:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:09.022 10:13:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:09.022 [2024-07-25 10:13:54.089839] bdev_nvme.c:6935:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:25:09.022 10:13:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:09.022 10:13:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:09.022 10:13:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:25:09.022 10:13:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:25:09.022 10:13:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:09.022 10:13:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:09.022 10:13:54 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:25:09.022 10:13:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:25:09.022 10:13:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:09.022 10:13:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:09.022 10:13:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:09.022 10:13:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:09.022 10:13:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:09.022 10:13:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:09.022 10:13:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:09.022 10:13:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:25:09.022 10:13:54 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # sleep 1 00:25:09.280 [2024-07-25 10:13:54.193542] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:25:09.280 [2024-07-25 10:13:54.193569] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:25:09.280 [2024-07-25 10:13:54.193579] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:25:10.208 10:13:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:10.208 10:13:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:25:10.208 10:13:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:25:10.208 10:13:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:10.208 10:13:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:10.208 10:13:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:10.208 10:13:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:10.208 10:13:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:10.208 10:13:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:10.208 10:13:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:10.208 10:13:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:25:10.208 10:13:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:10.208 10:13:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:25:10.208 10:13:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:25:10.208 10:13:55 
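The 4420-only result above just means the AER-driven log page refresh has not landed yet; the wait loop sleeps and retries. What triggered it was the target publishing a second listener, after which the host adds 4421 as a second path to the existing nvme0 controller (no new controller or bdev appears). Target-side sketch, assuming scripts/rpc.py:

  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
      -t tcp -a 10.0.0.2 -s 4421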
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:10.208 10:13:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:10.208 10:13:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:10.208 10:13:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:10.208 10:13:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:10.208 10:13:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:25:10.208 10:13:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:25:10.208 10:13:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:25:10.208 10:13:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:10.208 10:13:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:10.208 10:13:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:10.208 10:13:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:25:10.208 10:13:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:25:10.208 10:13:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:25:10.208 10:13:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:10.208 10:13:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:10.208 10:13:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:10.208 10:13:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:10.208 [2024-07-25 10:13:55.275012] bdev_nvme.c:6993:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:25:10.208 [2024-07-25 10:13:55.275048] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:10.208 10:13:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:10.208 10:13:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:10.208 10:13:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:25:10.208 10:13:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:10.208 10:13:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:10.208 10:13:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:25:10.208 10:13:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:25:10.208 [2024-07-25 10:13:55.282182] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:10.208 [2024-07-25 10:13:55.282218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.208 [2024-07-25 10:13:55.282237] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:10.208 [2024-07-25 10:13:55.282261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.208 [2024-07-25 10:13:55.282276] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:10.208 [2024-07-25 10:13:55.282292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.208 [2024-07-25 10:13:55.282307] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:10.208 [2024-07-25 10:13:55.282321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:10.208 [2024-07-25 10:13:55.282344] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e23230 is same with the state(5) to be set 00:25:10.208 10:13:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:10.208 10:13:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:10.208 10:13:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:10.208 10:13:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:10.208 10:13:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:10.208 10:13:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:10.208 [2024-07-25 10:13:55.292188] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e23230 (9): Bad file descriptor 00:25:10.208 10:13:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:10.208 [2024-07-25 10:13:55.302233] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:10.208 [2024-07-25 10:13:55.302552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.208 [2024-07-25 10:13:55.302586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e23230 with addr=10.0.0.2, port=4420 00:25:10.208 [2024-07-25 10:13:55.302605] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e23230 is same with the state(5) to be set 00:25:10.208 [2024-07-25 10:13:55.302630] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e23230 (9): Bad file descriptor 00:25:10.208 [2024-07-25 10:13:55.302654] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:10.208 [2024-07-25 10:13:55.302670] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:10.208 [2024-07-25 10:13:55.302687] 
nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:10.208 [2024-07-25 10:13:55.302710] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:10.208 [2024-07-25 10:13:55.312323] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:10.208 [2024-07-25 10:13:55.312570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.208 [2024-07-25 10:13:55.312601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e23230 with addr=10.0.0.2, port=4420 00:25:10.208 [2024-07-25 10:13:55.312619] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e23230 is same with the state(5) to be set 00:25:10.208 [2024-07-25 10:13:55.312644] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e23230 (9): Bad file descriptor 00:25:10.208 [2024-07-25 10:13:55.312666] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:10.208 [2024-07-25 10:13:55.312681] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:10.208 [2024-07-25 10:13:55.312696] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:10.208 [2024-07-25 10:13:55.312717] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:10.208 [2024-07-25 10:13:55.322399] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:10.208 [2024-07-25 10:13:55.322728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.208 [2024-07-25 10:13:55.322760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e23230 with addr=10.0.0.2, port=4420 00:25:10.208 [2024-07-25 10:13:55.322778] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e23230 is same with the state(5) to be set 00:25:10.208 [2024-07-25 10:13:55.322804] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e23230 (9): Bad file descriptor 00:25:10.208 [2024-07-25 10:13:55.322827] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:10.208 [2024-07-25 10:13:55.322842] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:10.208 [2024-07-25 10:13:55.322857] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:10.208 [2024-07-25 10:13:55.322878] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:25:10.208 [2024-07-25 10:13:55.332489] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:10.208 [2024-07-25 10:13:55.332780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.208 [2024-07-25 10:13:55.332812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e23230 with addr=10.0.0.2, port=4420 00:25:10.208 [2024-07-25 10:13:55.332830] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e23230 is same with the state(5) to be set 00:25:10.208 [2024-07-25 10:13:55.332861] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e23230 (9): Bad file descriptor 00:25:10.208 [2024-07-25 10:13:55.332885] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:10.208 [2024-07-25 10:13:55.332900] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:10.208 [2024-07-25 10:13:55.332915] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:10.208 [2024-07-25 10:13:55.332936] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:10.208 [2024-07-25 10:13:55.342585] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:10.208 [2024-07-25 10:13:55.342914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.208 [2024-07-25 10:13:55.342945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e23230 with addr=10.0.0.2, port=4420 00:25:10.208 [2024-07-25 10:13:55.342962] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e23230 is same with the state(5) to be set 00:25:10.208 [2024-07-25 10:13:55.342987] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e23230 (9): Bad file descriptor 00:25:10.208 [2024-07-25 10:13:55.343009] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:10.208 [2024-07-25 10:13:55.343025] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:10.208 [2024-07-25 10:13:55.343039] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:10.209 [2024-07-25 10:13:55.343059] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
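The repeated errno 111 (ECONNREFUSED) reset loops above are the expected fallout of the target dropping its first listener: the host keeps trying to reconnect the 4420 path until the next discovery log page prunes it, leaving only 4421. Target-side sketch of the command that started this, assuming scripts/rpc.py:

  ./scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 \
      -t tcp -a 10.0.0.2 -s 4420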
00:25:10.209 [2024-07-25 10:13:55.352661] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:25:10.209 [2024-07-25 10:13:55.352921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:10.209 [2024-07-25 10:13:55.352951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e23230 with addr=10.0.0.2, port=4420 00:25:10.209 [2024-07-25 10:13:55.352968] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e23230 is same with the state(5) to be set 00:25:10.209 [2024-07-25 10:13:55.352992] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e23230 (9): Bad file descriptor 00:25:10.209 [2024-07-25 10:13:55.353014] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:25:10.209 [2024-07-25 10:13:55.353029] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:25:10.209 [2024-07-25 10:13:55.353043] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:25:10.209 [2024-07-25 10:13:55.353064] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:25:10.209 [2024-07-25 10:13:55.361297] bdev_nvme.c:6798:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:25:10.209 [2024-07-25 10:13:55.361332] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:25:10.209 10:13:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:10.209 10:13:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:10.209 10:13:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:10.209 10:13:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:25:10.209 10:13:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:10.209 10:13:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:10.209 10:13:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:25:10.209 10:13:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:25:10.209 10:13:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:10.209 10:13:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:10.209 10:13:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:10.209 10:13:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:10.209 10:13:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:10.209 10:13:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:10.466 10:13:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:10.466 10:13:55 nvmf_tcp.nvmf_host.nvmf_host_discovery 
-- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:10.466 10:13:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:10.466 10:13:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:25:10.466 10:13:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:25:10.466 10:13:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:10.466 10:13:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:10.466 10:13:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:25:10.466 10:13:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:25:10.466 10:13:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:25:10.466 10:13:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:25:10.466 10:13:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:10.466 10:13:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:10.466 10:13:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:25:10.466 10:13:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:25:10.466 10:13:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:10.466 10:13:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4421 == \4\4\2\1 ]] 00:25:10.466 10:13:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:10.466 10:13:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:25:10.466 10:13:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:25:10.466 10:13:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:10.466 10:13:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:10.466 10:13:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:10.466 10:13:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:10.466 10:13:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:10.466 10:13:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:25:10.466 10:13:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:25:10.466 10:13:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:25:10.466 10:13:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:10.466 10:13:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:10.466 10:13:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:10.466 10:13:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:25:10.466 10:13:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:25:10.466 10:13:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:25:10.466 10:13:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:10.466 10:13:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:25:10.466 10:13:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:10.466 10:13:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:10.466 10:13:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:10.466 10:13:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:25:10.466 10:13:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:25:10.466 10:13:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:10.466 10:13:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:10.466 10:13:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:25:10.466 10:13:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:25:10.466 10:13:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:25:10.466 10:13:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:25:10.466 10:13:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:10.466 10:13:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:10.466 10:13:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:25:10.466 10:13:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:25:10.466 10:13:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:10.723 10:13:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == '' ]] 00:25:10.723 10:13:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:10.723 10:13:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:25:10.723 10:13:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:25:10.723 10:13:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:10.723 10:13:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 
-- # (( max-- )) 00:25:10.723 10:13:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:25:10.723 10:13:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:25:10.723 10:13:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:10.723 10:13:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:10.723 10:13:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:10.723 10:13:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:10.723 10:13:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:10.723 10:13:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:10.723 10:13:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:10.723 10:13:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == '' ]] 00:25:10.723 10:13:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:10.723 10:13:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:25:10.723 10:13:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:25:10.723 10:13:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:25:10.723 10:13:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:25:10.723 10:13:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:25:10.723 10:13:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:25:10.723 10:13:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:25:10.723 10:13:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:25:10.723 10:13:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:25:10.723 10:13:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:25:10.723 10:13:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:10.723 10:13:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:10.723 10:13:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:10.723 10:13:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:25:10.723 10:13:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:25:10.723 10:13:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:25:10.723 10:13:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:25:10.723 10:13:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:10.723 10:13:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:10.723 10:13:55 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:11.669 [2024-07-25 10:13:56.746854] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:25:11.669 [2024-07-25 10:13:56.746882] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:25:11.669 [2024-07-25 10:13:56.746907] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:11.669 [2024-07-25 10:13:56.833179] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:25:11.925 [2024-07-25 10:13:56.902752] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:25:11.926 [2024-07-25 10:13:56.902793] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:25:11.926 10:13:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:11.926 10:13:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:11.926 10:13:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:25:11.926 10:13:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:11.926 10:13:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:25:11.926 10:13:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:11.926 10:13:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:25:11.926 10:13:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:11.926 10:13:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q 
nqn.2021-12.io.spdk:test -w 00:25:11.926 10:13:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:11.926 10:13:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:11.926 request: 00:25:11.926 { 00:25:11.926 "name": "nvme", 00:25:11.926 "trtype": "tcp", 00:25:11.926 "traddr": "10.0.0.2", 00:25:11.926 "adrfam": "ipv4", 00:25:11.926 "trsvcid": "8009", 00:25:11.926 "hostnqn": "nqn.2021-12.io.spdk:test", 00:25:11.926 "wait_for_attach": true, 00:25:11.926 "method": "bdev_nvme_start_discovery", 00:25:11.926 "req_id": 1 00:25:11.926 } 00:25:11.926 Got JSON-RPC error response 00:25:11.926 response: 00:25:11.926 { 00:25:11.926 "code": -17, 00:25:11.926 "message": "File exists" 00:25:11.926 } 00:25:11.926 10:13:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:25:11.926 10:13:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:25:11.926 10:13:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:25:11.926 10:13:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:25:11.926 10:13:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:25:11.926 10:13:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:25:11.926 10:13:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:25:11.926 10:13:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:25:11.926 10:13:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:11.926 10:13:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:25:11.926 10:13:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:11.926 10:13:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:25:11.926 10:13:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:11.926 10:13:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:25:11.926 10:13:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:25:11.926 10:13:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:11.926 10:13:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:11.926 10:13:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:11.926 10:13:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:11.926 10:13:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:11.926 10:13:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:11.926 10:13:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:11.926 10:13:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:11.926 10:13:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q 
nqn.2021-12.io.spdk:test -w 00:25:11.926 10:13:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:25:11.926 10:13:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:11.926 10:13:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:25:11.926 10:13:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:11.926 10:13:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:25:11.926 10:13:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:11.926 10:13:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:25:11.926 10:13:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:11.926 10:13:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:11.926 request: 00:25:11.926 { 00:25:11.926 "name": "nvme_second", 00:25:11.926 "trtype": "tcp", 00:25:11.926 "traddr": "10.0.0.2", 00:25:11.926 "adrfam": "ipv4", 00:25:11.926 "trsvcid": "8009", 00:25:11.926 "hostnqn": "nqn.2021-12.io.spdk:test", 00:25:11.926 "wait_for_attach": true, 00:25:11.926 "method": "bdev_nvme_start_discovery", 00:25:11.926 "req_id": 1 00:25:11.926 } 00:25:11.926 Got JSON-RPC error response 00:25:11.926 response: 00:25:11.926 { 00:25:11.926 "code": -17, 00:25:11.926 "message": "File exists" 00:25:11.926 } 00:25:11.926 10:13:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:25:11.926 10:13:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:25:11.926 10:13:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:25:11.926 10:13:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:25:11.926 10:13:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:25:11.926 10:13:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:25:11.926 10:13:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:25:11.926 10:13:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:11.926 10:13:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:25:11.926 10:13:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:11.926 10:13:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:25:11.926 10:13:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:25:11.926 10:13:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:12.183 10:13:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:25:12.183 10:13:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:25:12.183 10:13:57 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:12.183 10:13:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:25:12.183 10:13:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:12.183 10:13:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:12.183 10:13:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:25:12.183 10:13:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:25:12.183 10:13:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:12.183 10:13:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:25:12.183 10:13:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:25:12.183 10:13:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:25:12.183 10:13:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:25:12.183 10:13:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:25:12.183 10:13:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:12.183 10:13:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:25:12.183 10:13:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:25:12.183 10:13:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:25:12.183 10:13:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:12.183 10:13:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:13.116 [2024-07-25 10:13:58.154388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:13.116 [2024-07-25 10:13:58.154467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e1eeb0 with addr=10.0.0.2, port=8010 00:25:13.116 [2024-07-25 10:13:58.154501] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:25:13.116 [2024-07-25 10:13:58.154517] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:25:13.116 [2024-07-25 10:13:58.154531] bdev_nvme.c:7073:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:25:14.049 [2024-07-25 10:13:59.156983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:14.049 [2024-07-25 10:13:59.157061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e1eeb0 with addr=10.0.0.2, port=8010 00:25:14.049 [2024-07-25 10:13:59.157096] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:25:14.049 [2024-07-25 10:13:59.157112] nvme.c: 830:nvme_probe_internal: 
*ERROR*: NVMe ctrlr scan failed 00:25:14.049 [2024-07-25 10:13:59.157126] bdev_nvme.c:7073:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:25:15.423 [2024-07-25 10:14:00.158949] bdev_nvme.c:7054:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:25:15.423 request: 00:25:15.423 { 00:25:15.423 "name": "nvme_second", 00:25:15.423 "trtype": "tcp", 00:25:15.423 "traddr": "10.0.0.2", 00:25:15.423 "adrfam": "ipv4", 00:25:15.423 "trsvcid": "8010", 00:25:15.423 "hostnqn": "nqn.2021-12.io.spdk:test", 00:25:15.423 "wait_for_attach": false, 00:25:15.423 "attach_timeout_ms": 3000, 00:25:15.423 "method": "bdev_nvme_start_discovery", 00:25:15.423 "req_id": 1 00:25:15.423 } 00:25:15.423 Got JSON-RPC error response 00:25:15.423 response: 00:25:15.423 { 00:25:15.423 "code": -110, 00:25:15.423 "message": "Connection timed out" 00:25:15.423 } 00:25:15.423 10:14:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:25:15.423 10:14:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:25:15.423 10:14:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:25:15.423 10:14:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:25:15.423 10:14:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:25:15.423 10:14:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:25:15.423 10:14:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:25:15.423 10:14:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:25:15.423 10:14:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:15.423 10:14:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:15.423 10:14:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:25:15.423 10:14:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:25:15.423 10:14:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:15.423 10:14:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:25:15.423 10:14:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:25:15.423 10:14:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 513495 00:25:15.423 10:14:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:25:15.423 10:14:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:15.423 10:14:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 00:25:15.423 10:14:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:15.423 10:14:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 00:25:15.423 10:14:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:15.423 10:14:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:15.423 rmmod nvme_tcp 00:25:15.423 rmmod nvme_fabrics 00:25:15.423 rmmod nvme_keyring 00:25:15.423 10:14:00 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:15.423 10:14:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 00:25:15.423 10:14:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 00:25:15.423 10:14:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 513354 ']' 00:25:15.423 10:14:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 513354 00:25:15.423 10:14:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@950 -- # '[' -z 513354 ']' 00:25:15.423 10:14:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # kill -0 513354 00:25:15.423 10:14:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@955 -- # uname 00:25:15.423 10:14:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:15.423 10:14:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 513354 00:25:15.423 10:14:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:25:15.423 10:14:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:25:15.423 10:14:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 513354' 00:25:15.423 killing process with pid 513354 00:25:15.423 10:14:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@969 -- # kill 513354 00:25:15.423 10:14:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@974 -- # wait 513354 00:25:15.682 10:14:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:15.682 10:14:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:15.682 10:14:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:15.682 10:14:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:15.682 10:14:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:15.682 10:14:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:15.682 10:14:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:15.682 10:14:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:17.579 10:14:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:17.579 00:25:17.579 real 0m14.021s 00:25:17.579 user 0m20.530s 00:25:17.579 sys 0m3.207s 00:25:17.579 10:14:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:17.579 10:14:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:25:17.579 ************************************ 00:25:17.579 END TEST nvmf_host_discovery 00:25:17.579 ************************************ 00:25:17.579 10:14:02 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:25:17.579 10:14:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:25:17.579 
10:14:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:17.579 10:14:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.837 ************************************ 00:25:17.837 START TEST nvmf_host_multipath_status 00:25:17.837 ************************************ 00:25:17.837 10:14:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:25:17.837 * Looking for test storage... 00:25:17.837 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:17.837 10:14:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:17.837 10:14:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:25:17.837 10:14:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:17.837 10:14:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:17.837 10:14:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:17.837 10:14:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:17.837 10:14:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:17.837 10:14:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:17.837 10:14:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:17.837 10:14:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:17.837 10:14:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:17.837 10:14:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:17.837 10:14:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:25:17.837 10:14:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:25:17.837 10:14:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:17.837 10:14:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:17.837 10:14:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:17.838 10:14:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:17.838 10:14:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:17.838 10:14:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:17.838 10:14:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:17.838 10:14:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:17.838 
10:14:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:17.838 10:14:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:17.838 10:14:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:17.838 10:14:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:25:17.838 10:14:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:17.838 10:14:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:25:17.838 10:14:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:17.838 10:14:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:17.838 10:14:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:17.838 10:14:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:17.838 10:14:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:17.838 
10:14:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:17.838 10:14:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:17.838 10:14:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:17.838 10:14:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:25:17.838 10:14:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:25:17.838 10:14:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:25:17.838 10:14:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:25:17.838 10:14:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:17.838 10:14:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:25:17.838 10:14:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:25:17.838 10:14:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:17.838 10:14:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:17.838 10:14:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:17.838 10:14:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:17.838 10:14:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:17.838 10:14:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:17.838 10:14:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:17.838 10:14:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:17.838 10:14:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:17.838 10:14:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:17.838 10:14:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@285 -- # xtrace_disable 00:25:17.838 10:14:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:20.398 10:14:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:20.398 10:14:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # pci_devs=() 00:25:20.398 10:14:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:20.398 10:14:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:20.398 10:14:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:20.398 10:14:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:20.398 10:14:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # 
local -A pci_drivers 00:25:20.398 10:14:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # net_devs=() 00:25:20.398 10:14:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:20.398 10:14:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # e810=() 00:25:20.398 10:14:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # local -ga e810 00:25:20.398 10:14:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # x722=() 00:25:20.398 10:14:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # local -ga x722 00:25:20.398 10:14:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # mlx=() 00:25:20.398 10:14:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # local -ga mlx 00:25:20.398 10:14:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:20.398 10:14:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:20.398 10:14:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:20.398 10:14:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:20.398 10:14:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:20.398 10:14:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:20.398 10:14:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:20.398 10:14:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:20.398 10:14:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:20.398 10:14:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:20.398 10:14:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:20.398 10:14:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:20.398 10:14:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:20.398 10:14:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:20.398 10:14:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:20.398 10:14:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:20.398 10:14:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:20.398 10:14:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:20.398 10:14:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:25:20.398 Found 0000:84:00.0 (0x8086 - 0x159b) 00:25:20.398 10:14:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:20.398 10:14:05 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:20.398 10:14:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:20.398 10:14:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:20.398 10:14:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:20.398 10:14:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:20.398 10:14:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:25:20.398 Found 0000:84:00.1 (0x8086 - 0x159b) 00:25:20.398 10:14:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:20.398 10:14:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:20.398 10:14:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:20.398 10:14:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:20.398 10:14:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:20.398 10:14:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:20.398 10:14:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:20.398 10:14:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:20.398 10:14:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:20.398 10:14:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:20.398 10:14:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:20.398 10:14:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:20.398 10:14:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:20.398 10:14:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:20.398 10:14:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:20.398 10:14:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:25:20.398 Found net devices under 0000:84:00.0: cvl_0_0 00:25:20.398 10:14:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:20.398 10:14:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:20.398 10:14:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:20.398 10:14:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:20.398 10:14:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:20.398 10:14:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:20.398 10:14:05 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:20.398 10:14:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:20.398 10:14:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:25:20.398 Found net devices under 0000:84:00.1: cvl_0_1 00:25:20.398 10:14:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:20.398 10:14:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:20.398 10:14:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # is_hw=yes 00:25:20.398 10:14:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:20.398 10:14:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:20.398 10:14:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:20.398 10:14:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:20.398 10:14:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:20.398 10:14:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:20.398 10:14:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:20.398 10:14:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:20.398 10:14:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:20.398 10:14:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:20.398 10:14:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:20.398 10:14:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:20.398 10:14:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:20.398 10:14:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:20.398 10:14:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:20.398 10:14:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:20.398 10:14:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:20.398 10:14:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:20.398 10:14:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:20.398 10:14:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:20.398 10:14:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:20.398 10:14:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@264 -- # iptables 
-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:20.657 10:14:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:20.657 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:20.657 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.240 ms 00:25:20.657 00:25:20.657 --- 10.0.0.2 ping statistics --- 00:25:20.657 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:20.657 rtt min/avg/max/mdev = 0.240/0.240/0.240/0.000 ms 00:25:20.657 10:14:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:20.657 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:20.657 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.195 ms 00:25:20.657 00:25:20.657 --- 10.0.0.1 ping statistics --- 00:25:20.657 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:20.657 rtt min/avg/max/mdev = 0.195/0.195/0.195/0.000 ms 00:25:20.657 10:14:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:20.657 10:14:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # return 0 00:25:20.657 10:14:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:20.657 10:14:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:20.657 10:14:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:20.657 10:14:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:20.657 10:14:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:20.657 10:14:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:20.657 10:14:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:20.657 10:14:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:25:20.657 10:14:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:20.657 10:14:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:20.657 10:14:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:20.657 10:14:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=516668 00:25:20.657 10:14:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:25:20.657 10:14:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 516668 00:25:20.657 10:14:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # '[' -z 516668 ']' 00:25:20.657 10:14:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:20.657 10:14:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:20.657 10:14:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:25:20.657 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:20.657 10:14:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:20.657 10:14:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:20.657 [2024-07-25 10:14:05.660802] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:25:20.657 [2024-07-25 10:14:05.660900] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:20.657 EAL: No free 2048 kB hugepages reported on node 1 00:25:20.657 [2024-07-25 10:14:05.738189] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:25:20.915 [2024-07-25 10:14:05.859814] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:20.915 [2024-07-25 10:14:05.859873] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:20.915 [2024-07-25 10:14:05.859890] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:20.915 [2024-07-25 10:14:05.859904] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:20.915 [2024-07-25 10:14:05.859916] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:20.915 [2024-07-25 10:14:05.859999] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:20.915 [2024-07-25 10:14:05.860006] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:20.915 10:14:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:20.915 10:14:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # return 0 00:25:20.915 10:14:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:20.915 10:14:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:20.915 10:14:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:20.915 10:14:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:20.915 10:14:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=516668 00:25:20.915 10:14:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:25:21.172 [2024-07-25 10:14:06.323640] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:21.429 10:14:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:25:21.687 Malloc0 00:25:21.687 10:14:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:25:21.943 10:14:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:22.200 10:14:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:22.457 [2024-07-25 10:14:07.455474] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:22.457 10:14:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:25:22.714 [2024-07-25 10:14:07.756317] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:22.714 10:14:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=516953 00:25:22.714 10:14:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:25:22.714 10:14:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 516953 /var/tmp/bdevperf.sock 00:25:22.714 10:14:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:25:22.714 10:14:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # '[' -z 516953 ']' 00:25:22.714 10:14:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:22.714 10:14:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:22.714 10:14:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:22.714 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:25:22.714 10:14:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:22.714 10:14:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:25:23.279 10:14:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:23.279 10:14:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # return 0 00:25:23.279 10:14:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:25:23.536 10:14:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:25:24.468 Nvme0n1 00:25:24.468 10:14:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:25:25.033 Nvme0n1 00:25:25.033 10:14:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:25:25.033 10:14:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:25:27.561 10:14:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:25:27.561 10:14:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:25:27.561 10:14:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:25:27.820 10:14:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:25:28.751 10:14:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:25:28.751 10:14:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:28.751 10:14:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:28.752 10:14:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:29.009 10:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:29.009 10:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:29.009 10:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:29.009 10:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:29.574 10:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:29.574 10:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:29.574 10:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:29.574 10:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:29.833 10:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:29.833 10:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:29.833 10:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:29.833 10:14:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:30.092 10:14:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:30.092 10:14:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:30.092 10:14:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:30.092 10:14:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:30.350 10:14:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:30.350 10:14:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:30.350 10:14:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:30.350 10:14:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:30.607 10:14:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:30.607 10:14:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:25:30.607 10:14:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:31.171 10:14:16 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:25:31.428 10:14:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:25:32.362 10:14:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:25:32.362 10:14:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:25:32.362 10:14:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:32.362 10:14:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:32.928 10:14:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:32.928 10:14:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:32.928 10:14:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:32.928 10:14:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:33.186 10:14:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:33.186 10:14:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:33.186 10:14:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:33.186 10:14:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:33.752 10:14:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:33.752 10:14:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:33.752 10:14:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:33.752 10:14:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:34.011 10:14:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:34.011 10:14:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:34.011 10:14:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:34.011 10:14:19 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:34.627 10:14:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:34.627 10:14:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:34.627 10:14:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:34.627 10:14:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:34.885 10:14:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:34.885 10:14:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:25:34.885 10:14:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:35.143 10:14:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:25:35.709 10:14:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:25:36.641 10:14:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:25:36.641 10:14:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:36.641 10:14:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:36.641 10:14:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:36.900 10:14:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:36.900 10:14:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:36.900 10:14:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:36.900 10:14:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:37.158 10:14:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:37.158 10:14:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:37.158 10:14:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:37.158 10:14:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:37.724 10:14:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:37.724 10:14:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:37.724 10:14:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:37.724 10:14:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:37.982 10:14:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:37.982 10:14:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:37.982 10:14:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:37.982 10:14:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:38.547 10:14:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:38.547 10:14:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:38.547 10:14:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:38.547 10:14:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:38.805 10:14:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:38.805 10:14:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:25:38.805 10:14:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:39.062 10:14:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:25:39.628 10:14:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:25:40.562 10:14:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:25:40.562 10:14:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:40.562 10:14:25 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:40.562 10:14:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:40.820 10:14:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:40.820 10:14:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:40.820 10:14:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:40.820 10:14:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:41.386 10:14:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:41.386 10:14:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:41.386 10:14:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:41.386 10:14:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:41.644 10:14:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:41.644 10:14:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:41.644 10:14:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:41.644 10:14:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:42.210 10:14:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:42.210 10:14:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:42.210 10:14:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:42.210 10:14:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:42.468 10:14:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:42.468 10:14:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:25:42.468 10:14:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:42.468 10:14:27 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:42.726 10:14:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:42.726 10:14:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:25:42.726 10:14:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:25:42.984 10:14:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:25:43.242 10:14:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:25:44.614 10:14:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:25:44.614 10:14:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:25:44.614 10:14:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:44.614 10:14:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:44.614 10:14:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:44.614 10:14:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:25:44.614 10:14:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:44.614 10:14:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:45.179 10:14:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:45.179 10:14:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:45.179 10:14:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:45.179 10:14:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:45.437 10:14:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:45.437 10:14:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:45.437 10:14:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:45.437 10:14:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:45.695 10:14:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:45.695 10:14:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:25:45.695 10:14:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:45.695 10:14:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:45.952 10:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:45.952 10:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:25:45.952 10:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:45.952 10:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:46.209 10:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:46.209 10:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:25:46.209 10:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:25:46.466 10:14:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:25:47.031 10:14:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:25:48.408 10:14:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:25:48.408 10:14:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:25:48.408 10:14:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:48.408 10:14:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:48.408 10:14:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:48.408 10:14:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:48.408 10:14:33 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:48.408 10:14:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:49.012 10:14:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:49.012 10:14:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:49.012 10:14:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:49.012 10:14:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:49.270 10:14:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:49.270 10:14:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:49.270 10:14:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:49.270 10:14:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:49.528 10:14:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:49.528 10:14:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:25:49.528 10:14:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:49.528 10:14:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:49.786 10:14:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:49.786 10:14:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:49.786 10:14:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:49.786 10:14:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:50.351 10:14:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:50.351 10:14:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:25:50.610 10:14:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # 
set_ANA_state optimized optimized 00:25:50.610 10:14:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:25:51.176 10:14:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:25:51.434 10:14:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:25:52.369 10:14:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:25:52.369 10:14:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:25:52.369 10:14:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:52.369 10:14:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:52.626 10:14:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:52.626 10:14:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:52.626 10:14:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:52.626 10:14:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:52.884 10:14:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:52.884 10:14:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:52.884 10:14:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:52.884 10:14:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:53.450 10:14:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:53.450 10:14:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:53.450 10:14:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:53.450 10:14:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:53.708 10:14:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:53.708 10:14:38 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:53.708 10:14:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:53.708 10:14:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:53.967 10:14:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:53.967 10:14:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:53.967 10:14:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:53.967 10:14:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:54.248 10:14:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:54.248 10:14:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:25:54.248 10:14:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:54.814 10:14:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:25:55.073 10:14:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:25:56.006 10:14:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:25:56.006 10:14:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:25:56.006 10:14:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:56.006 10:14:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:25:56.568 10:14:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:25:56.568 10:14:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:25:56.568 10:14:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:56.568 10:14:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:25:56.824 10:14:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:56.824 10:14:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:25:56.824 10:14:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:56.824 10:14:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:25:57.388 10:14:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:57.388 10:14:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:25:57.388 10:14:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:57.388 10:14:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:25:57.645 10:14:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:57.645 10:14:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:25:57.645 10:14:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:57.645 10:14:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:25:57.902 10:14:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:57.902 10:14:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:25:57.902 10:14:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:25:57.902 10:14:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:25:58.466 10:14:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:25:58.466 10:14:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:25:58.466 10:14:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:25:58.722 10:14:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:25:59.286 10:14:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 
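The xtrace above repeats one fixture per ANA phase: set_ANA_state flips the listener ANA state for ports 4420 and 4421 on the target, sleep 1 lets the host pick up the change, and check_status asserts the per-path view that the bdevperf app reports through bdev_nvme_get_io_paths on its RPC socket. What follows is a minimal sketch of those helpers as reconstructed from the trace lines (multipath_status.sh@59-73) -- the function names come from the trace, the bodies are inferred, and $rpc/$nqn are shorthand introduced only for this sketch:

  # Shorthand introduced for this sketch (not in the traced script)
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  nqn=nqn.2016-06.io.spdk:cnode1

  set_ANA_state() {   # $1 = ANA state for port 4420, $2 = ANA state for port 4421
      "$rpc" nvmf_subsystem_listener_set_ana_state "$nqn" -t tcp -a 10.0.0.2 -s 4420 -n "$1"
      "$rpc" nvmf_subsystem_listener_set_ana_state "$nqn" -t tcp -a 10.0.0.2 -s 4421 -n "$2"
  }

  port_status() {     # $1 = trsvcid, $2 = io_path field, $3 = expected value
      local status
      status=$("$rpc" -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths |
               jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$1\").$2")
      [[ "$status" == "$3" ]]
  }

  check_status() {    # args: current, connected, accessible -- each as a 4420/4421 pair
      port_status 4420 current    "$1" && port_status 4421 current    "$2" &&
      port_status 4420 connected  "$3" && port_status 4421 connected  "$4" &&
      port_status 4420 accessible "$5" && port_status 4421 accessible "$6"
  }

Under the default active_passive policy at most one path reports current==true at a time, so the first two expectations track whichever listener is optimized; after the bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active call earlier in the trace, every usable path can be current at once, which is why the @121 and @131 checks expect true for both ports.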
00:26:00.219 10:14:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:26:00.219 10:14:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:00.219 10:14:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:00.219 10:14:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:00.477 10:14:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:00.477 10:14:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:00.734 10:14:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:00.734 10:14:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:00.991 10:14:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:00.991 10:14:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:00.991 10:14:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:00.991 10:14:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:01.249 10:14:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:01.249 10:14:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:01.249 10:14:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:01.249 10:14:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:01.813 10:14:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:01.813 10:14:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:01.813 10:14:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:01.813 10:14:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:02.076 10:14:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:02.076 10:14:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:02.076 10:14:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:02.076 10:14:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:02.393 10:14:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:02.393 10:14:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:26:02.393 10:14:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:02.959 10:14:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:26:03.217 10:14:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:26:04.149 10:14:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:26:04.149 10:14:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:04.150 10:14:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:04.150 10:14:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:04.407 10:14:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:04.407 10:14:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:04.407 10:14:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:04.407 10:14:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:04.972 10:14:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:04.972 10:14:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:04.972 10:14:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:04.972 10:14:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:05.230 10:14:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == 
\t\r\u\e ]] 00:26:05.230 10:14:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:05.230 10:14:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:05.230 10:14:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:05.796 10:14:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:05.796 10:14:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:05.796 10:14:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:05.796 10:14:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:06.054 10:14:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:06.054 10:14:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:26:06.054 10:14:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:06.054 10:14:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:06.624 10:14:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:06.624 10:14:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 516953 00:26:06.624 10:14:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # '[' -z 516953 ']' 00:26:06.624 10:14:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # kill -0 516953 00:26:06.624 10:14:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # uname 00:26:06.624 10:14:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:06.624 10:14:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 516953 00:26:06.624 10:14:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:26:06.624 10:14:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:26:06.624 10:14:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # echo 'killing process with pid 516953' 00:26:06.624 killing process with pid 516953 00:26:06.624 10:14:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@969 -- # kill 516953 00:26:06.624 10:14:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@974 -- # wait 516953 00:26:06.624 Connection closed with partial response: 00:26:06.624 00:26:06.624 00:26:06.624 
10:14:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 516953 00:26:06.624 10:14:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:26:06.624 [2024-07-25 10:14:07.826755] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:26:06.624 [2024-07-25 10:14:07.826863] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid516953 ] 00:26:06.624 EAL: No free 2048 kB hugepages reported on node 1 00:26:06.624 [2024-07-25 10:14:07.900631] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:06.624 [2024-07-25 10:14:08.022945] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:26:06.624 Running I/O for 90 seconds... 00:26:06.624 [2024-07-25 10:14:28.026741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:86128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.624 [2024-07-25 10:14:28.026802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:26:06.624 [2024-07-25 10:14:28.026880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:86136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.624 [2024-07-25 10:14:28.026901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:06.624 [2024-07-25 10:14:28.026924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:86144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.624 [2024-07-25 10:14:28.026941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:26:06.624 [2024-07-25 10:14:28.026966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:86152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.624 [2024-07-25 10:14:28.026982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:26:06.624 [2024-07-25 10:14:28.027004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:86160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.624 [2024-07-25 10:14:28.027020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:26:06.624 [2024-07-25 10:14:28.027041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:86168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.624 [2024-07-25 10:14:28.027057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:26:06.624 [2024-07-25 10:14:28.027079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:86176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.624 [2024-07-25 10:14:28.027096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 
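Each pair of lines in this dump is an aborted I/O: print_command shows the submitted WRITE or READ and print_completion shows it finishing with ASYMMETRIC ACCESS INACCESSIBLE (03/02), i.e. Status Code Type 0x3 (Path Related Status) / Status Code 0x02 (Asymmetric Access Inaccessible). These are the commands that were in flight on qid:1 when the listener's ANA group was flipped to inaccessible; path-related statuses are retryable, so the multipath bdev can resubmit them on the remaining path instead of failing bdevperf. A hypothetical one-liner for summarizing such a dump by completion status (log-triage helper, not part of the test):

  grep -oE '\*NOTICE\*: [A-Z ]+ \([0-9a-f]{2}/[0-9a-f]{2}\)' try.txt |
      sort | uniq -c | sort -rn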
00:26:06.624 [2024-07-25 10:14:28.027118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:86184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.624 [2024-07-25 10:14:28.027134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:06.624 [2024-07-25 10:14:28.027422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:86192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.624 [2024-07-25 10:14:28.027466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:06.624 [2024-07-25 10:14:28.027496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:86200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.624 [2024-07-25 10:14:28.027525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:26:06.624 [2024-07-25 10:14:28.027548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:86208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.625 [2024-07-25 10:14:28.027579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:26:06.625 [2024-07-25 10:14:28.027603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:86216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.625 [2024-07-25 10:14:28.027620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:26:06.625 [2024-07-25 10:14:28.027643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:85432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.625 [2024-07-25 10:14:28.027659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:06.625 [2024-07-25 10:14:28.027683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:85440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.625 [2024-07-25 10:14:28.027699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:26:06.625 [2024-07-25 10:14:28.027722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:85448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.625 [2024-07-25 10:14:28.027754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:26:06.625 [2024-07-25 10:14:28.027777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:85456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.625 [2024-07-25 10:14:28.027793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:06.625 [2024-07-25 10:14:28.027816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:85464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.625 [2024-07-25 10:14:28.027832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:44 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:26:06.625 [2024-07-25 10:14:28.027854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:85472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.625 [2024-07-25 10:14:28.027869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:26:06.625 [2024-07-25 10:14:28.027891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:85480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.625 [2024-07-25 10:14:28.027907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:26:06.625 [2024-07-25 10:14:28.027929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:85488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.625 [2024-07-25 10:14:28.027945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:26:06.625 [2024-07-25 10:14:28.027967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:85496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.625 [2024-07-25 10:14:28.027983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:26:06.625 [2024-07-25 10:14:28.028004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:85504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.625 [2024-07-25 10:14:28.028020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:26:06.625 [2024-07-25 10:14:28.028041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:85512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.625 [2024-07-25 10:14:28.028057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:26:06.625 [2024-07-25 10:14:28.028084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:85520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.625 [2024-07-25 10:14:28.028100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:26:06.625 [2024-07-25 10:14:28.028122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:85528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.625 [2024-07-25 10:14:28.028138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:06.625 [2024-07-25 10:14:28.028230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:85536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.625 [2024-07-25 10:14:28.028252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:26:06.625 [2024-07-25 10:14:28.028279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:85544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.625 [2024-07-25 10:14:28.028297] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:06.625 [2024-07-25 10:14:28.028320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:85552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.625 [2024-07-25 10:14:28.028337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:06.625 [2024-07-25 10:14:28.028360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:86224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.625 [2024-07-25 10:14:28.028376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:26:06.625 [2024-07-25 10:14:28.028399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:86232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.625 [2024-07-25 10:14:28.028437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:26:06.625 [2024-07-25 10:14:28.028464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:86240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.625 [2024-07-25 10:14:28.028481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:06.625 [2024-07-25 10:14:28.028504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:86248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.625 [2024-07-25 10:14:28.028521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:06.625 [2024-07-25 10:14:28.029320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:86256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.625 [2024-07-25 10:14:28.029341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:26:06.625 [2024-07-25 10:14:28.029369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:86264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.625 [2024-07-25 10:14:28.029386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:26:06.625 [2024-07-25 10:14:28.029410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:86272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.625 [2024-07-25 10:14:28.029426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:26:06.625 [2024-07-25 10:14:28.029482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:86280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.625 [2024-07-25 10:14:28.029500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:26:06.625 [2024-07-25 10:14:28.029524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:86288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
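The workload shape is also visible here: WRITE commands step up from lba:86128 in increments of 8 blocks (len:8, matching the 0x1000-byte SGL, i.e. 4 KiB I/O on 512-byte blocks), while READ commands do the same from lba:85432. A hypothetical extraction pipeline for eyeballing that progression in try.txt (GNU grep/sed assumed; not part of the test):

  grep -oE '(WRITE|READ) sqid:[0-9]+ cid:[0-9]+ nsid:[0-9]+ lba:[0-9]+ len:[0-9]+' try.txt |
      sed -E 's/ sqid:[0-9]+ cid:[0-9]+ nsid:[0-9]+ lba:/ /; s/ len:/ /' |
      sort -k1,1 -k2,2n | uniq -c

Counts greater than 1 in the output would mean the same LBA surfaced in more than one dump entry, e.g. an I/O that was aborted and printed again after a retry.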
00:26:06.625 [2024-07-25 10:14:28.029540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:26:06.625 [2024-07-25 10:14:28.029564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:86296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.625 [2024-07-25 10:14:28.029581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:26:06.625 [2024-07-25 10:14:28.029604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:86304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.625 [2024-07-25 10:14:28.029621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:26:06.625 [2024-07-25 10:14:28.029645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:86312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.625 [2024-07-25 10:14:28.029661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:06.625 [2024-07-25 10:14:28.029685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:86320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.625 [2024-07-25 10:14:28.029702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:06.625 [2024-07-25 10:14:28.029726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:86328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.625 [2024-07-25 10:14:28.029757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:26:06.625 [2024-07-25 10:14:28.029781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:86336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.625 [2024-07-25 10:14:28.029797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:26:06.625 [2024-07-25 10:14:28.029820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:86344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.625 [2024-07-25 10:14:28.029835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:26:06.625 [2024-07-25 10:14:28.029858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:86352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.626 [2024-07-25 10:14:28.029874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:26:06.626 [2024-07-25 10:14:28.029897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:86360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.626 [2024-07-25 10:14:28.029913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:26:06.626 [2024-07-25 10:14:28.029936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 
lba:86368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.626 [2024-07-25 10:14:28.029951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:26:06.626 [2024-07-25 10:14:28.029974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:86376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.626 [2024-07-25 10:14:28.029994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:26:06.626 [2024-07-25 10:14:28.030018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:86384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.626 [2024-07-25 10:14:28.030034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:26:06.626 [2024-07-25 10:14:28.030107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:86392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.626 [2024-07-25 10:14:28.030126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:26:06.626 [2024-07-25 10:14:28.030154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:86400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.626 [2024-07-25 10:14:28.030171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:26:06.626 [2024-07-25 10:14:28.030196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:86408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.626 [2024-07-25 10:14:28.030212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:26:06.626 [2024-07-25 10:14:28.030236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:86416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.626 [2024-07-25 10:14:28.030252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:26:06.626 [2024-07-25 10:14:28.030277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:86424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.626 [2024-07-25 10:14:28.030293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:26:06.626 [2024-07-25 10:14:28.030317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:86432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.626 [2024-07-25 10:14:28.030332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:26:06.626 [2024-07-25 10:14:28.030357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:86440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.626 [2024-07-25 10:14:28.030372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:26:06.626 [2024-07-25 10:14:28.030397] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:85560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.626 [2024-07-25 10:14:28.030412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:26:06.626 [2024-07-25 10:14:28.030460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:85568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.626 [2024-07-25 10:14:28.030478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:06.626 [2024-07-25 10:14:28.030505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:85576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.626 [2024-07-25 10:14:28.030522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:26:06.626 [2024-07-25 10:14:28.030547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:85584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.626 [2024-07-25 10:14:28.030568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:26:06.626 [2024-07-25 10:14:28.030594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:85592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.626 [2024-07-25 10:14:28.030611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:26:06.626 [2024-07-25 10:14:28.030636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:85600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.626 [2024-07-25 10:14:28.030653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:26:06.626 [2024-07-25 10:14:28.030678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:85608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.626 [2024-07-25 10:14:28.030694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:26:06.626 [2024-07-25 10:14:28.030719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:85616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.626 [2024-07-25 10:14:28.030750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:26:06.626 [2024-07-25 10:14:28.030776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:85624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.626 [2024-07-25 10:14:28.030792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:26:06.626 [2024-07-25 10:14:28.030816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:85632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.626 [2024-07-25 10:14:28.030832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:005b p:0 m:0 
dnr:0 00:26:06.626 [2024-07-25 10:14:28.030856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:85640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.626 [2024-07-25 10:14:28.030872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:06.626 [2024-07-25 10:14:28.030897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:85648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.626 [2024-07-25 10:14:28.030912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:26:06.626 [2024-07-25 10:14:28.030937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:85656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.626 [2024-07-25 10:14:28.030953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:26:06.626 [2024-07-25 10:14:28.030977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:85664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.626 [2024-07-25 10:14:28.030993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:26:06.626 [2024-07-25 10:14:28.031018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:85672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.626 [2024-07-25 10:14:28.031034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:26:06.626 [2024-07-25 10:14:28.031058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:85680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.626 [2024-07-25 10:14:28.031078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:06.626 [2024-07-25 10:14:28.031103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:85688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.626 [2024-07-25 10:14:28.031119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:06.626 [2024-07-25 10:14:28.031144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:85696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.626 [2024-07-25 10:14:28.031160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:26:06.626 [2024-07-25 10:14:28.031185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:85704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.626 [2024-07-25 10:14:28.031200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:26:06.626 [2024-07-25 10:14:28.031225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:85712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.626 [2024-07-25 10:14:28.031241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:26:06.626 [2024-07-25 10:14:28.031265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:85720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.626 [2024-07-25 10:14:28.031282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:26:06.626 [2024-07-25 10:14:28.031307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:85728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.626 [2024-07-25 10:14:28.031323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:26:06.626 [2024-07-25 10:14:28.031347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:85736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.626 [2024-07-25 10:14:28.031363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:26:06.626 [2024-07-25 10:14:28.031387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:85744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.626 [2024-07-25 10:14:28.031403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:26:06.626 [2024-07-25 10:14:28.031434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:85752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.626 [2024-07-25 10:14:28.031467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:26:06.626 [2024-07-25 10:14:28.031494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:85760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.626 [2024-07-25 10:14:28.031511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:26:06.627 [2024-07-25 10:14:28.031537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:85768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.627 [2024-07-25 10:14:28.031553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:06.627 [2024-07-25 10:14:28.031579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:85776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.627 [2024-07-25 10:14:28.031595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:06.627 [2024-07-25 10:14:28.031630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:85784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.627 [2024-07-25 10:14:28.031647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:26:06.627 [2024-07-25 10:14:28.031673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:85792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.627 [2024-07-25 10:14:28.031690] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:26:06.627 [2024-07-25 10:14:28.031715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:85800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.627 [2024-07-25 10:14:28.031746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:26:06.627 [2024-07-25 10:14:28.031773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:85808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.627 [2024-07-25 10:14:28.031789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:26:06.627 [2024-07-25 10:14:28.031814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:85816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.627 [2024-07-25 10:14:28.031831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:06.627 [2024-07-25 10:14:28.031856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:85824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.627 [2024-07-25 10:14:28.031873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:26:06.627 [2024-07-25 10:14:28.031897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:85832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.627 [2024-07-25 10:14:28.031913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:26:06.627 [2024-07-25 10:14:28.031939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:85840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.627 [2024-07-25 10:14:28.031955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:26:06.627 [2024-07-25 10:14:28.031982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:85848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.627 [2024-07-25 10:14:28.031998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:26:06.627 [2024-07-25 10:14:28.032023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:85856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.627 [2024-07-25 10:14:28.032039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:26:06.627 [2024-07-25 10:14:28.032064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:85864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.627 [2024-07-25 10:14:28.032081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:06.627 [2024-07-25 10:14:28.032106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:85872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:26:06.627 [2024-07-25 10:14:28.032123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:26:06.627 [2024-07-25 10:14:28.032151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:85880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.627 [2024-07-25 10:14:28.032169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:06.627 [2024-07-25 10:14:28.032194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:85888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.627 [2024-07-25 10:14:28.032211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:06.627 [2024-07-25 10:14:28.032236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:85896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.627 [2024-07-25 10:14:28.032252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:06.627 [2024-07-25 10:14:28.032277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:85904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.627 [2024-07-25 10:14:28.032293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:06.627 [2024-07-25 10:14:28.032319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:85912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.627 [2024-07-25 10:14:28.032335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:06.627 [2024-07-25 10:14:28.032361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:85920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.627 [2024-07-25 10:14:28.032377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:06.627 [2024-07-25 10:14:28.032402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:85928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.627 [2024-07-25 10:14:28.032439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.627 [2024-07-25 10:14:28.032468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:85936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.627 [2024-07-25 10:14:28.032484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:06.627 [2024-07-25 10:14:28.032510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:85944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.627 [2024-07-25 10:14:28.032526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:06.627 [2024-07-25 10:14:28.032551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 
nsid:1 lba:85952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.627 [2024-07-25 10:14:28.032568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:26:06.627 [2024-07-25 10:14:28.032593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:85960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.627 [2024-07-25 10:14:28.032609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:26:06.627 [2024-07-25 10:14:28.032635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:85968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.627 [2024-07-25 10:14:28.032651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:26:06.627 [2024-07-25 10:14:28.032678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:85976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.627 [2024-07-25 10:14:28.032698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:06.627 [2024-07-25 10:14:28.032739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:85984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.627 [2024-07-25 10:14:28.032756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:26:06.627 [2024-07-25 10:14:28.032782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:85992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.627 [2024-07-25 10:14:28.032797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:26:06.627 [2024-07-25 10:14:28.032822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:86000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.627 [2024-07-25 10:14:28.032838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:06.627 [2024-07-25 10:14:28.032863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:86008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.627 [2024-07-25 10:14:28.032879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:26:06.627 [2024-07-25 10:14:28.033053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:86016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.627 [2024-07-25 10:14:28.033074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:26:06.627 [2024-07-25 10:14:28.033108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:86024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.627 [2024-07-25 10:14:28.033128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:26:06.627 [2024-07-25 10:14:28.033159] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:86032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.627 [2024-07-25 10:14:28.033175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:26:06.627 [2024-07-25 10:14:28.033205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:86040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.627 [2024-07-25 10:14:28.033228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:26:06.627 [2024-07-25 10:14:28.033258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:86048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.627 [2024-07-25 10:14:28.033274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:26:06.627 [2024-07-25 10:14:28.033304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:86056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.627 [2024-07-25 10:14:28.033320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:26:06.627 [2024-07-25 10:14:28.033349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:86448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.627 [2024-07-25 10:14:28.033366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:26:06.627 [2024-07-25 10:14:28.033395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:86064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.628 [2024-07-25 10:14:28.033416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:26:06.628 [2024-07-25 10:14:28.033472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:86072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.628 [2024-07-25 10:14:28.033491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:26:06.628 [2024-07-25 10:14:28.033522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:86080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.628 [2024-07-25 10:14:28.033539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:26:06.628 [2024-07-25 10:14:28.033569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:86088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.628 [2024-07-25 10:14:28.033587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:26:06.628 [2024-07-25 10:14:28.033619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:86096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.628 [2024-07-25 10:14:28.033636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0016 p:0 m:0 
dnr:0 00:26:06.628 [2024-07-25 10:14:28.033666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:86104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.628 [2024-07-25 10:14:28.033683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:26:06.628 [2024-07-25 10:14:28.033714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:86112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.628 [2024-07-25 10:14:28.033731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:06.628 [2024-07-25 10:14:28.033762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:86120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.628 [2024-07-25 10:14:28.033778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:26:06.628 [2024-07-25 10:14:48.181922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:6656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.628 [2024-07-25 10:14:48.181984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:06.628 [2024-07-25 10:14:48.182080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.628 [2024-07-25 10:14:48.182102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:06.628 [2024-07-25 10:14:48.182135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:6720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:06.628 [2024-07-25 10:14:48.182151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:06.628 [2024-07-25 10:14:48.182173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:6752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.628 [2024-07-25 10:14:48.182190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:06.628 [2024-07-25 10:14:48.182211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:6768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.628 [2024-07-25 10:14:48.182228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:06.628 [2024-07-25 10:14:48.182261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:6784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.628 [2024-07-25 10:14:48.182278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:06.628 [2024-07-25 10:14:48.182299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:6800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.628 [2024-07-25 10:14:48.182315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:5 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:06.628 [2024-07-25 10:14:48.182337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:6816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.628 [2024-07-25 10:14:48.182353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:06.628 [2024-07-25 10:14:48.182374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:6832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.628 [2024-07-25 10:14:48.182390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:06.628 [2024-07-25 10:14:48.182412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:6848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.628 [2024-07-25 10:14:48.182434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:26:06.628 [2024-07-25 10:14:48.182475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:6864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.628 [2024-07-25 10:14:48.182492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:26:06.628 [2024-07-25 10:14:48.182514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:6880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.628 [2024-07-25 10:14:48.182531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:26:06.628 [2024-07-25 10:14:48.182553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:6896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.628 [2024-07-25 10:14:48.182570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:26:06.628 [2024-07-25 10:14:48.182592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:6912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.628 [2024-07-25 10:14:48.182608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:26:06.628 [2024-07-25 10:14:48.182630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:6928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.628 [2024-07-25 10:14:48.182647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:26:06.628 [2024-07-25 10:14:48.182669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:6944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.628 [2024-07-25 10:14:48.182685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:26:06.628 [2024-07-25 10:14:48.182707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:6960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.628 [2024-07-25 10:14:48.182723] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:26:06.628 [2024-07-25 10:14:48.182765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:6976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.628 [2024-07-25 10:14:48.182783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:26:06.628 [2024-07-25 10:14:48.182803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:6992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.628 [2024-07-25 10:14:48.182819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:26:06.628 [2024-07-25 10:14:48.182840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:7008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.628 [2024-07-25 10:14:48.182855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:26:06.628 [2024-07-25 10:14:48.182876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:7024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.628 [2024-07-25 10:14:48.182892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:26:06.628 [2024-07-25 10:14:48.182913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:7040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.628 [2024-07-25 10:14:48.182929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:26:06.628 [2024-07-25 10:14:48.182949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:7056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.628 [2024-07-25 10:14:48.182966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:26:06.628 [2024-07-25 10:14:48.182987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:7072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.628 [2024-07-25 10:14:48.183003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:26:06.628 [2024-07-25 10:14:48.183024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:7088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.628 [2024-07-25 10:14:48.183040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:26:06.628 [2024-07-25 10:14:48.183062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:7104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.628 [2024-07-25 10:14:48.183078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:26:06.628 [2024-07-25 10:14:48.183099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:7120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.628 [2024-07-25 10:14:48.183115] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:26:06.628 [2024-07-25 10:14:48.183136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:7136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.628 [2024-07-25 10:14:48.183151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:26:06.628 [2024-07-25 10:14:48.183173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:7152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.628 [2024-07-25 10:14:48.183189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:26:06.628 [2024-07-25 10:14:48.183210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:7168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.628 [2024-07-25 10:14:48.183229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:26:06.629 [2024-07-25 10:14:48.183251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:7184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.629 [2024-07-25 10:14:48.183278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:26:06.629 [2024-07-25 10:14:48.183777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:7200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.629 [2024-07-25 10:14:48.183801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:26:06.629 [2024-07-25 10:14:48.183827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:7216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.629 [2024-07-25 10:14:48.183845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:26:06.630 [2024-07-25 10:14:48.183866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:7232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.630 [2024-07-25 10:14:48.183882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:26:06.630 [2024-07-25 10:14:48.183903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:7248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.630 [2024-07-25 10:14:48.183919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:26:06.630 [2024-07-25 10:14:48.183940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:7264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.630 [2024-07-25 10:14:48.183956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:26:06.630 [2024-07-25 10:14:48.183978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:7280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:26:06.630 [2024-07-25 10:14:48.183994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:26:06.630 [2024-07-25 10:14:48.184015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:7296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.630 [2024-07-25 10:14:48.184031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:26:06.630 [2024-07-25 10:14:48.184053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:7312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.630 [2024-07-25 10:14:48.184068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:26:06.630 [2024-07-25 10:14:48.184089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:7328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.630 [2024-07-25 10:14:48.184105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:06.630 [2024-07-25 10:14:48.184126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:7344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.630 [2024-07-25 10:14:48.184141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:06.630 [2024-07-25 10:14:48.184162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:7360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.630 [2024-07-25 10:14:48.184183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:26:06.630 [2024-07-25 10:14:48.184205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:7376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.630 [2024-07-25 10:14:48.184221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:26:06.630 [2024-07-25 10:14:48.184242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:7392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.630 [2024-07-25 10:14:48.184258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:26:06.630 [2024-07-25 10:14:48.184279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.630 [2024-07-25 10:14:48.184295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:26:06.630 [2024-07-25 10:14:48.184317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:7424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.630 [2024-07-25 10:14:48.184332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:26:06.630 [2024-07-25 10:14:48.184354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:7440 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.630 [2024-07-25 10:14:48.184370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:26:06.630 [2024-07-25 10:14:48.184393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:7456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.630 [2024-07-25 10:14:48.184424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:26:06.630 [2024-07-25 10:14:48.184459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:7472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.630 [2024-07-25 10:14:48.184478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:26:06.630 [2024-07-25 10:14:48.184500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:7488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.630 [2024-07-25 10:14:48.184516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:26:06.630 [2024-07-25 10:14:48.184537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:7504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.630 [2024-07-25 10:14:48.184553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:26:06.630 [2024-07-25 10:14:48.184575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:7520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.630 [2024-07-25 10:14:48.184591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:26:06.630 [2024-07-25 10:14:48.184612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:7536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.630 [2024-07-25 10:14:48.184628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:26:06.630 [2024-07-25 10:14:48.184650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:7552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.630 [2024-07-25 10:14:48.184666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:26:06.630 [2024-07-25 10:14:48.184693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:7568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.630 [2024-07-25 10:14:48.184709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:26:06.630 [2024-07-25 10:14:48.184746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:7584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.630 [2024-07-25 10:14:48.184762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:26:06.630 [2024-07-25 10:14:48.184783] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:7600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.630 [2024-07-25 10:14:48.184798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:26:06.630 [2024-07-25 10:14:48.184819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:7616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.630 [2024-07-25 10:14:48.184835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:26:06.630 [2024-07-25 10:14:48.184856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:7632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.630 [2024-07-25 10:14:48.184871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:26:06.630 [2024-07-25 10:14:48.184892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:7648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.630 [2024-07-25 10:14:48.184908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:26:06.630 [2024-07-25 10:14:48.184929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:7656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.630 [2024-07-25 10:14:48.184945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:26:06.630 [2024-07-25 10:14:48.184966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:7664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.630 [2024-07-25 10:14:48.184981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:26:06.630 [2024-07-25 10:14:48.185002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:7680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.630 [2024-07-25 10:14:48.185024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:26:06.630 [2024-07-25 10:14:48.187046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:7696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:06.630 [2024-07-25 10:14:48.187070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:26:06.630 Received shutdown signal, test time was about 41.105325 seconds 00:26:06.630 00:26:06.630 Latency(us) 00:26:06.630 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:06.630 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:26:06.630 Verification LBA range: start 0x0 length 0x4000 00:26:06.630 Nvme0n1 : 41.10 8324.68 32.52 0.00 0.00 15349.53 206.32 5020737.23 00:26:06.630 =================================================================================================================== 00:26:06.630 Total : 8324.68 32.52 0.00 0.00 15349.53 206.32 5020737.23 00:26:06.631 10:14:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # 
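The summary row can be cross-checked by hand: MiB/s follows from IOPS times the 4096-byte IO size, and queue depth over average latency reproduces roughly the same IOPS by Little's law. A quick sketch (numbers taken from the table above; this aside is not part of the harness output):

    awk 'BEGIN {
        iops = 8324.68; io_size = 4096; runtime_s = 41.10
        qd = 128; avg_lat_us = 15349.53
        printf "MiB/s           : %.2f\n", iops * io_size / (1024 * 1024)  # 32.52, matches the table
        printf "Little-law IOPS : %.0f\n", qd / (avg_lat_us / 1e6)         # ~8339, close to 8324.68
        printf "I/Os completed  : %.0f\n", iops * runtime_s                # ~342144 over the 41.1 s run
    }'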
00:26:06.631 10:14:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:26:07.196 10:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT
00:26:07.196 10:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:26:07.196 10:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini
00:26:07.196 10:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup
00:26:07.196 10:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync
00:26:07.196 10:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:26:07.196 10:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e
00:26:07.196 10:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20}
00:26:07.196 10:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:26:07.196 rmmod nvme_tcp
00:26:07.196 rmmod nvme_fabrics
00:26:07.196 rmmod nvme_keyring
00:26:07.196 10:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:26:07.196 10:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e
00:26:07.196 10:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0
00:26:07.196 10:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 516668 ']'
00:26:07.196 10:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 516668
00:26:07.196 10:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # '[' -z 516668 ']'
00:26:07.196 10:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # kill -0 516668
00:26:07.196 10:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # uname
00:26:07.196 10:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:26:07.196 10:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 516668
00:26:07.196 10:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:26:07.196 10:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:26:07.196 10:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # echo 'killing process with pid 516668'
00:26:07.196 killing process with pid 516668
00:26:07.196 10:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@969 -- # kill 516668
00:26:07.196 10:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@974 -- # wait 516668
00:26:07.454 10:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:26:07.454 10:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:26:07.454 10:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:26:07.454 10:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:26:07.454 10:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns
00:26:07.454 10:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:26:07.454 10:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:26:07.454 10:14:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:26:09.984 10:14:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:26:09.984
00:26:09.984 real 0m51.816s
00:26:09.984 user 2m40.615s
00:26:09.984 sys 0m14.282s
00:26:09.984 10:14:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1126 -- # xtrace_disable
00:26:09.984 10:14:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:26:09.984 ************************************
00:26:09.984 END TEST nvmf_host_multipath_status
00:26:09.984 ************************************
00:26:09.984 10:14:54 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp
00:26:09.984 10:14:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:26:09.984 10:14:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable
00:26:09.984 10:14:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:26:09.984 ************************************
00:26:09.984 START TEST nvmf_discovery_remove_ifc
00:26:09.984 ************************************
00:26:09.984 10:14:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp
00:26:09.984 * Looking for test storage...
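The real/user/sys triple and the START TEST / END TEST banners around each test come from a run_test-style wrapper that times the test script it is handed. A minimal sketch of that pattern (names and banner widths assumed for illustration; this is not the actual autotest_common.sh implementation):

    run_test() {
        # Print an opening banner, time the wrapped test command, then close the banner.
        local test_name=$1; shift
        echo "************************************"
        echo "START TEST $test_name"
        echo "************************************"
        time "$@"          # bash's time keyword emits the real/user/sys lines seen above
        local rc=$?
        echo "************************************"
        echo "END TEST $test_name"
        echo "************************************"
        return $rc
    }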
00:26:09.984 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:09.984 10:14:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:09.984 10:14:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:26:09.984 10:14:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:09.984 10:14:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:09.984 10:14:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:09.984 10:14:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:09.984 10:14:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:09.984 10:14:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:09.984 10:14:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:09.984 10:14:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:09.984 10:14:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:09.984 10:14:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:09.985 10:14:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:26:09.985 10:14:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:26:09.985 10:14:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:09.985 10:14:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:09.985 10:14:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:09.985 10:14:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:09.985 10:14:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:09.985 10:14:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:09.985 10:14:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:09.985 10:14:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:09.985 10:14:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:09.985 10:14:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:09.985 10:14:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:09.985 10:14:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:26:09.985 10:14:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:09.985 10:14:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0 00:26:09.985 10:14:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:09.985 10:14:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:09.985 10:14:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:09.985 10:14:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:09.985 10:14:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:09.985 10:14:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']' 
00:26:09.985 10:14:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:09.985 10:14:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:09.985 10:14:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:26:09.985 10:14:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:26:09.985 10:14:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:26:09.985 10:14:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:26:09.985 10:14:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:26:09.985 10:14:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:26:09.985 10:14:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:26:09.985 10:14:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:09.985 10:14:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:09.985 10:14:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:09.985 10:14:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:09.985 10:14:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:09.985 10:14:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:09.985 10:14:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:09.985 10:14:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:09.985 10:14:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:09.985 10:14:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:09.985 10:14:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@285 -- # xtrace_disable 00:26:09.985 10:14:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:11.887 10:14:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:11.887 10:14:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # pci_devs=() 00:26:11.887 10:14:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:11.887 10:14:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:11.887 10:14:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:11.887 10:14:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:11.887 10:14:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:11.887 10:14:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # net_devs=() 00:26:11.887 10:14:56 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:11.887 10:14:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # e810=() 00:26:11.887 10:14:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # local -ga e810 00:26:11.887 10:14:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # x722=() 00:26:11.887 10:14:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # local -ga x722 00:26:11.887 10:14:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # mlx=() 00:26:11.887 10:14:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # local -ga mlx 00:26:11.887 10:14:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:11.887 10:14:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:11.887 10:14:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:11.887 10:14:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:11.887 10:14:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:11.887 10:14:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:11.887 10:14:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:11.887 10:14:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:11.887 10:14:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:11.887 10:14:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:11.887 10:14:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:11.887 10:14:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:11.887 10:14:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:11.887 10:14:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:11.887 10:14:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:11.887 10:14:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:11.887 10:14:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:11.887 10:14:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:11.887 10:14:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:26:11.887 Found 0000:84:00.0 (0x8086 - 0x159b) 00:26:11.887 10:14:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:11.887 10:14:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:11.887 10:14:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:26:11.887 10:14:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:11.887 10:14:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:11.887 10:14:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:11.887 10:14:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:26:11.887 Found 0000:84:00.1 (0x8086 - 0x159b) 00:26:11.887 10:14:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:11.887 10:14:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:11.887 10:14:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:11.887 10:14:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:11.887 10:14:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:11.887 10:14:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:11.887 10:14:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:11.887 10:14:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:11.887 10:14:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:11.887 10:14:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:11.887 10:14:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:11.887 10:14:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:11.887 10:14:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:11.887 10:14:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:11.887 10:14:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:11.887 10:14:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:26:11.887 Found net devices under 0000:84:00.0: cvl_0_0 00:26:11.887 10:14:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:11.887 10:14:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:11.888 10:14:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:11.888 10:14:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:11.888 10:14:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:11.888 10:14:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:11.888 10:14:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:11.888 10:14:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:11.888 
10:14:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:26:11.888 Found net devices under 0000:84:00.1: cvl_0_1 00:26:11.888 10:14:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:11.888 10:14:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:11.888 10:14:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # is_hw=yes 00:26:11.888 10:14:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:11.888 10:14:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:11.888 10:14:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:11.888 10:14:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:11.888 10:14:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:11.888 10:14:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:11.888 10:14:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:11.888 10:14:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:11.888 10:14:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:11.888 10:14:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:11.888 10:14:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:11.888 10:14:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:11.888 10:14:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:11.888 10:14:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:11.888 10:14:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:11.888 10:14:56 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:11.888 10:14:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:11.888 10:14:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:12.146 10:14:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:12.146 10:14:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:12.146 10:14:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:12.146 10:14:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:12.146 10:14:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:12.146 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:26:12.146 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.272 ms 00:26:12.146 00:26:12.146 --- 10.0.0.2 ping statistics --- 00:26:12.146 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:12.146 rtt min/avg/max/mdev = 0.272/0.272/0.272/0.000 ms 00:26:12.146 10:14:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:12.146 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:12.146 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.188 ms 00:26:12.146 00:26:12.146 --- 10.0.0.1 ping statistics --- 00:26:12.146 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:12.146 rtt min/avg/max/mdev = 0.188/0.188/0.188/0.000 ms 00:26:12.146 10:14:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:12.146 10:14:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # return 0 00:26:12.146 10:14:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:12.146 10:14:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:12.146 10:14:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:12.146 10:14:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:12.146 10:14:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:12.146 10:14:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:12.146 10:14:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:12.146 10:14:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:26:12.146 10:14:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:12.146 10:14:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:12.146 10:14:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:12.146 10:14:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=524177 00:26:12.146 10:14:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:26:12.146 10:14:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 524177 00:26:12.146 10:14:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # '[' -z 524177 ']' 00:26:12.146 10:14:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:12.146 10:14:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:12.146 10:14:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:12.146 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:26:12.146 10:14:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:12.146 10:14:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:12.146 [2024-07-25 10:14:57.205915] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:26:12.146 [2024-07-25 10:14:57.206013] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:12.146 EAL: No free 2048 kB hugepages reported on node 1 00:26:12.146 [2024-07-25 10:14:57.283008] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:12.405 [2024-07-25 10:14:57.404018] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:12.405 [2024-07-25 10:14:57.404077] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:12.405 [2024-07-25 10:14:57.404095] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:12.405 [2024-07-25 10:14:57.404109] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:12.405 [2024-07-25 10:14:57.404121] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:12.405 [2024-07-25 10:14:57.404152] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:12.405 10:14:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:12.405 10:14:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # return 0 00:26:12.405 10:14:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:12.405 10:14:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:12.405 10:14:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:12.405 10:14:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:12.405 10:14:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:26:12.405 10:14:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:12.405 10:14:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:12.663 [2024-07-25 10:14:57.572234] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:12.663 [2024-07-25 10:14:57.580481] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:26:12.663 null0 00:26:12.663 [2024-07-25 10:14:57.612370] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:12.663 10:14:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:12.663 10:14:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=524240 00:26:12.663 10:14:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 524240 /tmp/host.sock 00:26:12.663 10:14:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@831 -- # '[' -z 524240 ']' 00:26:12.663 10:14:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # local rpc_addr=/tmp/host.sock 00:26:12.663 10:14:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:26:12.663 10:14:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:12.663 10:14:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:26:12.663 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:26:12.663 10:14:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:12.663 10:14:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:12.663 [2024-07-25 10:14:57.685185] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:26:12.663 [2024-07-25 10:14:57.685275] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid524240 ] 00:26:12.663 EAL: No free 2048 kB hugepages reported on node 1 00:26:12.663 [2024-07-25 10:14:57.756572] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:12.921 [2024-07-25 10:14:57.879911] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:12.921 10:14:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:12.921 10:14:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # return 0 00:26:12.921 10:14:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:12.921 10:14:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:26:12.921 10:14:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:12.921 10:14:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:12.921 10:14:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:12.921 10:14:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:26:12.921 10:14:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:12.921 10:14:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:12.921 10:14:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:12.921 10:14:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:26:12.921 
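The host side is driven entirely over JSON-RPC: nvmf_tgt is started with --wait-for-rpc on /tmp/host.sock, options are set, the framework is released, and discovery is attached. A sketch of the same sequence, assuming rpc_cmd in the trace wraps scripts/rpc.py as in SPDK's test harness:

  RPC="scripts/rpc.py -s /tmp/host.sock"
  $RPC bdev_nvme_set_options -e 1        # option string as traced above
  $RPC framework_start_init              # release the app from --wait-for-rpc
  # attach via the discovery service on 10.0.0.2:8009; the short timeouts
  # are what make the interface loss below surface within a couple of seconds
  $RPC bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 \
      -q nqn.2021-12.io.spdk:test \
      --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 \
      --fast-io-fail-timeout-sec 1 --wait-for-attach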
10:14:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:12.921 10:14:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:14.293 [2024-07-25 10:14:59.112379] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:26:14.293 [2024-07-25 10:14:59.112419] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:26:14.293 [2024-07-25 10:14:59.112452] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:14.293 [2024-07-25 10:14:59.241865] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:26:14.293 [2024-07-25 10:14:59.301882] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:26:14.293 [2024-07-25 10:14:59.301951] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:26:14.293 [2024-07-25 10:14:59.301998] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:26:14.293 [2024-07-25 10:14:59.302027] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:26:14.293 [2024-07-25 10:14:59.302066] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:26:14.293 10:14:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:14.293 10:14:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:26:14.293 10:14:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:14.293 10:14:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:14.293 10:14:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:14.293 10:14:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:14.293 10:14:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:14.293 10:14:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:14.293 10:14:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:14.293 [2024-07-25 10:14:59.308751] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x1322e50 was disconnected and freed. delete nvme_qpair. 
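wait_for_bdev and get_bdev_list, whose expansions are interleaved through the rest of the trace, reduce to a one-second polling loop over the host's bdev list; a reconstruction from the rpc/jq/sort/xargs fragments above:

  get_bdev_list() {
      # every bdev name known to the host app, sorted onto one line
      scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
  }
  wait_for_bdev() {
      # poll until the list equals $1 (empty string means "no bdevs left")
      while [[ "$(get_bdev_list)" != "$1" ]]; do sleep 1; done
  }

The backslash-heavy right-hand sides in the trace, e.g. [[ nvme0n1 != \n\v\m\e\0\n\1 ]], are xtrace's rendering of a quoted comparison: inside [[ ]] an unquoted word after != would be matched as a glob pattern, so the harness quotes it and bash escapes each character when echoing the command.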
00:26:14.293 10:14:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:14.293 10:14:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:26:14.293 10:14:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:26:14.293 10:14:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:26:14.293 10:14:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:26:14.293 10:14:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:14.293 10:14:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:14.293 10:14:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:14.293 10:14:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:14.293 10:14:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:14.293 10:14:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:14.293 10:14:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:14.293 10:14:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:14.551 10:14:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:14.551 10:14:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:15.482 10:15:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:15.482 10:15:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:15.482 10:15:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:15.482 10:15:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:15.482 10:15:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:15.482 10:15:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:15.482 10:15:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:15.482 10:15:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:15.482 10:15:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:15.482 10:15:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:16.414 10:15:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:16.414 10:15:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:16.414 10:15:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:16.414 10:15:01 
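This is the fault-injection point: the target's address is deleted and its link taken down inside the namespace, so the established NVMe/TCP connection and every reconnect attempt start failing (the errno 110 resets traced below), and the test waits for the bdev list to drain:

  ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down
  wait_for_bdev ''   # nvme0n1 must vanish once ctrlr-loss-timeout (2 s) expires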
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:16.414 10:15:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:16.414 10:15:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:16.414 10:15:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:16.414 10:15:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:16.414 10:15:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:16.414 10:15:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:17.825 10:15:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:17.825 10:15:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:17.825 10:15:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:17.825 10:15:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:17.825 10:15:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:17.825 10:15:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:17.825 10:15:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:17.825 10:15:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:17.825 10:15:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:17.825 10:15:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:18.755 10:15:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:18.755 10:15:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:18.755 10:15:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:18.755 10:15:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:18.755 10:15:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:18.755 10:15:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:18.755 10:15:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:18.755 10:15:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:18.755 10:15:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:18.755 10:15:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:19.683 10:15:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:19.683 10:15:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:19.683 10:15:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:19.683 10:15:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:19.683 10:15:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:19.683 10:15:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:19.683 10:15:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:19.683 10:15:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:19.683 10:15:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:19.683 10:15:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:19.683 [2024-07-25 10:15:04.742551] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:26:19.683 [2024-07-25 10:15:04.742620] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:19.683 [2024-07-25 10:15:04.742644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:19.683 [2024-07-25 10:15:04.742665] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:19.683 [2024-07-25 10:15:04.742681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:19.684 [2024-07-25 10:15:04.742698] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:19.684 [2024-07-25 10:15:04.742717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:19.684 [2024-07-25 10:15:04.742734] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:19.684 [2024-07-25 10:15:04.742749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:19.684 [2024-07-25 10:15:04.742778] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:26:19.684 [2024-07-25 10:15:04.742794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:19.684 [2024-07-25 10:15:04.742810] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e9890 is same with the state(5) to be set 00:26:19.684 [2024-07-25 10:15:04.752567] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e9890 (9): Bad file descriptor 00:26:19.684 [2024-07-25 10:15:04.762616] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:20.615 10:15:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:20.615 10:15:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 
00:26:20.615 10:15:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:20.615 10:15:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:20.615 10:15:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:20.615 10:15:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:20.615 10:15:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:20.615 [2024-07-25 10:15:05.765498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:26:20.615 [2024-07-25 10:15:05.765578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12e9890 with addr=10.0.0.2, port=4420 00:26:20.615 [2024-07-25 10:15:05.765608] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12e9890 is same with the state(5) to be set 00:26:20.615 [2024-07-25 10:15:05.765663] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e9890 (9): Bad file descriptor 00:26:20.615 [2024-07-25 10:15:05.766170] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:26:20.615 [2024-07-25 10:15:05.766220] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:20.615 [2024-07-25 10:15:05.766248] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:20.615 [2024-07-25 10:15:05.766267] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:26:20.615 [2024-07-25 10:15:05.766304] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:20.615 [2024-07-25 10:15:05.766325] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:20.615 10:15:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:20.615 10:15:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:26:20.615 10:15:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:21.985 [2024-07-25 10:15:06.768828] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:26:21.985 [2024-07-25 10:15:06.768864] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:21.985 [2024-07-25 10:15:06.768881] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:21.985 [2024-07-25 10:15:06.768896] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:26:21.985 [2024-07-25 10:15:06.768920] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:26:21.985 [2024-07-25 10:15:06.768967] bdev_nvme.c:6762:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:26:21.985 [2024-07-25 10:15:06.769012] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:21.985 [2024-07-25 10:15:06.769035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.985 [2024-07-25 10:15:06.769056] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:21.985 [2024-07-25 10:15:06.769071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.985 [2024-07-25 10:15:06.769089] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:21.985 [2024-07-25 10:15:06.769104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.985 [2024-07-25 10:15:06.769128] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:21.985 [2024-07-25 10:15:06.769144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.985 [2024-07-25 10:15:06.769160] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:26:21.985 [2024-07-25 10:15:06.769175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:21.985 [2024-07-25 10:15:06.769191] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
00:26:21.985 [2024-07-25 10:15:06.769245] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12e8cf0 (9): Bad file descriptor 00:26:21.985 [2024-07-25 10:15:06.770237] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:26:21.985 [2024-07-25 10:15:06.770262] nvme_ctrlr.c:1213:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:26:21.985 10:15:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:21.985 10:15:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:21.985 10:15:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:21.985 10:15:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:21.985 10:15:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:21.985 10:15:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:21.985 10:15:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:21.985 10:15:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:21.985 10:15:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:26:21.985 10:15:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:21.985 10:15:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:21.985 10:15:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:26:21.985 10:15:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:21.985 10:15:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:21.985 10:15:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:21.985 10:15:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:21.985 10:15:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:21.985 10:15:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:21.985 10:15:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:21.985 10:15:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:21.985 10:15:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:26:21.985 10:15:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:22.917 10:15:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:22.917 10:15:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:22.917 10:15:08 
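Recovery is the mirror image: the address and link are restored, and because the discovery poller is still running on the host, the subsystem is re-attached under a fresh controller name, nvme1, so the loop now waits for nvme1n1 rather than nvme0n1:

  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  wait_for_bdev nvme1n1   # a new name proves a full re-attach, not a resumed session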
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:22.917 10:15:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:22.917 10:15:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:22.917 10:15:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:22.917 10:15:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:22.917 10:15:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:22.917 10:15:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:26:22.917 10:15:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:26:23.847 [2024-07-25 10:15:08.827615] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:26:23.847 [2024-07-25 10:15:08.827653] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:26:23.847 [2024-07-25 10:15:08.827681] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:23.847 [2024-07-25 10:15:08.913949] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:26:23.847 [2024-07-25 10:15:08.976997] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:26:23.847 [2024-07-25 10:15:08.977052] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:26:23.847 [2024-07-25 10:15:08.977092] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:26:23.847 [2024-07-25 10:15:08.977119] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:26:23.847 [2024-07-25 10:15:08.977135] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:26:23.847 [2024-07-25 10:15:08.984018] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x132c7d0 was disconnected and freed. delete nvme_qpair. 
00:26:24.105 10:15:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:26:24.105 10:15:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:24.105 10:15:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:26:24.105 10:15:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:24.105 10:15:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:24.105 10:15:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:26:24.105 10:15:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:26:24.105 10:15:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:24.105 10:15:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:26:24.105 10:15:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:26:24.105 10:15:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 524240 00:26:24.105 10:15:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # '[' -z 524240 ']' 00:26:24.105 10:15:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # kill -0 524240 00:26:24.105 10:15:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # uname 00:26:24.105 10:15:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:24.105 10:15:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 524240 00:26:24.105 10:15:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:26:24.105 10:15:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:26:24.105 10:15:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 524240' 00:26:24.105 killing process with pid 524240 00:26:24.105 10:15:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@969 -- # kill 524240 00:26:24.105 10:15:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@974 -- # wait 524240 00:26:24.363 10:15:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:26:24.363 10:15:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:24.363 10:15:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:26:24.363 10:15:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:24.363 10:15:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:26:24.363 10:15:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:24.363 10:15:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:24.363 rmmod nvme_tcp 00:26:24.363 rmmod nvme_fabrics 00:26:24.363 rmmod nvme_keyring 00:26:24.363 10:15:09 
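Teardown then runs in two layers: the test clears its trap and kills the host app (pid 524240), and nvmftestfini below kills the target (pid 524177), unloads nvme-tcp, which drags nvme_fabrics and nvme_keyring out with it (the rmmod lines), removes the namespace, and flushes the initiator address. A condensed sketch, with the harness helpers named as in the trace:

  trap - SIGINT SIGTERM EXIT    # success path: drop the error handler
  killprocess "$hostpid"        # host-side nvmf_tgt, 524240 here
  # inside nvmftestfini:
  killprocess "$nvmfpid"        # target-side nvmf_tgt, 524177 here
  modprobe -v -r nvme-tcp       # also removes nvme_fabrics / nvme_keyring
  modprobe -v -r nvme-fabrics
  _remove_spdk_ns               # delete the cvl_0_0_ns_spdk namespace
  ip -4 addr flush cvl_0_1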
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:24.363 10:15:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:26:24.363 10:15:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 00:26:24.363 10:15:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 524177 ']' 00:26:24.363 10:15:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 524177 00:26:24.363 10:15:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # '[' -z 524177 ']' 00:26:24.363 10:15:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # kill -0 524177 00:26:24.363 10:15:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # uname 00:26:24.363 10:15:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:24.363 10:15:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 524177 00:26:24.619 10:15:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:26:24.619 10:15:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:26:24.619 10:15:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 524177' 00:26:24.619 killing process with pid 524177 00:26:24.619 10:15:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@969 -- # kill 524177 00:26:24.619 10:15:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@974 -- # wait 524177 00:26:24.876 10:15:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:24.876 10:15:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:24.876 10:15:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:24.876 10:15:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:24.876 10:15:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:24.876 10:15:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:24.876 10:15:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:24.876 10:15:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:26.773 10:15:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:26.773 00:26:26.773 real 0m17.245s 00:26:26.773 user 0m24.252s 00:26:26.773 sys 0m3.339s 00:26:26.773 10:15:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:26.773 10:15:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:26:26.773 ************************************ 00:26:26.773 END TEST nvmf_discovery_remove_ifc 00:26:26.773 ************************************ 00:26:26.773 10:15:11 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:26:26.773 10:15:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:26:26.773 10:15:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:26.773 10:15:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:27.031 ************************************ 00:26:27.031 START TEST nvmf_identify_kernel_target 00:26:27.031 ************************************ 00:26:27.031 10:15:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:26:27.031 * Looking for test storage... 00:26:27.031 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:27.031 10:15:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:27.031 10:15:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:26:27.031 10:15:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:27.031 10:15:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:27.031 10:15:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:27.031 10:15:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:27.031 10:15:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:27.031 10:15:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:27.031 10:15:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:27.031 10:15:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:27.031 10:15:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:27.031 10:15:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:27.031 10:15:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:26:27.031 10:15:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:26:27.031 10:15:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:27.031 10:15:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:27.031 10:15:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:27.031 10:15:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:27.031 10:15:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:27.031 10:15:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:27.031 10:15:12 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:27.031 10:15:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:27.031 10:15:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:27.031 10:15:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:27.031 10:15:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:27.031 10:15:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:26:27.031 10:15:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:27.031 10:15:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:26:27.031 10:15:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:27.031 10:15:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:27.031 10:15:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 
-eq 1 ']' 00:26:27.031 10:15:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:27.031 10:15:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:27.031 10:15:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:27.031 10:15:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:27.031 10:15:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:27.031 10:15:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:26:27.031 10:15:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:27.031 10:15:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:27.031 10:15:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:27.031 10:15:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:27.031 10:15:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:27.031 10:15:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:27.031 10:15:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:27.031 10:15:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:27.031 10:15:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:27.031 10:15:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:27.031 10:15:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@285 -- # xtrace_disable 00:26:27.031 10:15:12 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:26:29.558 10:15:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:29.558 10:15:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # pci_devs=() 00:26:29.558 10:15:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:29.558 10:15:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:29.558 10:15:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:29.558 10:15:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:29.558 10:15:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:29.558 10:15:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # net_devs=() 00:26:29.558 10:15:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:29.558 10:15:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # e810=() 00:26:29.558 10:15:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # local -ga e810 00:26:29.558 10:15:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # x722=() 00:26:29.558 
10:15:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # local -ga x722 00:26:29.558 10:15:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # mlx=() 00:26:29.558 10:15:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # local -ga mlx 00:26:29.558 10:15:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:29.558 10:15:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:29.558 10:15:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:29.558 10:15:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:29.558 10:15:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:29.558 10:15:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:29.558 10:15:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:29.558 10:15:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:29.558 10:15:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:29.558 10:15:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:29.558 10:15:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:29.558 10:15:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:29.558 10:15:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:29.558 10:15:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:29.558 10:15:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:29.558 10:15:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:29.558 10:15:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:29.558 10:15:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:29.558 10:15:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:26:29.558 Found 0000:84:00.0 (0x8086 - 0x159b) 00:26:29.558 10:15:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:29.558 10:15:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:29.558 10:15:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:29.558 10:15:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:29.558 10:15:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:29.558 10:15:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci 
in "${pci_devs[@]}" 00:26:29.558 10:15:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:26:29.558 Found 0000:84:00.1 (0x8086 - 0x159b) 00:26:29.558 10:15:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:29.558 10:15:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:29.558 10:15:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:29.558 10:15:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:29.558 10:15:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:29.558 10:15:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:29.558 10:15:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:29.558 10:15:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:29.558 10:15:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:29.558 10:15:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:29.558 10:15:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:29.558 10:15:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:29.558 10:15:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:29.558 10:15:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:29.559 10:15:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:29.559 10:15:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:26:29.559 Found net devices under 0000:84:00.0: cvl_0_0 00:26:29.559 10:15:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:29.559 10:15:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:29.559 10:15:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:29.559 10:15:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:29.559 10:15:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:29.559 10:15:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:29.559 10:15:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:29.559 10:15:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:29.559 10:15:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:26:29.559 Found net devices under 0000:84:00.1: cvl_0_1 00:26:29.559 10:15:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:26:29.559 10:15:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:29.559 10:15:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # is_hw=yes 00:26:29.559 10:15:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:29.559 10:15:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:29.559 10:15:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:29.559 10:15:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:29.559 10:15:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:29.559 10:15:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:29.559 10:15:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:29.559 10:15:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:29.559 10:15:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:29.559 10:15:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:29.559 10:15:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:29.559 10:15:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:29.559 10:15:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:29.559 10:15:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:29.559 10:15:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:29.559 10:15:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:29.559 10:15:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:29.559 10:15:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:29.559 10:15:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:29.559 10:15:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:29.559 10:15:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:29.559 10:15:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:29.559 10:15:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:29.559 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:26:29.559 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.191 ms 00:26:29.559 00:26:29.559 --- 10.0.0.2 ping statistics --- 00:26:29.559 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:29.559 rtt min/avg/max/mdev = 0.191/0.191/0.191/0.000 ms 00:26:29.559 10:15:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:29.559 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:29.559 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.091 ms 00:26:29.559 00:26:29.559 --- 10.0.0.1 ping statistics --- 00:26:29.559 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:29.559 rtt min/avg/max/mdev = 0.091/0.091/0.091/0.000 ms 00:26:29.559 10:15:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:29.559 10:15:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # return 0 00:26:29.559 10:15:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:29.559 10:15:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:29.559 10:15:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:29.559 10:15:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:29.559 10:15:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:29.559 10:15:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:29.559 10:15:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:29.559 10:15:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:26:29.559 10:15:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:26:29.559 10:15:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:26:29.559 10:15:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:29.559 10:15:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:29.559 10:15:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:29.559 10:15:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:29.559 10:15:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:29.559 10:15:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:29.559 10:15:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:29.559 10:15:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:29.559 10:15:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:29.559 10:15:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:26:29.559 10:15:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target 
nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:26:29.559 10:15:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:26:29.559 10:15:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:26:29.559 10:15:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:29.559 10:15:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:26:29.559 10:15:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:26:29.559 10:15:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:26:29.559 10:15:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:26:29.559 10:15:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:26:29.559 10:15:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:26:29.559 10:15:14 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:26:30.933 Waiting for block devices as requested 00:26:30.933 0000:82:00.0 (8086 0a54): vfio-pci -> nvme 00:26:30.933 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:26:31.191 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:26:31.191 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:26:31.191 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:26:31.448 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:26:31.448 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:26:31.448 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:26:31.448 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:26:31.705 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:26:31.705 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:26:31.705 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:26:31.705 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:26:31.966 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:26:31.966 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:26:31.966 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:26:32.223 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:26:32.224 10:15:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:26:32.224 10:15:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:26:32.224 10:15:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:26:32.224 10:15:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:26:32.224 10:15:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:26:32.224 10:15:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:26:32.224 10:15:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:26:32.224 10:15:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 
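(configure_kernel_target, entered just above, first verifies the local NVMe disk is unused — the GPT probe below — and then assembles a kernel NVMe-oF/TCP target through configfs. The xtrace shows the echoed values but not their redirection targets, so the attribute file names in this sketch are the stock nvmet configfs ones; a reconstruction, not the verbatim nvmf/common.sh:

    # Sketch of the configfs sequence; the values (NQN, /dev/nvme0n1,
    # 10.0.0.1:4420) are as logged, the attribute paths are assumed from
    # the standard kernel nvmet layout.
    nvmet=/sys/kernel/config/nvmet
    subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
    modprobe nvmet                  # nvmet_tcp follows once the port is enabled
    mkdir "$subsys" "$subsys/namespaces/1" "$nvmet/ports/1"
    echo "SPDK-nqn.2016-06.io.spdk:testnqn" > "$subsys/attr_model"
    echo 1            > "$subsys/attr_allow_any_host"
    echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"
    echo 1            > "$subsys/namespaces/1/enable"
    echo 10.0.0.1     > "$nvmet/ports/1/addr_traddr"
    echo tcp          > "$nvmet/ports/1/addr_trtype"
    echo 4420         > "$nvmet/ports/1/addr_trsvcid"
    echo ipv4         > "$nvmet/ports/1/addr_adrfam"
    ln -s "$subsys" "$nvmet/ports/1/subsystems/"   # exposes the subsystem

The clean_kernel_target teardown traced near the end of this test undoes it in reverse: unlink the port symlink, rmdir the namespace, port, and subsystem directories, then modprobe -r nvmet_tcp nvmet.)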
00:26:32.224 10:15:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:26:32.224 No valid GPT data, bailing 00:26:32.224 10:15:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:26:32.224 10:15:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:26:32.224 10:15:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:26:32.224 10:15:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:26:32.224 10:15:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:26:32.224 10:15:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:32.224 10:15:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:26:32.224 10:15:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:26:32.224 10:15:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:26:32.224 10:15:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:26:32.224 10:15:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:26:32.224 10:15:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 00:26:32.224 10:15:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:26:32.224 10:15:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:26:32.224 10:15:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:26:32.224 10:15:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:26:32.224 10:15:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:26:32.483 10:15:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -a 10.0.0.1 -t tcp -s 4420 00:26:32.483 00:26:32.483 Discovery Log Number of Records 2, Generation counter 2 00:26:32.483 =====Discovery Log Entry 0====== 00:26:32.483 trtype: tcp 00:26:32.483 adrfam: ipv4 00:26:32.483 subtype: current discovery subsystem 00:26:32.483 treq: not specified, sq flow control disable supported 00:26:32.483 portid: 1 00:26:32.483 trsvcid: 4420 00:26:32.483 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:26:32.483 traddr: 10.0.0.1 00:26:32.483 eflags: none 00:26:32.483 sectype: none 00:26:32.483 =====Discovery Log Entry 1====== 00:26:32.483 trtype: tcp 00:26:32.483 adrfam: ipv4 00:26:32.483 subtype: nvme subsystem 00:26:32.483 treq: not specified, sq flow control disable supported 00:26:32.483 portid: 1 00:26:32.483 trsvcid: 4420 00:26:32.483 subnqn: nqn.2016-06.io.spdk:testnqn 00:26:32.483 traddr: 10.0.0.1 00:26:32.483 eflags: none 00:26:32.483 sectype: none 00:26:32.483 10:15:17 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:26:32.483 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:26:32.483 EAL: No free 2048 kB hugepages reported on node 1 00:26:32.483 ===================================================== 00:26:32.483 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:26:32.483 ===================================================== 00:26:32.483 Controller Capabilities/Features 00:26:32.483 ================================ 00:26:32.483 Vendor ID: 0000 00:26:32.483 Subsystem Vendor ID: 0000 00:26:32.483 Serial Number: 01a5e0b80f7d3a28c09b 00:26:32.483 Model Number: Linux 00:26:32.483 Firmware Version: 6.7.0-68 00:26:32.483 Recommended Arb Burst: 0 00:26:32.483 IEEE OUI Identifier: 00 00 00 00:26:32.483 Multi-path I/O 00:26:32.483 May have multiple subsystem ports: No 00:26:32.483 May have multiple controllers: No 00:26:32.483 Associated with SR-IOV VF: No 00:26:32.483 Max Data Transfer Size: Unlimited 00:26:32.483 Max Number of Namespaces: 0 00:26:32.483 Max Number of I/O Queues: 1024 00:26:32.483 NVMe Specification Version (VS): 1.3 00:26:32.483 NVMe Specification Version (Identify): 1.3 00:26:32.483 Maximum Queue Entries: 1024 00:26:32.483 Contiguous Queues Required: No 00:26:32.483 Arbitration Mechanisms Supported 00:26:32.483 Weighted Round Robin: Not Supported 00:26:32.483 Vendor Specific: Not Supported 00:26:32.483 Reset Timeout: 7500 ms 00:26:32.483 Doorbell Stride: 4 bytes 00:26:32.483 NVM Subsystem Reset: Not Supported 00:26:32.483 Command Sets Supported 00:26:32.483 NVM Command Set: Supported 00:26:32.483 Boot Partition: Not Supported 00:26:32.483 Memory Page Size Minimum: 4096 bytes 00:26:32.483 Memory Page Size Maximum: 4096 bytes 00:26:32.483 Persistent Memory Region: Not Supported 00:26:32.483 Optional Asynchronous Events Supported 00:26:32.483 Namespace Attribute Notices: Not Supported 00:26:32.483 Firmware Activation Notices: Not Supported 00:26:32.483 ANA Change Notices: Not Supported 00:26:32.483 PLE Aggregate Log Change Notices: Not Supported 00:26:32.483 LBA Status Info Alert Notices: Not Supported 00:26:32.483 EGE Aggregate Log Change Notices: Not Supported 00:26:32.483 Normal NVM Subsystem Shutdown event: Not Supported 00:26:32.483 Zone Descriptor Change Notices: Not Supported 00:26:32.483 Discovery Log Change Notices: Supported 00:26:32.483 Controller Attributes 00:26:32.483 128-bit Host Identifier: Not Supported 00:26:32.483 Non-Operational Permissive Mode: Not Supported 00:26:32.483 NVM Sets: Not Supported 00:26:32.483 Read Recovery Levels: Not Supported 00:26:32.483 Endurance Groups: Not Supported 00:26:32.483 Predictable Latency Mode: Not Supported 00:26:32.483 Traffic Based Keep ALive: Not Supported 00:26:32.483 Namespace Granularity: Not Supported 00:26:32.483 SQ Associations: Not Supported 00:26:32.483 UUID List: Not Supported 00:26:32.483 Multi-Domain Subsystem: Not Supported 00:26:32.483 Fixed Capacity Management: Not Supported 00:26:32.483 Variable Capacity Management: Not Supported 00:26:32.483 Delete Endurance Group: Not Supported 00:26:32.483 Delete NVM Set: Not Supported 00:26:32.483 Extended LBA Formats Supported: Not Supported 00:26:32.483 Flexible Data Placement Supported: Not Supported 00:26:32.483 00:26:32.483 Controller Memory Buffer Support 00:26:32.483 ================================ 00:26:32.483 Supported: No 
00:26:32.483 00:26:32.483 Persistent Memory Region Support 00:26:32.483 ================================ 00:26:32.483 Supported: No 00:26:32.483 00:26:32.483 Admin Command Set Attributes 00:26:32.483 ============================ 00:26:32.483 Security Send/Receive: Not Supported 00:26:32.483 Format NVM: Not Supported 00:26:32.483 Firmware Activate/Download: Not Supported 00:26:32.483 Namespace Management: Not Supported 00:26:32.483 Device Self-Test: Not Supported 00:26:32.483 Directives: Not Supported 00:26:32.483 NVMe-MI: Not Supported 00:26:32.483 Virtualization Management: Not Supported 00:26:32.483 Doorbell Buffer Config: Not Supported 00:26:32.483 Get LBA Status Capability: Not Supported 00:26:32.483 Command & Feature Lockdown Capability: Not Supported 00:26:32.483 Abort Command Limit: 1 00:26:32.483 Async Event Request Limit: 1 00:26:32.483 Number of Firmware Slots: N/A 00:26:32.483 Firmware Slot 1 Read-Only: N/A 00:26:32.483 Firmware Activation Without Reset: N/A 00:26:32.483 Multiple Update Detection Support: N/A 00:26:32.483 Firmware Update Granularity: No Information Provided 00:26:32.483 Per-Namespace SMART Log: No 00:26:32.483 Asymmetric Namespace Access Log Page: Not Supported 00:26:32.483 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:26:32.483 Command Effects Log Page: Not Supported 00:26:32.483 Get Log Page Extended Data: Supported 00:26:32.483 Telemetry Log Pages: Not Supported 00:26:32.483 Persistent Event Log Pages: Not Supported 00:26:32.483 Supported Log Pages Log Page: May Support 00:26:32.483 Commands Supported & Effects Log Page: Not Supported 00:26:32.483 Feature Identifiers & Effects Log Page:May Support 00:26:32.483 NVMe-MI Commands & Effects Log Page: May Support 00:26:32.483 Data Area 4 for Telemetry Log: Not Supported 00:26:32.483 Error Log Page Entries Supported: 1 00:26:32.483 Keep Alive: Not Supported 00:26:32.483 00:26:32.483 NVM Command Set Attributes 00:26:32.483 ========================== 00:26:32.483 Submission Queue Entry Size 00:26:32.483 Max: 1 00:26:32.483 Min: 1 00:26:32.483 Completion Queue Entry Size 00:26:32.483 Max: 1 00:26:32.483 Min: 1 00:26:32.483 Number of Namespaces: 0 00:26:32.483 Compare Command: Not Supported 00:26:32.483 Write Uncorrectable Command: Not Supported 00:26:32.483 Dataset Management Command: Not Supported 00:26:32.483 Write Zeroes Command: Not Supported 00:26:32.483 Set Features Save Field: Not Supported 00:26:32.484 Reservations: Not Supported 00:26:32.484 Timestamp: Not Supported 00:26:32.484 Copy: Not Supported 00:26:32.484 Volatile Write Cache: Not Present 00:26:32.484 Atomic Write Unit (Normal): 1 00:26:32.484 Atomic Write Unit (PFail): 1 00:26:32.484 Atomic Compare & Write Unit: 1 00:26:32.484 Fused Compare & Write: Not Supported 00:26:32.484 Scatter-Gather List 00:26:32.484 SGL Command Set: Supported 00:26:32.484 SGL Keyed: Not Supported 00:26:32.484 SGL Bit Bucket Descriptor: Not Supported 00:26:32.484 SGL Metadata Pointer: Not Supported 00:26:32.484 Oversized SGL: Not Supported 00:26:32.484 SGL Metadata Address: Not Supported 00:26:32.484 SGL Offset: Supported 00:26:32.484 Transport SGL Data Block: Not Supported 00:26:32.484 Replay Protected Memory Block: Not Supported 00:26:32.484 00:26:32.484 Firmware Slot Information 00:26:32.484 ========================= 00:26:32.484 Active slot: 0 00:26:32.484 00:26:32.484 00:26:32.484 Error Log 00:26:32.484 ========= 00:26:32.484 00:26:32.484 Active Namespaces 00:26:32.484 ================= 00:26:32.484 Discovery Log Page 00:26:32.484 ================== 00:26:32.484 
Generation Counter: 2 00:26:32.484 Number of Records: 2 00:26:32.484 Record Format: 0 00:26:32.484 00:26:32.484 Discovery Log Entry 0 00:26:32.484 ---------------------- 00:26:32.484 Transport Type: 3 (TCP) 00:26:32.484 Address Family: 1 (IPv4) 00:26:32.484 Subsystem Type: 3 (Current Discovery Subsystem) 00:26:32.484 Entry Flags: 00:26:32.484 Duplicate Returned Information: 0 00:26:32.484 Explicit Persistent Connection Support for Discovery: 0 00:26:32.484 Transport Requirements: 00:26:32.484 Secure Channel: Not Specified 00:26:32.484 Port ID: 1 (0x0001) 00:26:32.484 Controller ID: 65535 (0xffff) 00:26:32.484 Admin Max SQ Size: 32 00:26:32.484 Transport Service Identifier: 4420 00:26:32.484 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:26:32.484 Transport Address: 10.0.0.1 00:26:32.484 Discovery Log Entry 1 00:26:32.484 ---------------------- 00:26:32.484 Transport Type: 3 (TCP) 00:26:32.484 Address Family: 1 (IPv4) 00:26:32.484 Subsystem Type: 2 (NVM Subsystem) 00:26:32.484 Entry Flags: 00:26:32.484 Duplicate Returned Information: 0 00:26:32.484 Explicit Persistent Connection Support for Discovery: 0 00:26:32.484 Transport Requirements: 00:26:32.484 Secure Channel: Not Specified 00:26:32.484 Port ID: 1 (0x0001) 00:26:32.484 Controller ID: 65535 (0xffff) 00:26:32.484 Admin Max SQ Size: 32 00:26:32.484 Transport Service Identifier: 4420 00:26:32.484 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:26:32.484 Transport Address: 10.0.0.1 00:26:32.484 10:15:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:26:32.742 EAL: No free 2048 kB hugepages reported on node 1 00:26:32.742 get_feature(0x01) failed 00:26:32.742 get_feature(0x02) failed 00:26:32.742 get_feature(0x04) failed 00:26:32.742 ===================================================== 00:26:32.742 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:26:32.742 ===================================================== 00:26:32.742 Controller Capabilities/Features 00:26:32.742 ================================ 00:26:32.742 Vendor ID: 0000 00:26:32.743 Subsystem Vendor ID: 0000 00:26:32.743 Serial Number: ee1422f5869e0e17dcf4 00:26:32.743 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:26:32.743 Firmware Version: 6.7.0-68 00:26:32.743 Recommended Arb Burst: 6 00:26:32.743 IEEE OUI Identifier: 00 00 00 00:26:32.743 Multi-path I/O 00:26:32.743 May have multiple subsystem ports: Yes 00:26:32.743 May have multiple controllers: Yes 00:26:32.743 Associated with SR-IOV VF: No 00:26:32.743 Max Data Transfer Size: Unlimited 00:26:32.743 Max Number of Namespaces: 1024 00:26:32.743 Max Number of I/O Queues: 128 00:26:32.743 NVMe Specification Version (VS): 1.3 00:26:32.743 NVMe Specification Version (Identify): 1.3 00:26:32.743 Maximum Queue Entries: 1024 00:26:32.743 Contiguous Queues Required: No 00:26:32.743 Arbitration Mechanisms Supported 00:26:32.743 Weighted Round Robin: Not Supported 00:26:32.743 Vendor Specific: Not Supported 00:26:32.743 Reset Timeout: 7500 ms 00:26:32.743 Doorbell Stride: 4 bytes 00:26:32.743 NVM Subsystem Reset: Not Supported 00:26:32.743 Command Sets Supported 00:26:32.743 NVM Command Set: Supported 00:26:32.743 Boot Partition: Not Supported 00:26:32.743 Memory Page Size Minimum: 4096 bytes 00:26:32.743 Memory Page Size Maximum: 4096 bytes 00:26:32.743 
Persistent Memory Region: Not Supported 00:26:32.743 Optional Asynchronous Events Supported 00:26:32.743 Namespace Attribute Notices: Supported 00:26:32.743 Firmware Activation Notices: Not Supported 00:26:32.743 ANA Change Notices: Supported 00:26:32.743 PLE Aggregate Log Change Notices: Not Supported 00:26:32.743 LBA Status Info Alert Notices: Not Supported 00:26:32.743 EGE Aggregate Log Change Notices: Not Supported 00:26:32.743 Normal NVM Subsystem Shutdown event: Not Supported 00:26:32.743 Zone Descriptor Change Notices: Not Supported 00:26:32.743 Discovery Log Change Notices: Not Supported 00:26:32.743 Controller Attributes 00:26:32.743 128-bit Host Identifier: Supported 00:26:32.743 Non-Operational Permissive Mode: Not Supported 00:26:32.743 NVM Sets: Not Supported 00:26:32.743 Read Recovery Levels: Not Supported 00:26:32.743 Endurance Groups: Not Supported 00:26:32.743 Predictable Latency Mode: Not Supported 00:26:32.743 Traffic Based Keep ALive: Supported 00:26:32.743 Namespace Granularity: Not Supported 00:26:32.743 SQ Associations: Not Supported 00:26:32.743 UUID List: Not Supported 00:26:32.743 Multi-Domain Subsystem: Not Supported 00:26:32.743 Fixed Capacity Management: Not Supported 00:26:32.743 Variable Capacity Management: Not Supported 00:26:32.743 Delete Endurance Group: Not Supported 00:26:32.743 Delete NVM Set: Not Supported 00:26:32.743 Extended LBA Formats Supported: Not Supported 00:26:32.743 Flexible Data Placement Supported: Not Supported 00:26:32.743 00:26:32.743 Controller Memory Buffer Support 00:26:32.743 ================================ 00:26:32.743 Supported: No 00:26:32.743 00:26:32.743 Persistent Memory Region Support 00:26:32.743 ================================ 00:26:32.743 Supported: No 00:26:32.743 00:26:32.743 Admin Command Set Attributes 00:26:32.743 ============================ 00:26:32.743 Security Send/Receive: Not Supported 00:26:32.743 Format NVM: Not Supported 00:26:32.743 Firmware Activate/Download: Not Supported 00:26:32.743 Namespace Management: Not Supported 00:26:32.743 Device Self-Test: Not Supported 00:26:32.743 Directives: Not Supported 00:26:32.743 NVMe-MI: Not Supported 00:26:32.743 Virtualization Management: Not Supported 00:26:32.743 Doorbell Buffer Config: Not Supported 00:26:32.743 Get LBA Status Capability: Not Supported 00:26:32.743 Command & Feature Lockdown Capability: Not Supported 00:26:32.743 Abort Command Limit: 4 00:26:32.743 Async Event Request Limit: 4 00:26:32.743 Number of Firmware Slots: N/A 00:26:32.743 Firmware Slot 1 Read-Only: N/A 00:26:32.743 Firmware Activation Without Reset: N/A 00:26:32.743 Multiple Update Detection Support: N/A 00:26:32.743 Firmware Update Granularity: No Information Provided 00:26:32.743 Per-Namespace SMART Log: Yes 00:26:32.743 Asymmetric Namespace Access Log Page: Supported 00:26:32.743 ANA Transition Time : 10 sec 00:26:32.743 00:26:32.743 Asymmetric Namespace Access Capabilities 00:26:32.743 ANA Optimized State : Supported 00:26:32.743 ANA Non-Optimized State : Supported 00:26:32.743 ANA Inaccessible State : Supported 00:26:32.743 ANA Persistent Loss State : Supported 00:26:32.743 ANA Change State : Supported 00:26:32.743 ANAGRPID is not changed : No 00:26:32.743 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:26:32.743 00:26:32.743 ANA Group Identifier Maximum : 128 00:26:32.743 Number of ANA Group Identifiers : 128 00:26:32.743 Max Number of Allowed Namespaces : 1024 00:26:32.743 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:26:32.743 Command Effects Log Page: Supported 
00:26:32.743 Get Log Page Extended Data: Supported 00:26:32.743 Telemetry Log Pages: Not Supported 00:26:32.743 Persistent Event Log Pages: Not Supported 00:26:32.743 Supported Log Pages Log Page: May Support 00:26:32.743 Commands Supported & Effects Log Page: Not Supported 00:26:32.743 Feature Identifiers & Effects Log Page:May Support 00:26:32.743 NVMe-MI Commands & Effects Log Page: May Support 00:26:32.743 Data Area 4 for Telemetry Log: Not Supported 00:26:32.743 Error Log Page Entries Supported: 128 00:26:32.743 Keep Alive: Supported 00:26:32.743 Keep Alive Granularity: 1000 ms 00:26:32.743 00:26:32.743 NVM Command Set Attributes 00:26:32.743 ========================== 00:26:32.743 Submission Queue Entry Size 00:26:32.743 Max: 64 00:26:32.743 Min: 64 00:26:32.743 Completion Queue Entry Size 00:26:32.743 Max: 16 00:26:32.743 Min: 16 00:26:32.743 Number of Namespaces: 1024 00:26:32.743 Compare Command: Not Supported 00:26:32.743 Write Uncorrectable Command: Not Supported 00:26:32.743 Dataset Management Command: Supported 00:26:32.743 Write Zeroes Command: Supported 00:26:32.743 Set Features Save Field: Not Supported 00:26:32.743 Reservations: Not Supported 00:26:32.743 Timestamp: Not Supported 00:26:32.743 Copy: Not Supported 00:26:32.743 Volatile Write Cache: Present 00:26:32.743 Atomic Write Unit (Normal): 1 00:26:32.743 Atomic Write Unit (PFail): 1 00:26:32.743 Atomic Compare & Write Unit: 1 00:26:32.743 Fused Compare & Write: Not Supported 00:26:32.743 Scatter-Gather List 00:26:32.743 SGL Command Set: Supported 00:26:32.743 SGL Keyed: Not Supported 00:26:32.743 SGL Bit Bucket Descriptor: Not Supported 00:26:32.743 SGL Metadata Pointer: Not Supported 00:26:32.743 Oversized SGL: Not Supported 00:26:32.743 SGL Metadata Address: Not Supported 00:26:32.743 SGL Offset: Supported 00:26:32.743 Transport SGL Data Block: Not Supported 00:26:32.743 Replay Protected Memory Block: Not Supported 00:26:32.743 00:26:32.743 Firmware Slot Information 00:26:32.743 ========================= 00:26:32.743 Active slot: 0 00:26:32.743 00:26:32.743 Asymmetric Namespace Access 00:26:32.743 =========================== 00:26:32.743 Change Count : 0 00:26:32.743 Number of ANA Group Descriptors : 1 00:26:32.743 ANA Group Descriptor : 0 00:26:32.743 ANA Group ID : 1 00:26:32.743 Number of NSID Values : 1 00:26:32.743 Change Count : 0 00:26:32.743 ANA State : 1 00:26:32.743 Namespace Identifier : 1 00:26:32.743 00:26:32.743 Commands Supported and Effects 00:26:32.743 ============================== 00:26:32.743 Admin Commands 00:26:32.743 -------------- 00:26:32.743 Get Log Page (02h): Supported 00:26:32.743 Identify (06h): Supported 00:26:32.743 Abort (08h): Supported 00:26:32.743 Set Features (09h): Supported 00:26:32.743 Get Features (0Ah): Supported 00:26:32.743 Asynchronous Event Request (0Ch): Supported 00:26:32.743 Keep Alive (18h): Supported 00:26:32.743 I/O Commands 00:26:32.743 ------------ 00:26:32.743 Flush (00h): Supported 00:26:32.743 Write (01h): Supported LBA-Change 00:26:32.743 Read (02h): Supported 00:26:32.743 Write Zeroes (08h): Supported LBA-Change 00:26:32.743 Dataset Management (09h): Supported 00:26:32.743 00:26:32.743 Error Log 00:26:32.743 ========= 00:26:32.743 Entry: 0 00:26:32.743 Error Count: 0x3 00:26:32.743 Submission Queue Id: 0x0 00:26:32.743 Command Id: 0x5 00:26:32.743 Phase Bit: 0 00:26:32.743 Status Code: 0x2 00:26:32.743 Status Code Type: 0x0 00:26:32.743 Do Not Retry: 1 00:26:32.743 Error Location: 0x28 00:26:32.743 LBA: 0x0 00:26:32.743 Namespace: 0x0 00:26:32.743 Vendor Log 
Page: 0x0 00:26:32.743 ----------- 00:26:32.744 Entry: 1 00:26:32.744 Error Count: 0x2 00:26:32.744 Submission Queue Id: 0x0 00:26:32.744 Command Id: 0x5 00:26:32.744 Phase Bit: 0 00:26:32.744 Status Code: 0x2 00:26:32.744 Status Code Type: 0x0 00:26:32.744 Do Not Retry: 1 00:26:32.744 Error Location: 0x28 00:26:32.744 LBA: 0x0 00:26:32.744 Namespace: 0x0 00:26:32.744 Vendor Log Page: 0x0 00:26:32.744 ----------- 00:26:32.744 Entry: 2 00:26:32.744 Error Count: 0x1 00:26:32.744 Submission Queue Id: 0x0 00:26:32.744 Command Id: 0x4 00:26:32.744 Phase Bit: 0 00:26:32.744 Status Code: 0x2 00:26:32.744 Status Code Type: 0x0 00:26:32.744 Do Not Retry: 1 00:26:32.744 Error Location: 0x28 00:26:32.744 LBA: 0x0 00:26:32.744 Namespace: 0x0 00:26:32.744 Vendor Log Page: 0x0 00:26:32.744 00:26:32.744 Number of Queues 00:26:32.744 ================ 00:26:32.744 Number of I/O Submission Queues: 128 00:26:32.744 Number of I/O Completion Queues: 128 00:26:32.744 00:26:32.744 ZNS Specific Controller Data 00:26:32.744 ============================ 00:26:32.744 Zone Append Size Limit: 0 00:26:32.744 00:26:32.744 00:26:32.744 Active Namespaces 00:26:32.744 ================= 00:26:32.744 get_feature(0x05) failed 00:26:32.744 Namespace ID:1 00:26:32.744 Command Set Identifier: NVM (00h) 00:26:32.744 Deallocate: Supported 00:26:32.744 Deallocated/Unwritten Error: Not Supported 00:26:32.744 Deallocated Read Value: Unknown 00:26:32.744 Deallocate in Write Zeroes: Not Supported 00:26:32.744 Deallocated Guard Field: 0xFFFF 00:26:32.744 Flush: Supported 00:26:32.744 Reservation: Not Supported 00:26:32.744 Namespace Sharing Capabilities: Multiple Controllers 00:26:32.744 Size (in LBAs): 1953525168 (931GiB) 00:26:32.744 Capacity (in LBAs): 1953525168 (931GiB) 00:26:32.744 Utilization (in LBAs): 1953525168 (931GiB) 00:26:32.744 UUID: 12f54231-3cf5-4163-be62-a316b655fc42 00:26:32.744 Thin Provisioning: Not Supported 00:26:32.744 Per-NS Atomic Units: Yes 00:26:32.744 Atomic Boundary Size (Normal): 0 00:26:32.744 Atomic Boundary Size (PFail): 0 00:26:32.744 Atomic Boundary Offset: 0 00:26:32.744 NGUID/EUI64 Never Reused: No 00:26:32.744 ANA group ID: 1 00:26:32.744 Namespace Write Protected: No 00:26:32.744 Number of LBA Formats: 1 00:26:32.744 Current LBA Format: LBA Format #00 00:26:32.744 LBA Format #00: Data Size: 512 Metadata Size: 0 00:26:32.744 00:26:32.744 10:15:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:26:32.744 10:15:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:32.744 10:15:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:26:32.744 10:15:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:32.744 10:15:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:26:32.744 10:15:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:32.744 10:15:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:32.744 rmmod nvme_tcp 00:26:32.744 rmmod nvme_fabrics 00:26:32.744 10:15:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:32.744 10:15:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:26:32.744 10:15:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:26:32.744 10:15:17 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:26:32.744 10:15:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:32.744 10:15:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:32.744 10:15:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:32.744 10:15:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:32.744 10:15:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:32.744 10:15:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:32.744 10:15:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:32.744 10:15:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:34.693 10:15:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:34.693 10:15:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:26:34.693 10:15:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:26:34.694 10:15:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:26:34.694 10:15:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:34.694 10:15:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:26:34.694 10:15:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:26:34.694 10:15:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:26:34.951 10:15:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:26:34.951 10:15:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:26:34.951 10:15:19 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:26:36.324 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:26:36.324 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:26:36.324 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:26:36.324 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:26:36.324 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:26:36.324 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:26:36.324 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:26:36.324 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:26:36.324 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:26:36.324 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:26:36.324 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:26:36.324 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:26:36.324 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:26:36.324 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:26:36.324 0000:80:04.1 (8086 0e21): ioatdma -> 
vfio-pci 00:26:36.324 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:26:37.258 0000:82:00.0 (8086 0a54): nvme -> vfio-pci 00:26:37.516 00:26:37.516 real 0m10.605s 00:26:37.516 user 0m2.249s 00:26:37.516 sys 0m4.263s 00:26:37.516 10:15:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:37.517 10:15:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:26:37.517 ************************************ 00:26:37.517 END TEST nvmf_identify_kernel_target 00:26:37.517 ************************************ 00:26:37.517 10:15:22 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:26:37.517 10:15:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:26:37.517 10:15:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:37.517 10:15:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:37.517 ************************************ 00:26:37.517 START TEST nvmf_auth_host 00:26:37.517 ************************************ 00:26:37.517 10:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:26:37.517 * Looking for test storage... 00:26:37.517 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:37.517 10:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:37.517 10:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:26:37.517 10:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:37.517 10:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:37.517 10:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:37.517 10:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:37.517 10:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:37.517 10:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:37.517 10:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:37.517 10:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:37.517 10:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:37.517 10:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:37.517 10:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:26:37.517 10:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:26:37.517 10:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:37.517 10:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:37.517 10:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:37.517 10:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:37.517 10:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:37.517 10:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:37.517 10:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:37.517 10:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:37.517 10:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:37.517 10:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:37.517 10:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:37.517 10:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:26:37.517 10:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:37.517 10:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:26:37.517 10:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 
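The prologue above fixes the host identity for the whole test: `nvme gen-hostnqn` emits an nqn.2014-08.org.nvmexpress:uuid:<UUID> string (typically derived from the machine's DMI UUID), and the UUID suffix is reused as NVME_HOSTID for the later discover/connect calls. A minimal sketch of that step, with uuidgen standing in for nvme-cli purely for illustration:

# Sketch: derive the host identity the way common.sh does; uuidgen stands in
# for `nvme gen-hostnqn` (illustrative only; the real run uses nvme-cli).
NVME_HOSTNQN="nqn.2014-08.org.nvmexpress:uuid:$(uuidgen)"
NVME_HOSTID="${NVME_HOSTNQN##*:uuid:}"   # bare UUID, doubles as the host ID
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
printf '%s\n' "$NVME_HOSTNQN" "$NVME_HOSTID"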
00:26:37.517 10:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:37.517 10:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:37.517 10:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:37.517 10:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:37.517 10:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:37.517 10:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:37.517 10:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:37.517 10:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:26:37.517 10:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:26:37.517 10:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:26:37.517 10:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:26:37.517 10:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:26:37.517 10:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:26:37.517 10:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:26:37.517 10:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:26:37.517 10:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:26:37.517 10:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:37.517 10:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:37.517 10:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:37.517 10:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:37.517 10:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:37.517 10:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:37.517 10:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:37.517 10:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:37.776 10:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:37.776 10:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:37.776 10:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@285 -- # xtrace_disable 00:26:37.776 10:15:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:40.306 10:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:40.306 10:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # pci_devs=() 00:26:40.306 10:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:40.306 10:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:40.306 10:15:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:40.306 10:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:40.306 10:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:40.306 10:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@295 -- # net_devs=() 00:26:40.306 10:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:40.306 10:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@296 -- # e810=() 00:26:40.306 10:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@296 -- # local -ga e810 00:26:40.306 10:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # x722=() 00:26:40.306 10:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # local -ga x722 00:26:40.306 10:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # mlx=() 00:26:40.306 10:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # local -ga mlx 00:26:40.306 10:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:40.306 10:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:40.306 10:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:40.306 10:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:40.306 10:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:40.306 10:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:40.306 10:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:40.306 10:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:40.306 10:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:40.306 10:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:40.306 10:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:40.306 10:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:40.306 10:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:40.306 10:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:40.306 10:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:40.306 10:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:40.306 10:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:40.306 10:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:40.306 10:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:26:40.306 Found 0000:84:00.0 (0x8086 - 0x159b) 00:26:40.306 10:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:40.306 10:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 
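The device scan above is pure PCI-ID bucketing: Intel E810 parts (0x1592, 0x159b) go into e810, X722 (0x37d2) into x722, a table of Mellanox ConnectX IDs into mlx, and for a TCP run the e810 list becomes pci_devs; each surviving function is then mapped to its netdev through /sys/bus/pci/devices/<addr>/net (cvl_0_0 and cvl_0_1 on this rig). A rough equivalent of the bucketing using lspci (ID tables abridged; the lspci approach is an illustration, not the harness's code):

# Sketch: classify NVMf-capable NICs by vendor:device ID, mirroring the
# e810/x722/mlx bucketing above (abridged tables; assumes lspci is installed).
declare -a e810=() x722=() mlx=()
while read -r addr id; do
  case "$id" in
    8086:1592|8086:159b) e810+=("$addr") ;;   # Intel E810 (ice)
    8086:37d2)           x722+=("$addr") ;;   # Intel X722 (i40e)
    15b3:*)              mlx+=("$addr")  ;;   # Mellanox ConnectX family
  esac
done < <(lspci -Dnmm | awk '{gsub(/"/, ""); print $1, $3 ":" $4}')
for pci in "${e810[@]}"; do
  echo "Found $pci:" /sys/bus/pci/devices/"$pci"/net/*
done

Both 0x159b functions land in the e810 bucket here, which is why every mlx5/rdma branch in the trace falls through.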
00:26:40.306 10:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:40.306 10:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:40.306 10:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:40.306 10:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:40.306 10:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:26:40.306 Found 0000:84:00.1 (0x8086 - 0x159b) 00:26:40.306 10:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:40.306 10:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:40.306 10:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:40.306 10:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:40.306 10:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:40.306 10:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:40.306 10:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:40.306 10:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:40.306 10:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:40.306 10:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:40.306 10:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:40.306 10:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:40.306 10:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:40.306 10:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:40.306 10:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:40.306 10:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:26:40.306 Found net devices under 0000:84:00.0: cvl_0_0 00:26:40.306 10:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:40.306 10:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:40.306 10:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:40.306 10:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:40.306 10:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:40.306 10:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:40.306 10:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:40.306 10:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:40.306 10:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:26:40.306 Found net devices under 0000:84:00.1: cvl_0_1 00:26:40.306 10:15:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:40.306 10:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:40.306 10:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # is_hw=yes 00:26:40.306 10:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:40.306 10:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:40.306 10:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:40.306 10:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:40.306 10:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:40.306 10:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:40.306 10:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:40.306 10:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:40.306 10:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:40.306 10:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:40.306 10:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:40.306 10:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:40.306 10:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:40.306 10:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:40.306 10:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:40.306 10:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:40.306 10:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:40.306 10:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:40.306 10:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:40.306 10:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:40.306 10:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:40.306 10:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:40.306 10:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:40.306 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:40.306 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.260 ms 00:26:40.306 00:26:40.306 --- 10.0.0.2 ping statistics --- 00:26:40.306 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:40.306 rtt min/avg/max/mdev = 0.260/0.260/0.260/0.000 ms 00:26:40.306 10:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:40.306 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:40.306 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.137 ms 00:26:40.306 00:26:40.306 --- 10.0.0.1 ping statistics --- 00:26:40.306 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:40.306 rtt min/avg/max/mdev = 0.137/0.137/0.137/0.000 ms 00:26:40.306 10:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:40.306 10:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # return 0 00:26:40.306 10:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:40.306 10:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:40.306 10:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:40.306 10:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:40.306 10:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:40.306 10:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:40.306 10:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:40.306 10:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:26:40.306 10:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:40.307 10:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:40.307 10:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:40.307 10:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=531981 00:26:40.307 10:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:26:40.307 10:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 531981 00:26:40.307 10:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@831 -- # '[' -z 531981 ']' 00:26:40.307 10:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:40.307 10:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:40.307 10:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
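nvmf_tcp_init, traced above, splits the two ice ports into an initiator/target pair: cvl_0_0 is moved into a fresh cvl_0_0_ns_spdk namespace and addressed as 10.0.0.2, cvl_0_1 stays in the root namespace as 10.0.0.1, and nvmfappstart then launches nvmf_tgt wrapped in `ip netns exec cvl_0_0_ns_spdk` so its traffic crosses a real interface pair. A condensed sketch of that wiring, with interface and namespace names taken from the trace (run as root):

# Sketch: initiator/target split across two ports of one NIC, as done by
# nvmf_tcp_init above (names from the trace; illustrative only).
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target-side port
ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side, root ns
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                   # root ns -> namespace
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # namespace -> root ns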
00:26:40.307 10:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:40.307 10:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:40.565 10:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:40.565 10:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # return 0 00:26:40.565 10:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:40.565 10:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:40.565 10:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:40.565 10:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:40.565 10:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:26:40.565 10:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:26:40.565 10:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:26:40.565 10:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:40.565 10:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:26:40.565 10:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:26:40.565 10:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:26:40.565 10:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:26:40.565 10:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=6e9c0eefd2de0eb2db4986ee24e103f5 00:26:40.565 10:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:26:40.565 10:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.E0v 00:26:40.565 10:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 6e9c0eefd2de0eb2db4986ee24e103f5 0 00:26:40.565 10:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 6e9c0eefd2de0eb2db4986ee24e103f5 0 00:26:40.565 10:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:26:40.565 10:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:26:40.565 10:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=6e9c0eefd2de0eb2db4986ee24e103f5 00:26:40.565 10:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:26:40.565 10:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:26:40.565 10:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.E0v 00:26:40.565 10:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.E0v 00:26:40.565 10:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.E0v 00:26:40.823 10:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:26:40.823 10:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:26:40.823 10:15:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:40.823 10:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:26:40.823 10:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:26:40.823 10:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:26:40.823 10:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:26:40.823 10:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=5b667f25dd885b28d43242c38dcf879e88aebda923de297440928247592e19e9 00:26:40.823 10:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:26:40.823 10:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.D7Y 00:26:40.823 10:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 5b667f25dd885b28d43242c38dcf879e88aebda923de297440928247592e19e9 3 00:26:40.823 10:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 5b667f25dd885b28d43242c38dcf879e88aebda923de297440928247592e19e9 3 00:26:40.823 10:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:26:40.823 10:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:26:40.823 10:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=5b667f25dd885b28d43242c38dcf879e88aebda923de297440928247592e19e9 00:26:40.823 10:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:26:40.823 10:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:26:40.823 10:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.D7Y 00:26:40.823 10:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.D7Y 00:26:40.823 10:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.D7Y 00:26:40.823 10:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:26:40.823 10:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:26:40.823 10:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:40.823 10:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:26:40.823 10:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:26:40.823 10:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:26:40.823 10:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:26:40.823 10:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=cf2aa9b24591db1a6f7048a861b64c1f38c8e13616b853d3 00:26:40.823 10:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:26:40.823 10:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.lrr 00:26:40.823 10:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key cf2aa9b24591db1a6f7048a861b64c1f38c8e13616b853d3 0 00:26:40.823 10:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 cf2aa9b24591db1a6f7048a861b64c1f38c8e13616b853d3 0 
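Each gen_dhchap_key call above draws len/2 random bytes with `xxd -p -c0 -l <n> /dev/urandom` (so a 32-hex-char secret reads 16 bytes), and format_dhchap_key then wraps the ASCII-hex secret as DHHC-1:<digest>:<base64>:, where the digest field 00..03 selects none/sha256/sha384/sha512. A sketch of the wrapping step, assuming the nvme-cli secret convention of appending the little-endian CRC-32 of the secret before base64-encoding; the python body below is an illustration, not the harness's verbatim snippet:

# Sketch: wrap an ASCII-hex secret as a DHHC-1 secret string (assumes the
# nvme-cli convention: base64(secret || crc32_le(secret)); illustrative only).
key=cf2aa9b24591db1a6f7048a861b64c1f38c8e13616b853d3   # keys[1] from the trace
digest=0                                # 0=none 1=sha256 2=sha384 3=sha512
python3 - "$key" "$digest" <<'EOF'
import base64, sys, zlib
secret = sys.argv[1].encode()
crc = zlib.crc32(secret).to_bytes(4, "little")
print("DHHC-1:%02x:%s:" % (int(sys.argv[2]), base64.b64encode(secret + crc).decode()))
EOF

Each resulting key file is chmod 0600 and stashed in keys[]/ckeys[]; the trace below registers them with the SPDK app via rpc_cmd keyring_file_add_key key0..key4 / ckey0..ckey3.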
00:26:40.823 10:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:26:40.823 10:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:26:40.823 10:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=cf2aa9b24591db1a6f7048a861b64c1f38c8e13616b853d3 00:26:40.823 10:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:26:40.823 10:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:26:40.823 10:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.lrr 00:26:40.823 10:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.lrr 00:26:40.823 10:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.lrr 00:26:40.823 10:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:26:40.823 10:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:26:40.823 10:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:40.824 10:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:26:40.824 10:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:26:40.824 10:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:26:40.824 10:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:26:40.824 10:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=51525184ce44ef37c0b396b09341a403e7983b21a6506a10 00:26:40.824 10:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:26:40.824 10:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.Uxu 00:26:40.824 10:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 51525184ce44ef37c0b396b09341a403e7983b21a6506a10 2 00:26:40.824 10:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 51525184ce44ef37c0b396b09341a403e7983b21a6506a10 2 00:26:40.824 10:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:26:40.824 10:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:26:40.824 10:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=51525184ce44ef37c0b396b09341a403e7983b21a6506a10 00:26:40.824 10:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:26:40.824 10:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:26:40.824 10:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.Uxu 00:26:40.824 10:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.Uxu 00:26:40.824 10:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.Uxu 00:26:40.824 10:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:26:40.824 10:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:26:40.824 10:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:40.824 10:15:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:26:40.824 10:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:26:40.824 10:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:26:40.824 10:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:26:40.824 10:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=4a625df0110c05b8fc3508b62bcd96ce 00:26:40.824 10:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:26:40.824 10:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.8Cu 00:26:40.824 10:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 4a625df0110c05b8fc3508b62bcd96ce 1 00:26:40.824 10:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 4a625df0110c05b8fc3508b62bcd96ce 1 00:26:40.824 10:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:26:40.824 10:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:26:40.824 10:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=4a625df0110c05b8fc3508b62bcd96ce 00:26:40.824 10:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:26:40.824 10:15:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:26:41.082 10:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.8Cu 00:26:41.082 10:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.8Cu 00:26:41.082 10:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.8Cu 00:26:41.082 10:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:26:41.082 10:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:26:41.082 10:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:41.082 10:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:26:41.082 10:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:26:41.082 10:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:26:41.082 10:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:26:41.082 10:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=ab857990f8bdbebb808a9b31521a30aa 00:26:41.082 10:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:26:41.082 10:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.9Rt 00:26:41.082 10:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key ab857990f8bdbebb808a9b31521a30aa 1 00:26:41.082 10:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 ab857990f8bdbebb808a9b31521a30aa 1 00:26:41.082 10:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:26:41.082 10:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:26:41.082 10:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # 
key=ab857990f8bdbebb808a9b31521a30aa 00:26:41.082 10:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:26:41.082 10:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:26:41.082 10:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.9Rt 00:26:41.082 10:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.9Rt 00:26:41.082 10:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.9Rt 00:26:41.082 10:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:26:41.082 10:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:26:41.082 10:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:41.082 10:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:26:41.082 10:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:26:41.082 10:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:26:41.082 10:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:26:41.082 10:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=364a42901cb7e3f5908a93f44139e9ce7f3e1aea1a80f2a7 00:26:41.082 10:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:26:41.082 10:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.kYO 00:26:41.082 10:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 364a42901cb7e3f5908a93f44139e9ce7f3e1aea1a80f2a7 2 00:26:41.082 10:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 364a42901cb7e3f5908a93f44139e9ce7f3e1aea1a80f2a7 2 00:26:41.082 10:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:26:41.082 10:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:26:41.082 10:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=364a42901cb7e3f5908a93f44139e9ce7f3e1aea1a80f2a7 00:26:41.082 10:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:26:41.082 10:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:26:41.082 10:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.kYO 00:26:41.082 10:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.kYO 00:26:41.082 10:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.kYO 00:26:41.082 10:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:26:41.082 10:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:26:41.082 10:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:41.082 10:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:26:41.082 10:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:26:41.082 10:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:26:41.082 10:15:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:26:41.082 10:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=511ca8196298c7d14b072a4e76b63b50 00:26:41.082 10:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:26:41.082 10:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.Jcq 00:26:41.082 10:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 511ca8196298c7d14b072a4e76b63b50 0 00:26:41.082 10:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 511ca8196298c7d14b072a4e76b63b50 0 00:26:41.082 10:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:26:41.083 10:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:26:41.083 10:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=511ca8196298c7d14b072a4e76b63b50 00:26:41.083 10:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:26:41.083 10:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:26:41.083 10:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.Jcq 00:26:41.083 10:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.Jcq 00:26:41.083 10:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.Jcq 00:26:41.341 10:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:26:41.341 10:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:26:41.341 10:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:26:41.341 10:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:26:41.341 10:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:26:41.341 10:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:26:41.341 10:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:26:41.341 10:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=3e52c2220fdbac79d9bf36c723a2b06d65e8b1541800b03efdf1918ceedf483f 00:26:41.341 10:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:26:41.341 10:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.023 00:26:41.341 10:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 3e52c2220fdbac79d9bf36c723a2b06d65e8b1541800b03efdf1918ceedf483f 3 00:26:41.341 10:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 3e52c2220fdbac79d9bf36c723a2b06d65e8b1541800b03efdf1918ceedf483f 3 00:26:41.341 10:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:26:41.341 10:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:26:41.341 10:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=3e52c2220fdbac79d9bf36c723a2b06d65e8b1541800b03efdf1918ceedf483f 00:26:41.341 10:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:26:41.341 10:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@705 -- # python - 00:26:41.341 10:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.023 00:26:41.341 10:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.023 00:26:41.341 10:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.023 00:26:41.341 10:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:26:41.341 10:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 531981 00:26:41.341 10:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@831 -- # '[' -z 531981 ']' 00:26:41.341 10:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:41.341 10:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:41.341 10:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:41.341 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:41.341 10:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:41.341 10:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:41.600 10:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:41.600 10:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # return 0 00:26:41.600 10:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:26:41.600 10:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.E0v 00:26:41.600 10:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:41.600 10:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:41.600 10:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:41.600 10:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.D7Y ]] 00:26:41.600 10:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.D7Y 00:26:41.600 10:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:41.600 10:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:41.600 10:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:41.600 10:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:26:41.600 10:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.lrr 00:26:41.600 10:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:41.600 10:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:41.600 10:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:41.600 10:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.Uxu ]] 00:26:41.600 10:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 
/tmp/spdk.key-sha384.Uxu 00:26:41.600 10:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:41.600 10:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:41.600 10:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:41.600 10:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:26:41.600 10:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.8Cu 00:26:41.600 10:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:41.600 10:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:41.600 10:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:41.600 10:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.9Rt ]] 00:26:41.600 10:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.9Rt 00:26:41.600 10:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:41.600 10:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:41.600 10:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:41.600 10:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:26:41.600 10:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.kYO 00:26:41.600 10:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:41.600 10:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:41.600 10:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:41.600 10:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.Jcq ]] 00:26:41.600 10:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.Jcq 00:26:41.600 10:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:41.600 10:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:41.600 10:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:41.600 10:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:26:41.600 10:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.023 00:26:41.600 10:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:41.600 10:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:41.600 10:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:41.600 10:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:26:41.600 10:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:26:41.600 10:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:26:41.600 10:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:41.600 10:15:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:41.600 10:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:41.600 10:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:41.600 10:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:41.600 10:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:41.600 10:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:41.600 10:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:41.600 10:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:41.600 10:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:41.600 10:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:26:41.600 10:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:26:41.600 10:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:26:41.600 10:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:26:41.600 10:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:26:41.600 10:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:26:41.600 10:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 00:26:41.600 10:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:26:41.600 10:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:26:41.600 10:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:26:41.600 10:15:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:26:42.972 Waiting for block devices as requested 00:26:42.972 0000:82:00.0 (8086 0a54): vfio-pci -> nvme 00:26:42.972 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:26:42.972 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:26:43.230 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:26:43.230 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:26:43.230 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:26:43.488 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:26:43.488 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:26:43.488 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:26:43.488 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:26:43.746 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:26:43.746 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:26:43.746 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:26:43.746 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:26:44.004 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:26:44.004 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:26:44.004 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:26:44.569 10:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:26:44.569 10:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:26:44.569 10:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:26:44.569 10:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:26:44.569 10:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:26:44.569 10:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:26:44.569 10:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:26:44.569 10:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:26:44.569 10:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:26:44.569 No valid GPT data, bailing 00:26:44.569 10:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:26:44.569 10:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:26:44.569 10:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:26:44.569 10:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:26:44.569 10:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:26:44.569 10:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:26:44.569 10:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:26:44.569 10:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:26:44.569 10:15:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:26:44.569 10:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # echo 1 00:26:44.569 10:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:26:44.569 10:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:26:44.569 10:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:26:44.569 10:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@672 -- # echo tcp 00:26:44.569 10:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 00:26:44.569 10:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@674 -- # echo ipv4 00:26:44.569 10:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:26:44.569 10:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -a 10.0.0.1 -t tcp -s 4420
00:26:44.827
00:26:44.827 Discovery Log Number of Records 2, Generation counter 2
00:26:44.827 =====Discovery Log Entry 0======
00:26:44.827 trtype: tcp
00:26:44.827 adrfam: ipv4
00:26:44.827 subtype: current discovery subsystem
00:26:44.827 treq: not specified, sq flow control disable supported
00:26:44.827 portid: 1
00:26:44.827 trsvcid: 4420
00:26:44.827 subnqn: nqn.2014-08.org.nvmexpress.discovery
00:26:44.827 traddr: 10.0.0.1
00:26:44.827 eflags: none
00:26:44.827 sectype: none
00:26:44.827 =====Discovery Log Entry 1======
00:26:44.827 trtype: tcp
00:26:44.827 adrfam: ipv4
00:26:44.827 subtype: nvme subsystem
00:26:44.827 treq: not specified, sq flow control disable supported
00:26:44.827 portid: 1
00:26:44.827 trsvcid: 4420
00:26:44.827 subnqn: nqn.2024-02.io.spdk:cnode0
00:26:44.827 traddr: 10.0.0.1
00:26:44.827 eflags: none
00:26:44.827 sectype: none
00:26:44.827 10:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:26:44.827 10:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:26:44.827 10:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:26:44.827 10:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:26:44.827 10:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:44.827 10:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:44.827 10:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:44.827 10:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:44.827 10:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2YyYWE5YjI0NTkxZGIxYTZmNzA0OGE4NjFiNjRjMWYzOGM4ZTEzNjE2Yjg1M2QzK+wtZA==: 00:26:44.828 10:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTE1MjUxODRjZTQ0ZWYzN2MwYjM5NmIwOTM0MWE0MDNlNzk4M2IyMWE2NTA2YTEw2ACdyA==: 00:26:44.828 10:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:44.828 10:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host
-- host/auth.sh@49 -- # echo ffdhe2048 00:26:44.828 10:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2YyYWE5YjI0NTkxZGIxYTZmNzA0OGE4NjFiNjRjMWYzOGM4ZTEzNjE2Yjg1M2QzK+wtZA==: 00:26:44.828 10:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTE1MjUxODRjZTQ0ZWYzN2MwYjM5NmIwOTM0MWE0MDNlNzk4M2IyMWE2NTA2YTEw2ACdyA==: ]] 00:26:44.828 10:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTE1MjUxODRjZTQ0ZWYzN2MwYjM5NmIwOTM0MWE0MDNlNzk4M2IyMWE2NTA2YTEw2ACdyA==: 00:26:44.828 10:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:26:44.828 10:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:26:44.828 10:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:26:44.828 10:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:26:44.828 10:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:26:44.828 10:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:44.828 10:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:26:44.828 10:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:26:44.828 10:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:44.828 10:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:44.828 10:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:26:44.828 10:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:44.828 10:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:44.828 10:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:44.828 10:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:44.828 10:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:44.828 10:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:44.828 10:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:44.828 10:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:44.828 10:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:44.828 10:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:44.828 10:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:44.828 10:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:44.828 10:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:44.828 10:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:44.828 10:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:44.828 10:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:44.828 10:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:44.828 nvme0n1 00:26:44.828 10:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:44.828 10:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:44.828 10:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:44.828 10:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:44.828 10:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:45.086 10:15:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:45.086 10:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:45.086 10:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:45.086 10:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:45.086 10:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:45.086 10:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:45.086 10:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:26:45.086 10:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:45.086 10:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:45.086 10:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:26:45.086 10:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:45.086 10:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:45.087 10:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:45.087 10:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:45.087 10:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmU5YzBlZWZkMmRlMGViMmRiNDk4NmVlMjRlMTAzZjX5F6ol: 00:26:45.087 10:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NWI2NjdmMjVkZDg4NWIyOGQ0MzI0MmMzOGRjZjg3OWU4OGFlYmRhOTIzZGUyOTc0NDA5MjgyNDc1OTJlMTllObrkimg=: 00:26:45.087 10:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:45.087 10:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:45.087 10:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmU5YzBlZWZkMmRlMGViMmRiNDk4NmVlMjRlMTAzZjX5F6ol: 00:26:45.087 10:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NWI2NjdmMjVkZDg4NWIyOGQ0MzI0MmMzOGRjZjg3OWU4OGFlYmRhOTIzZGUyOTc0NDA5MjgyNDc1OTJlMTllObrkimg=: ]] 00:26:45.087 10:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NWI2NjdmMjVkZDg4NWIyOGQ0MzI0MmMzOGRjZjg3OWU4OGFlYmRhOTIzZGUyOTc0NDA5MjgyNDc1OTJlMTllObrkimg=: 00:26:45.087 10:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 
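The kernel-side target built in the trace above lives entirely in nvmet configfs. The xtrace elides the redirection target of each echo, so the attribute names below are inferred from the stock /sys/kernel/config/nvmet layout rather than read out of the log; a minimal sketch of the same sequence:

  # Sketch of the configure_kernel_target steps traced above; the
  # redirect targets are assumptions based on the standard nvmet layout.
  nvmet=/sys/kernel/config/nvmet
  subsys=$nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
  mkdir $subsys $subsys/namespaces/1 $nvmet/ports/1
  echo SPDK-nqn.2024-02.io.spdk:cnode0 > $subsys/attr_serial
  echo 1 > $subsys/attr_allow_any_host            # auth.sh later flips this to 0
  echo /dev/nvme0n1 > $subsys/namespaces/1/device_path
  echo 1 > $subsys/namespaces/1/enable
  echo 10.0.0.1 > $nvmet/ports/1/addr_traddr
  echo tcp > $nvmet/ports/1/addr_trtype
  echo 4420 > $nvmet/ports/1/addr_trsvcid
  echo ipv4 > $nvmet/ports/1/addr_adrfam
  ln -s $subsys $nvmet/ports/1/subsystems/        # publish the subsystem on the port

The host side of the same round trip is the attach/verify/detach cycle that just ran. Condensed from the xtrace, where rpc_cmd is the harness wrapper around SPDK's scripts/rpc.py and key1/ckey1 are DH-HMAC-CHAP key names registered with the initiator earlier in the test (not shown in this part of the log):

  rpc_cmd bdev_nvme_set_options \
      --dhchap-digests sha256,sha384,sha512 \
      --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
  rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key key1 --dhchap-ctrlr-key ckey1
  # The attach only completes if DH-HMAC-CHAP succeeds, so finding the
  # controller by name is the pass condition before tearing it down again.
  [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
  rpc_cmd bdev_nvme_detach_controller nvme0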
00:26:45.087 10:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:45.087 10:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:45.087 10:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:45.087 10:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:45.087 10:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:45.087 10:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:45.087 10:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:45.087 10:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:45.087 10:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:45.087 10:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:45.087 10:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:45.087 10:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:45.087 10:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:45.087 10:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:45.087 10:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:45.087 10:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:45.087 10:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:45.087 10:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:45.087 10:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:45.087 10:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:45.087 10:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:45.087 10:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:45.087 10:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:45.087 nvme0n1 00:26:45.087 10:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:45.087 10:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:45.087 10:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:45.087 10:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:45.087 10:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:45.087 10:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:45.345 10:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:45.345 10:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:45.345 10:15:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:45.345 10:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:45.345 10:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:45.345 10:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:45.345 10:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:26:45.345 10:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:45.345 10:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:45.345 10:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:45.345 10:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:45.345 10:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2YyYWE5YjI0NTkxZGIxYTZmNzA0OGE4NjFiNjRjMWYzOGM4ZTEzNjE2Yjg1M2QzK+wtZA==: 00:26:45.345 10:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTE1MjUxODRjZTQ0ZWYzN2MwYjM5NmIwOTM0MWE0MDNlNzk4M2IyMWE2NTA2YTEw2ACdyA==: 00:26:45.345 10:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:45.345 10:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:45.345 10:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2YyYWE5YjI0NTkxZGIxYTZmNzA0OGE4NjFiNjRjMWYzOGM4ZTEzNjE2Yjg1M2QzK+wtZA==: 00:26:45.345 10:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTE1MjUxODRjZTQ0ZWYzN2MwYjM5NmIwOTM0MWE0MDNlNzk4M2IyMWE2NTA2YTEw2ACdyA==: ]] 00:26:45.345 10:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTE1MjUxODRjZTQ0ZWYzN2MwYjM5NmIwOTM0MWE0MDNlNzk4M2IyMWE2NTA2YTEw2ACdyA==: 00:26:45.345 10:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:26:45.345 10:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:45.345 10:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:45.345 10:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:45.345 10:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:45.345 10:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:45.345 10:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:45.345 10:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:45.345 10:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:45.345 10:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:45.345 10:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:45.345 10:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:45.345 10:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:45.345 10:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:45.345 10:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:45.345 10:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:45.345 10:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:45.345 10:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:45.345 10:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:45.345 10:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:45.345 10:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:45.345 10:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:45.345 10:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:45.345 10:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:45.345 nvme0n1 00:26:45.345 10:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:45.345 10:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:45.345 10:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:45.345 10:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:45.345 10:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:45.345 10:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:45.603 10:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:45.604 10:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:45.604 10:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:45.604 10:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:45.604 10:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:45.604 10:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:45.604 10:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:26:45.604 10:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:45.604 10:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:45.604 10:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:45.604 10:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:45.604 10:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGE2MjVkZjAxMTBjMDViOGZjMzUwOGI2MmJjZDk2Y2UJepV+: 00:26:45.604 10:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWI4NTc5OTBmOGJkYmViYjgwOGE5YjMxNTIxYTMwYWF/Qx7D: 00:26:45.604 10:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:45.604 10:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:45.604 10:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:01:NGE2MjVkZjAxMTBjMDViOGZjMzUwOGI2MmJjZDk2Y2UJepV+: 00:26:45.604 10:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWI4NTc5OTBmOGJkYmViYjgwOGE5YjMxNTIxYTMwYWF/Qx7D: ]] 00:26:45.604 10:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWI4NTc5OTBmOGJkYmViYjgwOGE5YjMxNTIxYTMwYWF/Qx7D: 00:26:45.604 10:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:26:45.604 10:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:45.604 10:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:45.604 10:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:45.604 10:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:45.604 10:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:45.604 10:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:45.604 10:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:45.604 10:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:45.604 10:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:45.604 10:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:45.604 10:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:45.604 10:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:45.604 10:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:45.604 10:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:45.604 10:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:45.604 10:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:45.604 10:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:45.604 10:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:45.604 10:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:45.604 10:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:45.604 10:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:45.604 10:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:45.604 10:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:45.604 nvme0n1 00:26:45.604 10:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:45.604 10:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:45.604 10:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:45.604 10:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # 
set +x 00:26:45.604 10:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:45.604 10:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:45.604 10:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:45.604 10:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:45.604 10:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:45.604 10:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:45.604 10:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:45.604 10:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:45.604 10:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:26:45.604 10:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:45.604 10:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:45.604 10:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:45.604 10:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:45.604 10:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzY0YTQyOTAxY2I3ZTNmNTkwOGE5M2Y0NDEzOWU5Y2U3ZjNlMWFlYTFhODBmMmE3up7JUQ==: 00:26:45.604 10:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTExY2E4MTk2Mjk4YzdkMTRiMDcyYTRlNzZiNjNiNTDQl/Qu: 00:26:45.604 10:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:45.604 10:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:45.604 10:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzY0YTQyOTAxY2I3ZTNmNTkwOGE5M2Y0NDEzOWU5Y2U3ZjNlMWFlYTFhODBmMmE3up7JUQ==: 00:26:45.604 10:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTExY2E4MTk2Mjk4YzdkMTRiMDcyYTRlNzZiNjNiNTDQl/Qu: ]] 00:26:45.604 10:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTExY2E4MTk2Mjk4YzdkMTRiMDcyYTRlNzZiNjNiNTDQl/Qu: 00:26:45.604 10:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:26:45.604 10:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:45.604 10:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:45.604 10:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:45.604 10:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:45.604 10:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:45.604 10:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:45.604 10:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:45.604 10:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:45.862 10:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:45.862 10:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 
-- # get_main_ns_ip 00:26:45.862 10:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:45.862 10:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:45.862 10:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:45.862 10:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:45.862 10:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:45.862 10:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:45.862 10:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:45.862 10:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:45.862 10:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:45.862 10:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:45.862 10:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:45.862 10:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:45.862 10:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:45.862 nvme0n1 00:26:45.862 10:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:45.862 10:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:45.862 10:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:45.862 10:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:45.862 10:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:45.862 10:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:45.862 10:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:45.862 10:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:45.862 10:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:45.862 10:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:45.862 10:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:45.862 10:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:45.862 10:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:26:45.862 10:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:45.862 10:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:45.862 10:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:45.862 10:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:45.862 10:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:M2U1MmMyMjIwZmRiYWM3OWQ5YmYzNmM3MjNhMmIwNmQ2NWU4YjE1NDE4MDBiMDNlZmRmMTkxOGNlZWRmNDgzZtUKgKU=: 
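The secrets in this trace use the NVMe TP 8006 ASCII representation, DHHC-1:<t>:<base64>:, where <t> records the transformation hash applied to the configured secret (00 none, 01 SHA-256, 02 SHA-384, 03 SHA-512) and the base64 payload is the secret followed by a 4-byte CRC-32 check value. A hypothetical helper, not part of the harness, to recover a key's secret length, shown against the keyid=0 secret from the trace:

  # payload = secret || CRC-32(secret); key string copied from the log above
  key='DHHC-1:00:NmU5YzBlZWZkMmRlMGViMmRiNDk4NmVlMjRlMTAzZjX5F6ol:'
  b64=${key#DHHC-1:??:}                  # drop the 'DHHC-1:<t>:' prefix
  total=$(printf '%s' "${b64%:}" | base64 -d | wc -c)
  echo "secret is $((total - 4)) bytes"  # prints: secret is 32 bytes

The keyid=4 secret above (DHHC-1:03:...) decodes the same way to 64 bytes, which lines up with its SHA-512 transform.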
00:26:45.862 10:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:45.862 10:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:45.862 10:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:45.862 10:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:M2U1MmMyMjIwZmRiYWM3OWQ5YmYzNmM3MjNhMmIwNmQ2NWU4YjE1NDE4MDBiMDNlZmRmMTkxOGNlZWRmNDgzZtUKgKU=: 00:26:45.862 10:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:45.862 10:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:26:45.862 10:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:45.862 10:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:45.862 10:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:45.862 10:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:45.862 10:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:45.862 10:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:26:45.862 10:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:45.862 10:15:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:45.862 10:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:45.862 10:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:45.862 10:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:45.862 10:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:45.862 10:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:45.862 10:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:45.862 10:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:45.862 10:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:45.862 10:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:45.862 10:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:45.862 10:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:45.862 10:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:45.862 10:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:45.862 10:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:45.862 10:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:46.119 nvme0n1 00:26:46.119 10:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:46.119 10:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:46.119 10:15:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:46.119 10:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:46.119 10:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:46.119 10:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:46.119 10:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:46.119 10:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:46.119 10:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:46.119 10:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:46.119 10:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:46.119 10:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:46.119 10:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:46.119 10:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:26:46.119 10:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:46.119 10:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:46.119 10:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:46.119 10:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:46.119 10:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmU5YzBlZWZkMmRlMGViMmRiNDk4NmVlMjRlMTAzZjX5F6ol: 00:26:46.119 10:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NWI2NjdmMjVkZDg4NWIyOGQ0MzI0MmMzOGRjZjg3OWU4OGFlYmRhOTIzZGUyOTc0NDA5MjgyNDc1OTJlMTllObrkimg=: 00:26:46.119 10:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:46.119 10:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:46.119 10:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmU5YzBlZWZkMmRlMGViMmRiNDk4NmVlMjRlMTAzZjX5F6ol: 00:26:46.119 10:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NWI2NjdmMjVkZDg4NWIyOGQ0MzI0MmMzOGRjZjg3OWU4OGFlYmRhOTIzZGUyOTc0NDA5MjgyNDc1OTJlMTllObrkimg=: ]] 00:26:46.119 10:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NWI2NjdmMjVkZDg4NWIyOGQ0MzI0MmMzOGRjZjg3OWU4OGFlYmRhOTIzZGUyOTc0NDA5MjgyNDc1OTJlMTllObrkimg=: 00:26:46.119 10:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:26:46.119 10:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:46.119 10:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:46.119 10:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:46.119 10:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:46.119 10:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:46.119 10:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:26:46.119 
10:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:46.119 10:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:46.120 10:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:46.120 10:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:46.120 10:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:46.120 10:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:46.120 10:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:46.120 10:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:46.120 10:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:46.120 10:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:46.120 10:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:46.120 10:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:46.120 10:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:46.120 10:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:46.120 10:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:46.120 10:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:46.120 10:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:46.377 nvme0n1 00:26:46.377 10:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:46.377 10:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:46.377 10:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:46.377 10:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:46.377 10:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:46.377 10:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:46.377 10:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:46.377 10:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:46.377 10:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:46.377 10:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:46.377 10:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:46.377 10:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:46.377 10:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:26:46.377 10:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:46.377 10:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # 
digest=sha256 00:26:46.377 10:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:46.377 10:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:46.377 10:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2YyYWE5YjI0NTkxZGIxYTZmNzA0OGE4NjFiNjRjMWYzOGM4ZTEzNjE2Yjg1M2QzK+wtZA==: 00:26:46.377 10:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTE1MjUxODRjZTQ0ZWYzN2MwYjM5NmIwOTM0MWE0MDNlNzk4M2IyMWE2NTA2YTEw2ACdyA==: 00:26:46.377 10:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:46.377 10:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:46.377 10:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2YyYWE5YjI0NTkxZGIxYTZmNzA0OGE4NjFiNjRjMWYzOGM4ZTEzNjE2Yjg1M2QzK+wtZA==: 00:26:46.377 10:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTE1MjUxODRjZTQ0ZWYzN2MwYjM5NmIwOTM0MWE0MDNlNzk4M2IyMWE2NTA2YTEw2ACdyA==: ]] 00:26:46.377 10:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTE1MjUxODRjZTQ0ZWYzN2MwYjM5NmIwOTM0MWE0MDNlNzk4M2IyMWE2NTA2YTEw2ACdyA==: 00:26:46.377 10:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:26:46.377 10:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:46.377 10:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:46.377 10:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:46.377 10:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:46.377 10:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:46.377 10:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:26:46.377 10:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:46.377 10:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:46.377 10:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:46.377 10:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:46.377 10:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:46.377 10:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:46.377 10:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:46.377 10:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:46.377 10:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:46.377 10:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:46.377 10:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:46.377 10:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:46.377 10:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:46.377 10:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:46.377 10:15:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:46.377 10:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:46.377 10:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:46.634 nvme0n1 00:26:46.634 10:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:46.634 10:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:46.634 10:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:46.634 10:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:46.634 10:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:46.634 10:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:46.891 10:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:46.891 10:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:46.891 10:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:46.891 10:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:46.891 10:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:46.891 10:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:46.891 10:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:26:46.891 10:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:46.891 10:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:46.891 10:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:46.891 10:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:46.891 10:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGE2MjVkZjAxMTBjMDViOGZjMzUwOGI2MmJjZDk2Y2UJepV+: 00:26:46.891 10:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWI4NTc5OTBmOGJkYmViYjgwOGE5YjMxNTIxYTMwYWF/Qx7D: 00:26:46.891 10:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:46.891 10:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:46.891 10:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGE2MjVkZjAxMTBjMDViOGZjMzUwOGI2MmJjZDk2Y2UJepV+: 00:26:46.891 10:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWI4NTc5OTBmOGJkYmViYjgwOGE5YjMxNTIxYTMwYWF/Qx7D: ]] 00:26:46.891 10:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWI4NTc5OTBmOGJkYmViYjgwOGE5YjMxNTIxYTMwYWF/Qx7D: 00:26:46.891 10:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:26:46.891 10:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:46.891 10:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:46.891 10:15:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:46.891 10:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:46.891 10:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:46.891 10:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:26:46.892 10:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:46.892 10:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:46.892 10:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:46.892 10:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:46.892 10:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:46.892 10:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:46.892 10:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:46.892 10:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:46.892 10:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:46.892 10:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:46.892 10:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:46.892 10:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:46.892 10:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:46.892 10:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:46.892 10:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:46.892 10:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:46.892 10:15:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:46.892 nvme0n1 00:26:46.892 10:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:46.892 10:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:46.892 10:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:46.892 10:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:46.892 10:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:46.892 10:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:47.150 10:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:47.150 10:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:47.150 10:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:47.150 10:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:47.150 10:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:47.150 10:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:47.150 10:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:26:47.150 10:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:47.150 10:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:47.150 10:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:47.150 10:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:47.150 10:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzY0YTQyOTAxY2I3ZTNmNTkwOGE5M2Y0NDEzOWU5Y2U3ZjNlMWFlYTFhODBmMmE3up7JUQ==: 00:26:47.150 10:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTExY2E4MTk2Mjk4YzdkMTRiMDcyYTRlNzZiNjNiNTDQl/Qu: 00:26:47.150 10:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:47.150 10:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:47.150 10:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzY0YTQyOTAxY2I3ZTNmNTkwOGE5M2Y0NDEzOWU5Y2U3ZjNlMWFlYTFhODBmMmE3up7JUQ==: 00:26:47.150 10:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTExY2E4MTk2Mjk4YzdkMTRiMDcyYTRlNzZiNjNiNTDQl/Qu: ]] 00:26:47.150 10:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTExY2E4MTk2Mjk4YzdkMTRiMDcyYTRlNzZiNjNiNTDQl/Qu: 00:26:47.150 10:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:26:47.150 10:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:47.150 10:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:47.150 10:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:47.150 10:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:47.150 10:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:47.150 10:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:26:47.150 10:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:47.150 10:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:47.150 10:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:47.150 10:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:47.150 10:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:47.150 10:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:47.150 10:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:47.150 10:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:47.150 10:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:47.150 10:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:47.150 10:15:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:47.150 10:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:47.150 10:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:47.150 10:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:47.150 10:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:47.150 10:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:47.150 10:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:47.408 nvme0n1 00:26:47.408 10:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:47.408 10:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:47.408 10:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:47.408 10:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:47.408 10:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:47.408 10:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:47.408 10:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:47.408 10:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:47.408 10:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:47.408 10:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:47.408 10:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:47.408 10:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:47.408 10:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:26:47.408 10:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:47.408 10:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:47.408 10:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:26:47.408 10:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:47.408 10:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:M2U1MmMyMjIwZmRiYWM3OWQ5YmYzNmM3MjNhMmIwNmQ2NWU4YjE1NDE4MDBiMDNlZmRmMTkxOGNlZWRmNDgzZtUKgKU=: 00:26:47.408 10:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:47.408 10:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:47.408 10:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:26:47.408 10:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:M2U1MmMyMjIwZmRiYWM3OWQ5YmYzNmM3MjNhMmIwNmQ2NWU4YjE1NDE4MDBiMDNlZmRmMTkxOGNlZWRmNDgzZtUKgKU=: 00:26:47.408 10:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:47.409 10:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 
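Everything from the host/auth.sh@100..@104 markers onward is one nested sweep: each digest is crossed with each FFDHE group and each of the five key slots, so the set-key/connect pattern above repeats 3 x 5 x 5 times. Paraphrased from the trace, with the helper bodies omitted:

  digests=(sha256 sha384 sha512)
  dhgroups=(ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192)
  keys=()   # the five DHHC-1 secrets (keys[0..4]) set up earlier in the script
  for digest in "${digests[@]}"; do
    for dhgroup in "${dhgroups[@]}"; do
      for keyid in "${!keys[@]}"; do
        nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"    # kernel target side
        connect_authenticate "$digest" "$dhgroup" "$keyid"  # SPDK host side
      done
    done
  done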
00:26:47.409 10:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:47.409 10:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:47.409 10:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:26:47.409 10:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:47.409 10:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:47.409 10:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:26:47.409 10:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:47.409 10:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:47.409 10:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:47.409 10:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:47.409 10:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:47.409 10:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:47.409 10:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:47.409 10:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:47.409 10:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:47.409 10:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:47.409 10:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:47.409 10:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:47.409 10:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:47.409 10:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:47.409 10:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:47.409 10:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:47.409 10:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:47.673 nvme0n1 00:26:47.673 10:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:47.673 10:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:47.673 10:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:47.673 10:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:47.673 10:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:47.673 10:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:47.673 10:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:47.673 10:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:47.673 10:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:26:47.673 10:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:47.673 10:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:47.673 10:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:47.673 10:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:47.673 10:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:26:47.673 10:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:47.673 10:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:47.673 10:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:47.673 10:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:47.673 10:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmU5YzBlZWZkMmRlMGViMmRiNDk4NmVlMjRlMTAzZjX5F6ol: 00:26:47.673 10:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NWI2NjdmMjVkZDg4NWIyOGQ0MzI0MmMzOGRjZjg3OWU4OGFlYmRhOTIzZGUyOTc0NDA5MjgyNDc1OTJlMTllObrkimg=: 00:26:47.673 10:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:47.673 10:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:47.673 10:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmU5YzBlZWZkMmRlMGViMmRiNDk4NmVlMjRlMTAzZjX5F6ol: 00:26:47.673 10:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NWI2NjdmMjVkZDg4NWIyOGQ0MzI0MmMzOGRjZjg3OWU4OGFlYmRhOTIzZGUyOTc0NDA5MjgyNDc1OTJlMTllObrkimg=: ]] 00:26:47.673 10:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NWI2NjdmMjVkZDg4NWIyOGQ0MzI0MmMzOGRjZjg3OWU4OGFlYmRhOTIzZGUyOTc0NDA5MjgyNDc1OTJlMTllObrkimg=: 00:26:47.673 10:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:26:47.673 10:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:47.673 10:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:47.673 10:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:47.673 10:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:47.673 10:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:47.673 10:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:26:47.673 10:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:47.673 10:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:47.673 10:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:47.673 10:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:47.673 10:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:47.673 10:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:47.673 10:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local 
-A ip_candidates 00:26:47.673 10:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:47.673 10:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:47.673 10:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:47.673 10:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:47.673 10:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:47.673 10:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:47.673 10:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:47.673 10:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:47.673 10:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:47.673 10:15:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:47.999 nvme0n1 00:26:47.999 10:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:47.999 10:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:47.999 10:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:47.999 10:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:47.999 10:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:47.999 10:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:48.257 10:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:48.257 10:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:48.257 10:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:48.257 10:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:48.257 10:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:48.257 10:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:48.257 10:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:26:48.257 10:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:48.257 10:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:48.257 10:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:48.257 10:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:48.257 10:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2YyYWE5YjI0NTkxZGIxYTZmNzA0OGE4NjFiNjRjMWYzOGM4ZTEzNjE2Yjg1M2QzK+wtZA==: 00:26:48.257 10:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTE1MjUxODRjZTQ0ZWYzN2MwYjM5NmIwOTM0MWE0MDNlNzk4M2IyMWE2NTA2YTEw2ACdyA==: 00:26:48.257 10:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:48.257 10:15:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:48.257 10:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2YyYWE5YjI0NTkxZGIxYTZmNzA0OGE4NjFiNjRjMWYzOGM4ZTEzNjE2Yjg1M2QzK+wtZA==: 00:26:48.257 10:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTE1MjUxODRjZTQ0ZWYzN2MwYjM5NmIwOTM0MWE0MDNlNzk4M2IyMWE2NTA2YTEw2ACdyA==: ]] 00:26:48.257 10:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTE1MjUxODRjZTQ0ZWYzN2MwYjM5NmIwOTM0MWE0MDNlNzk4M2IyMWE2NTA2YTEw2ACdyA==: 00:26:48.257 10:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:26:48.257 10:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:48.257 10:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:48.257 10:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:48.257 10:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:48.257 10:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:48.257 10:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:26:48.257 10:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:48.257 10:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:48.258 10:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:48.258 10:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:48.258 10:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:48.258 10:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:48.258 10:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:48.258 10:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:48.258 10:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:48.258 10:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:48.258 10:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:48.258 10:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:48.258 10:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:48.258 10:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:48.258 10:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:48.258 10:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:48.258 10:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:48.516 nvme0n1 00:26:48.516 10:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:48.516 10:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:26:48.516 10:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:48.516 10:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:48.516 10:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:48.516 10:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:48.516 10:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:48.516 10:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:48.516 10:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:48.516 10:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:48.516 10:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:48.516 10:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:48.516 10:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:26:48.516 10:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:48.516 10:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:48.516 10:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:48.516 10:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:48.516 10:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGE2MjVkZjAxMTBjMDViOGZjMzUwOGI2MmJjZDk2Y2UJepV+: 00:26:48.516 10:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWI4NTc5OTBmOGJkYmViYjgwOGE5YjMxNTIxYTMwYWF/Qx7D: 00:26:48.516 10:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:48.516 10:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:48.516 10:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGE2MjVkZjAxMTBjMDViOGZjMzUwOGI2MmJjZDk2Y2UJepV+: 00:26:48.516 10:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWI4NTc5OTBmOGJkYmViYjgwOGE5YjMxNTIxYTMwYWF/Qx7D: ]] 00:26:48.516 10:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWI4NTc5OTBmOGJkYmViYjgwOGE5YjMxNTIxYTMwYWF/Qx7D: 00:26:48.516 10:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:26:48.516 10:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:48.516 10:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:48.516 10:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:48.516 10:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:48.516 10:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:48.516 10:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:26:48.516 10:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:48.516 10:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
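Annotation: every connect_authenticate pass repeats the same host-side RPC sequence, visible verbatim in the trace: restrict the allowed digest and DH group, attach with the per-index DH-HMAC-CHAP key (plus the controller key ckeyN when one exists — keyid 4 has an empty ckey and attaches with --dhchap-key alone), confirm the controller came up, then detach. Lifted from this sha256/ffdhe4096/keyid=2 pass, assuming key2/ckey2 were registered with the host earlier in the run and the target is listening on 10.0.0.1:4420 as shown:

    rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key2 --dhchap-ctrlr-key ckey2
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]   # attach succeeded
    rpc_cmd bdev_nvme_detach_controller nvme0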
00:26:48.516 10:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:48.516 10:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:48.516 10:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:48.516 10:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:48.516 10:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:48.516 10:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:48.516 10:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:48.516 10:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:48.516 10:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:48.516 10:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:48.516 10:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:48.516 10:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:48.516 10:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:48.516 10:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:48.516 10:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:49.081 nvme0n1 00:26:49.081 10:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:49.081 10:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:49.081 10:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:49.081 10:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:49.081 10:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:49.081 10:15:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:49.081 10:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:49.081 10:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:49.081 10:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:49.081 10:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:49.081 10:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:49.081 10:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:49.081 10:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:26:49.081 10:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:49.081 10:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:49.081 10:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:49.081 10:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 
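Annotation: the get_main_ns_ip helper traced repeatedly above (nvmf/common.sh@741-755) just maps the transport to the right environment variable and dereferences it — 10.0.0.1 for tcp in this run. A rough reconstruction from the xtrace; the transport variable name is illustrative and the real helper may carry extra fallbacks:

    get_main_ns_ip() {
        local ip
        local -A ip_candidates=(
            ["rdma"]=NVMF_FIRST_TARGET_IP
            ["tcp"]=NVMF_INITIATOR_IP
        )
        [[ -z $TEST_TRANSPORT ]] && return 1      # trace shows: [[ -z tcp ]]
        ip=${ip_candidates[$TEST_TRANSPORT]}      # -> NVMF_INITIATOR_IP for tcp
        [[ -z ${!ip} ]] && return 1               # indirect check on the named variable
        echo "${!ip}"                             # 10.0.0.1 here
    }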
00:26:49.081 10:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzY0YTQyOTAxY2I3ZTNmNTkwOGE5M2Y0NDEzOWU5Y2U3ZjNlMWFlYTFhODBmMmE3up7JUQ==: 00:26:49.081 10:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTExY2E4MTk2Mjk4YzdkMTRiMDcyYTRlNzZiNjNiNTDQl/Qu: 00:26:49.081 10:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:49.081 10:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:49.081 10:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzY0YTQyOTAxY2I3ZTNmNTkwOGE5M2Y0NDEzOWU5Y2U3ZjNlMWFlYTFhODBmMmE3up7JUQ==: 00:26:49.081 10:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTExY2E4MTk2Mjk4YzdkMTRiMDcyYTRlNzZiNjNiNTDQl/Qu: ]] 00:26:49.081 10:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTExY2E4MTk2Mjk4YzdkMTRiMDcyYTRlNzZiNjNiNTDQl/Qu: 00:26:49.081 10:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:26:49.081 10:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:49.081 10:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:49.081 10:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:49.081 10:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:49.081 10:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:49.081 10:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:26:49.081 10:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:49.081 10:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:49.081 10:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:49.081 10:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:49.081 10:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:49.081 10:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:49.081 10:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:49.081 10:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:49.081 10:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:49.082 10:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:49.082 10:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:49.082 10:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:49.082 10:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:49.082 10:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:49.082 10:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:49.082 10:15:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:49.082 10:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:49.339 nvme0n1 00:26:49.339 10:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:49.339 10:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:49.339 10:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:49.339 10:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:49.339 10:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:49.339 10:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:49.339 10:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:49.339 10:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:49.339 10:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:49.339 10:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:49.339 10:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:49.339 10:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:49.339 10:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:26:49.339 10:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:49.339 10:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:49.339 10:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:26:49.339 10:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:49.340 10:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:M2U1MmMyMjIwZmRiYWM3OWQ5YmYzNmM3MjNhMmIwNmQ2NWU4YjE1NDE4MDBiMDNlZmRmMTkxOGNlZWRmNDgzZtUKgKU=: 00:26:49.340 10:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:49.340 10:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:49.340 10:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:26:49.340 10:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:M2U1MmMyMjIwZmRiYWM3OWQ5YmYzNmM3MjNhMmIwNmQ2NWU4YjE1NDE4MDBiMDNlZmRmMTkxOGNlZWRmNDgzZtUKgKU=: 00:26:49.340 10:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:49.340 10:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:26:49.340 10:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:49.340 10:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:49.340 10:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:26:49.340 10:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:49.340 10:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:49.340 10:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe4096 00:26:49.340 10:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:49.340 10:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:49.340 10:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:49.340 10:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:49.340 10:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:49.340 10:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:49.340 10:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:49.340 10:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:49.340 10:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:49.340 10:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:49.340 10:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:49.340 10:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:49.340 10:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:49.340 10:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:49.340 10:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:49.340 10:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:49.340 10:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:49.597 nvme0n1 00:26:49.597 10:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:49.597 10:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:49.597 10:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:49.597 10:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:49.597 10:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:49.597 10:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:49.855 10:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:49.855 10:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:49.855 10:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:49.855 10:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:49.855 10:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:49.855 10:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:49.855 10:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:49.855 10:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:26:49.855 10:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 
-- # local digest dhgroup keyid key ckey 00:26:49.855 10:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:49.855 10:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:49.855 10:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:49.855 10:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmU5YzBlZWZkMmRlMGViMmRiNDk4NmVlMjRlMTAzZjX5F6ol: 00:26:49.855 10:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NWI2NjdmMjVkZDg4NWIyOGQ0MzI0MmMzOGRjZjg3OWU4OGFlYmRhOTIzZGUyOTc0NDA5MjgyNDc1OTJlMTllObrkimg=: 00:26:49.855 10:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:49.855 10:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:49.855 10:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmU5YzBlZWZkMmRlMGViMmRiNDk4NmVlMjRlMTAzZjX5F6ol: 00:26:49.855 10:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NWI2NjdmMjVkZDg4NWIyOGQ0MzI0MmMzOGRjZjg3OWU4OGFlYmRhOTIzZGUyOTc0NDA5MjgyNDc1OTJlMTllObrkimg=: ]] 00:26:49.855 10:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NWI2NjdmMjVkZDg4NWIyOGQ0MzI0MmMzOGRjZjg3OWU4OGFlYmRhOTIzZGUyOTc0NDA5MjgyNDc1OTJlMTllObrkimg=: 00:26:49.855 10:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:26:49.855 10:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:49.855 10:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:49.855 10:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:49.855 10:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:49.855 10:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:49.855 10:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:26:49.855 10:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:49.855 10:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:49.855 10:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:49.855 10:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:49.855 10:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:49.855 10:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:49.855 10:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:49.855 10:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:49.855 10:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:49.855 10:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:49.855 10:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:49.855 10:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:49.855 10:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 
]] 00:26:49.855 10:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:49.855 10:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:49.855 10:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:49.855 10:15:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:50.421 nvme0n1 00:26:50.421 10:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:50.421 10:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:50.421 10:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:50.421 10:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:50.421 10:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:50.421 10:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:50.421 10:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:50.421 10:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:50.421 10:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:50.421 10:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:50.421 10:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:50.421 10:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:50.421 10:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:26:50.421 10:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:50.421 10:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:50.421 10:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:50.421 10:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:50.421 10:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2YyYWE5YjI0NTkxZGIxYTZmNzA0OGE4NjFiNjRjMWYzOGM4ZTEzNjE2Yjg1M2QzK+wtZA==: 00:26:50.421 10:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTE1MjUxODRjZTQ0ZWYzN2MwYjM5NmIwOTM0MWE0MDNlNzk4M2IyMWE2NTA2YTEw2ACdyA==: 00:26:50.421 10:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:50.421 10:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:50.421 10:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2YyYWE5YjI0NTkxZGIxYTZmNzA0OGE4NjFiNjRjMWYzOGM4ZTEzNjE2Yjg1M2QzK+wtZA==: 00:26:50.421 10:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTE1MjUxODRjZTQ0ZWYzN2MwYjM5NmIwOTM0MWE0MDNlNzk4M2IyMWE2NTA2YTEw2ACdyA==: ]] 00:26:50.421 10:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTE1MjUxODRjZTQ0ZWYzN2MwYjM5NmIwOTM0MWE0MDNlNzk4M2IyMWE2NTA2YTEw2ACdyA==: 00:26:50.421 10:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 
1 00:26:50.421 10:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:50.421 10:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:50.421 10:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:50.421 10:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:50.421 10:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:50.421 10:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:26:50.421 10:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:50.421 10:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:50.422 10:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:50.422 10:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:50.422 10:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:50.422 10:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:50.422 10:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:50.422 10:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:50.422 10:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:50.422 10:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:50.422 10:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:50.422 10:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:50.422 10:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:50.422 10:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:50.422 10:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:50.422 10:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:50.422 10:15:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:51.352 nvme0n1 00:26:51.352 10:15:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:51.352 10:15:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:51.352 10:15:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:51.352 10:15:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:51.352 10:15:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:51.352 10:15:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:51.352 10:15:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:51.352 10:15:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:51.352 10:15:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:51.352 10:15:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:51.352 10:15:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:51.352 10:15:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:51.352 10:15:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:26:51.352 10:15:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:51.352 10:15:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:51.352 10:15:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:51.352 10:15:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:51.352 10:15:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGE2MjVkZjAxMTBjMDViOGZjMzUwOGI2MmJjZDk2Y2UJepV+: 00:26:51.352 10:15:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWI4NTc5OTBmOGJkYmViYjgwOGE5YjMxNTIxYTMwYWF/Qx7D: 00:26:51.352 10:15:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:51.352 10:15:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:51.352 10:15:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGE2MjVkZjAxMTBjMDViOGZjMzUwOGI2MmJjZDk2Y2UJepV+: 00:26:51.352 10:15:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWI4NTc5OTBmOGJkYmViYjgwOGE5YjMxNTIxYTMwYWF/Qx7D: ]] 00:26:51.352 10:15:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWI4NTc5OTBmOGJkYmViYjgwOGE5YjMxNTIxYTMwYWF/Qx7D: 00:26:51.352 10:15:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:26:51.352 10:15:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:51.352 10:15:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:51.352 10:15:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:51.352 10:15:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:51.352 10:15:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:51.352 10:15:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:26:51.352 10:15:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:51.352 10:15:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:51.352 10:15:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:51.352 10:15:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:51.352 10:15:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:51.352 10:15:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:51.352 10:15:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:51.352 10:15:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:51.352 10:15:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:51.352 10:15:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:51.352 10:15:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:51.352 10:15:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:51.352 10:15:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:51.352 10:15:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:51.352 10:15:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:51.352 10:15:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:51.352 10:15:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:51.918 nvme0n1 00:26:51.918 10:15:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:51.918 10:15:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:51.918 10:15:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:51.918 10:15:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:51.918 10:15:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:51.918 10:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:51.918 10:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:51.918 10:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:51.918 10:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:51.918 10:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:52.176 10:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:52.176 10:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:52.176 10:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:26:52.176 10:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:52.176 10:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:52.176 10:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:52.176 10:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:52.176 10:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzY0YTQyOTAxY2I3ZTNmNTkwOGE5M2Y0NDEzOWU5Y2U3ZjNlMWFlYTFhODBmMmE3up7JUQ==: 00:26:52.176 10:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTExY2E4MTk2Mjk4YzdkMTRiMDcyYTRlNzZiNjNiNTDQl/Qu: 00:26:52.176 10:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:52.176 10:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:52.176 10:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzY0YTQyOTAxY2I3ZTNmNTkwOGE5M2Y0NDEzOWU5Y2U3ZjNlMWFlYTFhODBmMmE3up7JUQ==: 00:26:52.176 
10:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTExY2E4MTk2Mjk4YzdkMTRiMDcyYTRlNzZiNjNiNTDQl/Qu: ]] 00:26:52.176 10:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTExY2E4MTk2Mjk4YzdkMTRiMDcyYTRlNzZiNjNiNTDQl/Qu: 00:26:52.176 10:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:26:52.176 10:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:52.176 10:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:52.176 10:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:52.176 10:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:52.176 10:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:52.176 10:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:26:52.176 10:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:52.176 10:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:52.176 10:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:52.176 10:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:52.176 10:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:52.176 10:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:52.176 10:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:52.176 10:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:52.176 10:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:52.176 10:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:52.176 10:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:52.176 10:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:52.176 10:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:52.177 10:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:52.177 10:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:52.177 10:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:52.177 10:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:52.742 nvme0n1 00:26:52.743 10:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:52.743 10:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:52.743 10:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:52.743 10:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:52.743 10:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # jq -r '.[].name' 00:26:52.743 10:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:52.743 10:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:52.743 10:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:52.743 10:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:52.743 10:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:52.743 10:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:52.743 10:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:52.743 10:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:26:52.743 10:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:52.743 10:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:52.743 10:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:26:52.743 10:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:52.743 10:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:M2U1MmMyMjIwZmRiYWM3OWQ5YmYzNmM3MjNhMmIwNmQ2NWU4YjE1NDE4MDBiMDNlZmRmMTkxOGNlZWRmNDgzZtUKgKU=: 00:26:52.743 10:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:52.743 10:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:52.743 10:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:26:52.743 10:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:M2U1MmMyMjIwZmRiYWM3OWQ5YmYzNmM3MjNhMmIwNmQ2NWU4YjE1NDE4MDBiMDNlZmRmMTkxOGNlZWRmNDgzZtUKgKU=: 00:26:52.743 10:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:52.743 10:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:26:52.743 10:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:52.743 10:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:52.743 10:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:26:52.743 10:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:52.743 10:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:52.743 10:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:26:52.743 10:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:52.743 10:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:52.743 10:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:52.743 10:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:52.743 10:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:52.743 10:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:52.743 10:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@742 -- # local -A ip_candidates 00:26:52.743 10:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:52.743 10:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:52.743 10:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:52.743 10:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:52.743 10:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:52.743 10:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:52.743 10:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:52.743 10:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:52.743 10:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:52.743 10:15:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:53.308 nvme0n1 00:26:53.308 10:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:53.308 10:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:53.308 10:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:53.308 10:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:53.308 10:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:53.308 10:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:53.308 10:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:53.308 10:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:53.308 10:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:53.308 10:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:53.566 10:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:53.566 10:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:53.566 10:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:53.566 10:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:26:53.566 10:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:53.566 10:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:53.566 10:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:53.566 10:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:53.566 10:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmU5YzBlZWZkMmRlMGViMmRiNDk4NmVlMjRlMTAzZjX5F6ol: 00:26:53.566 10:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NWI2NjdmMjVkZDg4NWIyOGQ0MzI0MmMzOGRjZjg3OWU4OGFlYmRhOTIzZGUyOTc0NDA5MjgyNDc1OTJlMTllObrkimg=: 00:26:53.566 10:15:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:53.566 10:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:53.566 10:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmU5YzBlZWZkMmRlMGViMmRiNDk4NmVlMjRlMTAzZjX5F6ol: 00:26:53.566 10:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NWI2NjdmMjVkZDg4NWIyOGQ0MzI0MmMzOGRjZjg3OWU4OGFlYmRhOTIzZGUyOTc0NDA5MjgyNDc1OTJlMTllObrkimg=: ]] 00:26:53.566 10:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NWI2NjdmMjVkZDg4NWIyOGQ0MzI0MmMzOGRjZjg3OWU4OGFlYmRhOTIzZGUyOTc0NDA5MjgyNDc1OTJlMTllObrkimg=: 00:26:53.566 10:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:26:53.566 10:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:53.566 10:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:53.566 10:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:53.566 10:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:53.566 10:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:53.566 10:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:53.566 10:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:53.566 10:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:53.566 10:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:53.566 10:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:53.566 10:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:53.566 10:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:53.566 10:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:53.566 10:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:53.566 10:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:53.566 10:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:53.566 10:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:53.566 10:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:53.566 10:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:53.566 10:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:53.566 10:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:53.566 10:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:53.566 10:15:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:54.500 nvme0n1 00:26:54.500 10:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:54.500 10:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:54.500 10:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:54.500 10:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:54.500 10:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:54.500 10:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:54.500 10:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:54.500 10:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:54.500 10:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:54.500 10:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:54.957 10:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:54.957 10:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:54.957 10:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:26:54.957 10:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:54.957 10:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:54.957 10:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:54.957 10:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:54.957 10:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2YyYWE5YjI0NTkxZGIxYTZmNzA0OGE4NjFiNjRjMWYzOGM4ZTEzNjE2Yjg1M2QzK+wtZA==: 00:26:54.957 10:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTE1MjUxODRjZTQ0ZWYzN2MwYjM5NmIwOTM0MWE0MDNlNzk4M2IyMWE2NTA2YTEw2ACdyA==: 00:26:54.957 10:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:54.957 10:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:54.957 10:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2YyYWE5YjI0NTkxZGIxYTZmNzA0OGE4NjFiNjRjMWYzOGM4ZTEzNjE2Yjg1M2QzK+wtZA==: 00:26:54.957 10:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTE1MjUxODRjZTQ0ZWYzN2MwYjM5NmIwOTM0MWE0MDNlNzk4M2IyMWE2NTA2YTEw2ACdyA==: ]] 00:26:54.957 10:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTE1MjUxODRjZTQ0ZWYzN2MwYjM5NmIwOTM0MWE0MDNlNzk4M2IyMWE2NTA2YTEw2ACdyA==: 00:26:54.957 10:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:26:54.957 10:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:54.957 10:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:54.957 10:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:54.957 10:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:54.957 10:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:54.957 10:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:54.957 10:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:54.957 10:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:54.957 10:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:54.957 10:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:54.957 10:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:54.957 10:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:54.957 10:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:54.957 10:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:54.957 10:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:54.957 10:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:54.957 10:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:54.957 10:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:54.957 10:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:54.957 10:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:54.957 10:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:54.957 10:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:54.957 10:15:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:55.888 nvme0n1 00:26:55.888 10:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:55.888 10:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:55.888 10:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:55.888 10:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:55.888 10:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:55.888 10:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:55.888 10:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:55.888 10:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:55.888 10:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:55.888 10:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:55.888 10:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:55.888 10:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:55.888 10:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:26:55.888 10:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:55.888 10:15:40 
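#
# Each connect_authenticate pass traced above reduces to the same four RPCs.
# The same sequence issued directly against SPDK's scripts/rpc.py (rpc_cmd in
# this log is the suite's wrapper around it; key1/ckey1 are names of secrets
# registered earlier in the run, outside this excerpt):
rpc=scripts/rpc.py
$rpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192
$rpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1
$rpc bdev_nvme_get_controllers | jq -r '.[].name'   # expect nvme0
$rpc bdev_nvme_detach_controller nvme0
# The bare nvme0n1 tokens interleaved in the log are command output, most
# likely the namespace bdev created by each successful attach, not xtrace.
#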
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:55.889 10:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:55.889 10:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:55.889 10:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGE2MjVkZjAxMTBjMDViOGZjMzUwOGI2MmJjZDk2Y2UJepV+: 00:26:55.889 10:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWI4NTc5OTBmOGJkYmViYjgwOGE5YjMxNTIxYTMwYWF/Qx7D: 00:26:55.889 10:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:55.889 10:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:55.889 10:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGE2MjVkZjAxMTBjMDViOGZjMzUwOGI2MmJjZDk2Y2UJepV+: 00:26:55.889 10:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWI4NTc5OTBmOGJkYmViYjgwOGE5YjMxNTIxYTMwYWF/Qx7D: ]] 00:26:55.889 10:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWI4NTc5OTBmOGJkYmViYjgwOGE5YjMxNTIxYTMwYWF/Qx7D: 00:26:55.889 10:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:26:55.889 10:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:55.889 10:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:55.889 10:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:55.889 10:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:55.889 10:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:55.889 10:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:55.889 10:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:55.889 10:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:55.889 10:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:55.889 10:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:55.889 10:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:55.889 10:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:55.889 10:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:55.889 10:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:55.889 10:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:55.889 10:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:55.889 10:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:55.889 10:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:55.889 10:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:55.889 10:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:55.889 10:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:55.889 10:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:55.889 10:15:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:56.820 nvme0n1 00:26:56.820 10:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:56.820 10:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:56.821 10:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:56.821 10:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:56.821 10:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:56.821 10:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:57.078 10:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:57.078 10:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:57.078 10:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:57.078 10:15:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.078 10:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:57.078 10:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:57.078 10:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:26:57.078 10:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:57.078 10:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:57.078 10:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:57.078 10:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:26:57.078 10:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzY0YTQyOTAxY2I3ZTNmNTkwOGE5M2Y0NDEzOWU5Y2U3ZjNlMWFlYTFhODBmMmE3up7JUQ==: 00:26:57.078 10:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTExY2E4MTk2Mjk4YzdkMTRiMDcyYTRlNzZiNjNiNTDQl/Qu: 00:26:57.078 10:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:57.078 10:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:57.078 10:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzY0YTQyOTAxY2I3ZTNmNTkwOGE5M2Y0NDEzOWU5Y2U3ZjNlMWFlYTFhODBmMmE3up7JUQ==: 00:26:57.078 10:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTExY2E4MTk2Mjk4YzdkMTRiMDcyYTRlNzZiNjNiNTDQl/Qu: ]] 00:26:57.078 10:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTExY2E4MTk2Mjk4YzdkMTRiMDcyYTRlNzZiNjNiNTDQl/Qu: 00:26:57.078 10:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:26:57.078 10:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:57.078 10:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:57.078 10:15:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:57.078 10:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:26:57.078 10:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:57.078 10:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:57.078 10:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:57.078 10:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:57.078 10:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:57.078 10:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:57.078 10:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:57.078 10:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:57.078 10:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:57.078 10:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:57.078 10:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:57.078 10:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:57.078 10:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:57.078 10:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:57.078 10:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:57.078 10:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:57.078 10:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:26:57.078 10:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:57.078 10:15:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.450 nvme0n1 00:26:58.450 10:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:58.450 10:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:58.450 10:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:58.450 10:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.450 10:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:58.450 10:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:58.450 10:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:58.450 10:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:58.450 10:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:58.450 10:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.450 10:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
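#
# get_main_ns_ip, fully traced above at nvmf/common.sh@741-755, picks which
# address the initiator dials for a given transport. Reassembled from the
# trace, roughly as follows -- the TEST_TRANSPORT variable name is inferred;
# the trace only shows its literal value tcp:
get_main_ns_ip() {
    local ip
    local -A ip_candidates
    ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
    ip_candidates["tcp"]=NVMF_INITIATOR_IP
    [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
    ip=${ip_candidates[$TEST_TRANSPORT]}  # a variable *name* (common.sh@748)
    [[ -z ${!ip} ]] && return 1           # indirection; traced as [[ -z 10.0.0.1 ]]
    echo "${!ip}"                         # 10.0.0.1 throughout this run
}
#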
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:58.450 10:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:58.450 10:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:26:58.450 10:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:58.450 10:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:26:58.450 10:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:26:58.450 10:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:26:58.450 10:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:M2U1MmMyMjIwZmRiYWM3OWQ5YmYzNmM3MjNhMmIwNmQ2NWU4YjE1NDE4MDBiMDNlZmRmMTkxOGNlZWRmNDgzZtUKgKU=: 00:26:58.450 10:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:26:58.450 10:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:26:58.450 10:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:26:58.450 10:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:M2U1MmMyMjIwZmRiYWM3OWQ5YmYzNmM3MjNhMmIwNmQ2NWU4YjE1NDE4MDBiMDNlZmRmMTkxOGNlZWRmNDgzZtUKgKU=: 00:26:58.450 10:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:26:58.450 10:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:26:58.450 10:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:58.450 10:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:26:58.450 10:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:26:58.450 10:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:26:58.450 10:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:58.450 10:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:26:58.450 10:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:58.450 10:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:58.450 10:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:58.450 10:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:58.450 10:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:58.450 10:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:58.450 10:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:58.450 10:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:58.450 10:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:58.450 10:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:58.450 10:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:58.450 10:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:58.450 10:15:43 
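#
# keyid=4, just traced, is the unidirectional case: ckey is empty, so the
# host/auth.sh@58 expansion (verbatim in the trace) yields zero words and the
# attach carries no --dhchap-ctrlr-key. Spelled out, assuming ckeys holds the
# controller-key names:
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})  # () when ckeys[4] is empty
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key "key${keyid}" "${ckey[@]}"
#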
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:58.450 10:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:58.450 10:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:26:58.450 10:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:58.450 10:15:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:59.382 nvme0n1 00:26:59.382 10:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:59.382 10:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:59.382 10:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:59.382 10:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:59.382 10:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:59.382 10:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:59.382 10:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:59.382 10:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:59.382 10:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:59.382 10:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:59.382 10:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:59.382 10:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:26:59.382 10:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:26:59.382 10:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:59.382 10:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:26:59.382 10:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:59.382 10:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:59.382 10:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:59.382 10:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:26:59.382 10:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmU5YzBlZWZkMmRlMGViMmRiNDk4NmVlMjRlMTAzZjX5F6ol: 00:26:59.382 10:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NWI2NjdmMjVkZDg4NWIyOGQ0MzI0MmMzOGRjZjg3OWU4OGFlYmRhOTIzZGUyOTc0NDA5MjgyNDc1OTJlMTllObrkimg=: 00:26:59.382 10:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:59.382 10:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:59.382 10:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmU5YzBlZWZkMmRlMGViMmRiNDk4NmVlMjRlMTAzZjX5F6ol: 00:26:59.382 10:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NWI2NjdmMjVkZDg4NWIyOGQ0MzI0MmMzOGRjZjg3OWU4OGFlYmRhOTIzZGUyOTc0NDA5MjgyNDc1OTJlMTllObrkimg=: ]] 00:26:59.382 
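#
# Here the outer digest loop advances to sha384 and the DH-group sweep restarts
# at ffdhe2048. The driver traced at host/auth.sh@100-104 is a plain triple
# loop; the array contents below are inferred from the values this excerpt
# exercises (the full run may also cover sha512 and ffdhe4096):
digests=(sha256 sha384)
dhgroups=(ffdhe2048 ffdhe3072 ffdhe6144 ffdhe8192)
for digest in "${digests[@]}"; do
    for dhgroup in "${dhgroups[@]}"; do
        for keyid in "${!keys[@]}"; do   # key ids 0..4 in this log
            nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"
            connect_authenticate "$digest" "$dhgroup" "$keyid"
        done
    done
done
#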
10:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NWI2NjdmMjVkZDg4NWIyOGQ0MzI0MmMzOGRjZjg3OWU4OGFlYmRhOTIzZGUyOTc0NDA5MjgyNDc1OTJlMTllObrkimg=: 00:26:59.382 10:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:26:59.382 10:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:59.382 10:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:59.382 10:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:59.382 10:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:26:59.382 10:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:59.382 10:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:26:59.382 10:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:59.382 10:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:59.382 10:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:59.382 10:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:59.383 10:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:59.383 10:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:59.383 10:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:59.383 10:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:59.383 10:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:59.383 10:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:59.383 10:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:59.383 10:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:59.383 10:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:59.383 10:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:59.383 10:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:26:59.383 10:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:59.383 10:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:59.641 nvme0n1 00:26:59.641 10:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:59.641 10:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:59.641 10:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:59.641 10:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:59.641 10:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:59.641 10:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:59.641 10:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:59.641 10:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:59.641 10:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:59.641 10:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:59.641 10:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:59.641 10:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:59.641 10:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:26:59.641 10:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:59.641 10:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:59.641 10:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:59.641 10:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:26:59.641 10:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2YyYWE5YjI0NTkxZGIxYTZmNzA0OGE4NjFiNjRjMWYzOGM4ZTEzNjE2Yjg1M2QzK+wtZA==: 00:26:59.641 10:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTE1MjUxODRjZTQ0ZWYzN2MwYjM5NmIwOTM0MWE0MDNlNzk4M2IyMWE2NTA2YTEw2ACdyA==: 00:26:59.641 10:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:59.641 10:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:59.641 10:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2YyYWE5YjI0NTkxZGIxYTZmNzA0OGE4NjFiNjRjMWYzOGM4ZTEzNjE2Yjg1M2QzK+wtZA==: 00:26:59.641 10:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTE1MjUxODRjZTQ0ZWYzN2MwYjM5NmIwOTM0MWE0MDNlNzk4M2IyMWE2NTA2YTEw2ACdyA==: ]] 00:26:59.641 10:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTE1MjUxODRjZTQ0ZWYzN2MwYjM5NmIwOTM0MWE0MDNlNzk4M2IyMWE2NTA2YTEw2ACdyA==: 00:26:59.641 10:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:26:59.641 10:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:59.641 10:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:59.641 10:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:59.641 10:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:26:59.641 10:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:59.641 10:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:26:59.641 10:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:59.641 10:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:59.641 10:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:59.641 10:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:59.641 10:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:26:59.641 10:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:59.641 10:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:59.641 10:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:59.641 10:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:59.641 10:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:59.641 10:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:59.641 10:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:59.641 10:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:59.641 10:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:59.641 10:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:26:59.641 10:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:59.641 10:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:59.900 nvme0n1 00:26:59.900 10:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:59.900 10:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:26:59.900 10:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:59.900 10:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:59.900 10:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:26:59.900 10:15:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:59.900 10:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:59.900 10:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:59.900 10:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:59.900 10:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:59.900 10:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:59.900 10:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:26:59.900 10:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:26:59.900 10:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:26:59.900 10:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:26:59.900 10:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:26:59.900 10:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:26:59.900 10:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGE2MjVkZjAxMTBjMDViOGZjMzUwOGI2MmJjZDk2Y2UJepV+: 00:26:59.900 10:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:01:YWI4NTc5OTBmOGJkYmViYjgwOGE5YjMxNTIxYTMwYWF/Qx7D: 00:26:59.900 10:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:26:59.900 10:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:26:59.900 10:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGE2MjVkZjAxMTBjMDViOGZjMzUwOGI2MmJjZDk2Y2UJepV+: 00:26:59.900 10:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWI4NTc5OTBmOGJkYmViYjgwOGE5YjMxNTIxYTMwYWF/Qx7D: ]] 00:26:59.900 10:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWI4NTc5OTBmOGJkYmViYjgwOGE5YjMxNTIxYTMwYWF/Qx7D: 00:26:59.900 10:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:26:59.900 10:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:26:59.900 10:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:26:59.900 10:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:26:59.900 10:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:26:59.900 10:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:26:59.900 10:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:26:59.900 10:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:59.900 10:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:59.900 10:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:59.900 10:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:26:59.900 10:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:26:59.900 10:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:26:59.900 10:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:26:59.900 10:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:26:59.900 10:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:26:59.900 10:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:26:59.900 10:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:26:59.900 10:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:26:59.900 10:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:26:59.900 10:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:26:59.900 10:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:26:59.900 10:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:59.900 10:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:00.158 nvme0n1 00:27:00.158 10:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
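#
# On the DHHC-1 strings repeated throughout: in the DH-HMAC-CHAP secret
# representation the second field records how the secret was transformed
# (00 = cleartext, 01 = SHA-256, 02 = SHA-384, 03 = SHA-512), and the last
# field is base64 of the secret plus a checksum, colon-terminated. This run
# exercises all four transformations across its host and controller keys
# (e.g. key4 is :03: while ckey3 is :00:). Secrets of this shape can be
# produced with nvme-cli; flag spelling per recent nvme-cli, verify against
# your version:
nvme gen-dhchap-key --hmac=1 --key-length=32 --nqn=nqn.2024-02.io.spdk:host0
#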
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:00.158 10:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:00.158 10:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:00.158 10:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:00.158 10:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:00.158 10:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:00.158 10:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:00.158 10:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:00.158 10:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:00.158 10:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:00.416 10:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:00.416 10:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:00.416 10:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:27:00.416 10:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:00.416 10:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:00.416 10:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:00.416 10:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:00.416 10:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzY0YTQyOTAxY2I3ZTNmNTkwOGE5M2Y0NDEzOWU5Y2U3ZjNlMWFlYTFhODBmMmE3up7JUQ==: 00:27:00.416 10:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTExY2E4MTk2Mjk4YzdkMTRiMDcyYTRlNzZiNjNiNTDQl/Qu: 00:27:00.416 10:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:00.416 10:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:00.416 10:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzY0YTQyOTAxY2I3ZTNmNTkwOGE5M2Y0NDEzOWU5Y2U3ZjNlMWFlYTFhODBmMmE3up7JUQ==: 00:27:00.416 10:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTExY2E4MTk2Mjk4YzdkMTRiMDcyYTRlNzZiNjNiNTDQl/Qu: ]] 00:27:00.416 10:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTExY2E4MTk2Mjk4YzdkMTRiMDcyYTRlNzZiNjNiNTDQl/Qu: 00:27:00.416 10:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:27:00.416 10:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:00.416 10:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:00.416 10:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:00.416 10:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:00.416 10:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:00.416 10:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:00.417 10:15:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:00.417 10:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:00.417 10:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:00.417 10:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:00.417 10:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:00.417 10:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:00.417 10:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:00.417 10:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:00.417 10:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:00.417 10:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:00.417 10:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:00.417 10:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:00.417 10:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:00.417 10:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:00.417 10:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:00.417 10:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:00.417 10:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:00.417 nvme0n1 00:27:00.417 10:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:00.417 10:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:00.417 10:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:00.417 10:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:00.417 10:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:00.417 10:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:00.417 10:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:00.417 10:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:00.417 10:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:00.417 10:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:00.417 10:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:00.417 10:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:00.417 10:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:27:00.417 10:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:00.417 10:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 
00:27:00.417 10:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:00.417 10:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:00.417 10:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:M2U1MmMyMjIwZmRiYWM3OWQ5YmYzNmM3MjNhMmIwNmQ2NWU4YjE1NDE4MDBiMDNlZmRmMTkxOGNlZWRmNDgzZtUKgKU=: 00:27:00.417 10:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:00.417 10:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:00.417 10:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:00.417 10:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:M2U1MmMyMjIwZmRiYWM3OWQ5YmYzNmM3MjNhMmIwNmQ2NWU4YjE1NDE4MDBiMDNlZmRmMTkxOGNlZWRmNDgzZtUKgKU=: 00:27:00.417 10:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:00.417 10:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:27:00.417 10:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:00.417 10:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:00.417 10:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:00.417 10:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:00.417 10:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:00.417 10:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:27:00.417 10:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:00.417 10:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:00.675 10:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:00.675 10:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:00.675 10:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:00.675 10:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:00.675 10:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:00.675 10:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:00.675 10:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:00.675 10:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:00.675 10:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:00.675 10:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:00.675 10:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:00.675 10:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:00.675 10:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:00.675 10:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:27:00.675 10:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:00.675 nvme0n1 00:27:00.675 10:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:00.675 10:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:00.675 10:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:00.675 10:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:00.675 10:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:00.675 10:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:00.675 10:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:00.675 10:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:00.675 10:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:00.675 10:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:00.933 10:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:00.933 10:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:00.933 10:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:00.933 10:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:27:00.933 10:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:00.933 10:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:00.933 10:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:00.933 10:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:00.933 10:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmU5YzBlZWZkMmRlMGViMmRiNDk4NmVlMjRlMTAzZjX5F6ol: 00:27:00.933 10:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NWI2NjdmMjVkZDg4NWIyOGQ0MzI0MmMzOGRjZjg3OWU4OGFlYmRhOTIzZGUyOTc0NDA5MjgyNDc1OTJlMTllObrkimg=: 00:27:00.933 10:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:00.933 10:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:00.933 10:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmU5YzBlZWZkMmRlMGViMmRiNDk4NmVlMjRlMTAzZjX5F6ol: 00:27:00.933 10:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NWI2NjdmMjVkZDg4NWIyOGQ0MzI0MmMzOGRjZjg3OWU4OGFlYmRhOTIzZGUyOTc0NDA5MjgyNDc1OTJlMTllObrkimg=: ]] 00:27:00.933 10:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NWI2NjdmMjVkZDg4NWIyOGQ0MzI0MmMzOGRjZjg3OWU4OGFlYmRhOTIzZGUyOTc0NDA5MjgyNDc1OTJlMTllObrkimg=: 00:27:00.933 10:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:27:00.933 10:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:00.933 10:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:00.933 10:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:00.933 10:15:45 
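#
# The recurring @561 xtrace_disable / @10 set +x / @589 [[ 0 == 0 ]] triplets
# are the suite muting the trace around each RPC and then asserting its exit
# status. A bare-bones sketch of that wrapper pattern -- the real rpc_cmd in
# autotest_common.sh is more involved (e.g. it keeps a persistent rpc.py
# session), so this only captures the shape seen in the trace:
rpc_cmd() {
    local rc
    xtrace_disable
    scripts/rpc.py "$@"
    rc=$?
    xtrace_restore
    [[ $rc == 0 ]]   # surfaces in the trace as [[ 0 == 0 ]]
}
#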
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:00.933 10:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:00.933 10:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:00.933 10:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:00.933 10:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:00.933 10:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:00.933 10:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:00.933 10:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:00.933 10:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:00.933 10:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:00.933 10:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:00.933 10:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:00.933 10:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:00.933 10:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:00.933 10:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:00.933 10:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:00.933 10:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:00.933 10:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:00.933 10:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:00.933 10:15:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:00.933 nvme0n1 00:27:00.933 10:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:00.933 10:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:00.933 10:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:00.933 10:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:00.933 10:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.192 10:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:01.192 10:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:01.192 10:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:01.192 10:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:01.193 10:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.193 10:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:01.193 10:15:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:01.193 10:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:27:01.193 10:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:01.193 10:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:01.193 10:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:01.193 10:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:01.193 10:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2YyYWE5YjI0NTkxZGIxYTZmNzA0OGE4NjFiNjRjMWYzOGM4ZTEzNjE2Yjg1M2QzK+wtZA==: 00:27:01.193 10:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTE1MjUxODRjZTQ0ZWYzN2MwYjM5NmIwOTM0MWE0MDNlNzk4M2IyMWE2NTA2YTEw2ACdyA==: 00:27:01.193 10:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:01.193 10:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:01.193 10:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2YyYWE5YjI0NTkxZGIxYTZmNzA0OGE4NjFiNjRjMWYzOGM4ZTEzNjE2Yjg1M2QzK+wtZA==: 00:27:01.193 10:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTE1MjUxODRjZTQ0ZWYzN2MwYjM5NmIwOTM0MWE0MDNlNzk4M2IyMWE2NTA2YTEw2ACdyA==: ]] 00:27:01.193 10:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTE1MjUxODRjZTQ0ZWYzN2MwYjM5NmIwOTM0MWE0MDNlNzk4M2IyMWE2NTA2YTEw2ACdyA==: 00:27:01.193 10:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:27:01.193 10:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:01.193 10:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:01.193 10:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:01.193 10:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:01.193 10:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:01.193 10:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:01.193 10:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:01.193 10:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.193 10:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:01.193 10:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:01.193 10:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:01.193 10:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:01.193 10:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:01.193 10:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:01.193 10:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:01.193 10:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:01.193 10:15:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:01.193 10:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:01.193 10:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:01.193 10:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:01.193 10:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:01.193 10:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:01.193 10:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.451 nvme0n1 00:27:01.451 10:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:01.451 10:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:01.451 10:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:01.451 10:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:01.451 10:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.451 10:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:01.451 10:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:01.451 10:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:01.451 10:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:01.451 10:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.451 10:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:01.451 10:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:01.451 10:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:27:01.451 10:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:01.451 10:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:01.451 10:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:01.451 10:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:01.451 10:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGE2MjVkZjAxMTBjMDViOGZjMzUwOGI2MmJjZDk2Y2UJepV+: 00:27:01.452 10:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWI4NTc5OTBmOGJkYmViYjgwOGE5YjMxNTIxYTMwYWF/Qx7D: 00:27:01.452 10:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:01.452 10:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:01.452 10:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGE2MjVkZjAxMTBjMDViOGZjMzUwOGI2MmJjZDk2Y2UJepV+: 00:27:01.452 10:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWI4NTc5OTBmOGJkYmViYjgwOGE5YjMxNTIxYTMwYWF/Qx7D: ]] 00:27:01.452 10:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:01:YWI4NTc5OTBmOGJkYmViYjgwOGE5YjMxNTIxYTMwYWF/Qx7D: 00:27:01.452 10:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:27:01.452 10:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:01.452 10:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:01.452 10:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:01.452 10:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:01.452 10:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:01.452 10:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:01.452 10:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:01.452 10:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.452 10:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:01.452 10:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:01.452 10:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:01.452 10:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:01.452 10:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:01.452 10:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:01.452 10:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:01.452 10:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:01.452 10:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:01.452 10:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:01.452 10:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:01.452 10:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:01.452 10:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:01.452 10:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:01.452 10:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.728 nvme0n1 00:27:01.728 10:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:01.728 10:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:01.728 10:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:01.728 10:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:01.728 10:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.728 10:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:01.728 10:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 
== \n\v\m\e\0 ]] 00:27:01.728 10:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:01.728 10:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:01.728 10:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.728 10:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:01.728 10:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:01.728 10:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:27:01.728 10:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:01.728 10:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:01.728 10:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:01.728 10:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:01.728 10:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzY0YTQyOTAxY2I3ZTNmNTkwOGE5M2Y0NDEzOWU5Y2U3ZjNlMWFlYTFhODBmMmE3up7JUQ==: 00:27:01.728 10:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTExY2E4MTk2Mjk4YzdkMTRiMDcyYTRlNzZiNjNiNTDQl/Qu: 00:27:01.728 10:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:01.728 10:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:01.728 10:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzY0YTQyOTAxY2I3ZTNmNTkwOGE5M2Y0NDEzOWU5Y2U3ZjNlMWFlYTFhODBmMmE3up7JUQ==: 00:27:01.728 10:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTExY2E4MTk2Mjk4YzdkMTRiMDcyYTRlNzZiNjNiNTDQl/Qu: ]] 00:27:01.728 10:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTExY2E4MTk2Mjk4YzdkMTRiMDcyYTRlNzZiNjNiNTDQl/Qu: 00:27:01.728 10:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:27:01.728 10:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:01.728 10:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:01.728 10:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:01.728 10:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:01.728 10:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:01.728 10:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:01.728 10:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:01.728 10:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.728 10:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:01.728 10:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:01.728 10:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:01.728 10:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:01.728 10:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local 
-A ip_candidates 00:27:01.728 10:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:01.728 10:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:01.728 10:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:01.728 10:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:01.728 10:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:01.728 10:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:01.728 10:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:01.728 10:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:01.728 10:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:01.728 10:15:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:02.002 nvme0n1 00:27:02.003 10:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:02.003 10:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:02.003 10:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:02.003 10:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:02.003 10:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:02.003 10:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:02.003 10:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:02.003 10:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:02.003 10:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:02.003 10:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:02.003 10:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:02.003 10:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:02.003 10:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:27:02.003 10:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:02.003 10:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:02.003 10:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:02.003 10:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:02.003 10:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:M2U1MmMyMjIwZmRiYWM3OWQ5YmYzNmM3MjNhMmIwNmQ2NWU4YjE1NDE4MDBiMDNlZmRmMTkxOGNlZWRmNDgzZtUKgKU=: 00:27:02.003 10:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:02.003 10:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:02.003 10:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:02.003 
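
Target-side key programming: before each connect attempt, nvmet_auth_set_key pushes the digest, DH group, and DHHC-1 secret for the current keyid to the kernel nvmet target (the echo 'hmac(sha384)' / echo ffdhe3072 / echo DHHC-1:... lines at host/auth.sh@48-51). A hypothetical reconstruction of where those echoes land; the configfs paths below are an assumption about the kernel nvmet host attributes, not something this log shows:

    # Sketch of nvmet_auth_set_key <digest> <dhgroup> <keyid> (paths assumed, secret elided).
    host_dir=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
    echo 'hmac(sha384)'  > "$host_dir/dhchap_hash"      # auth.sh@48
    echo ffdhe3072       > "$host_dir/dhchap_dhgroup"   # auth.sh@49
    echo "DHHC-1:03:..." > "$host_dir/dhchap_key"       # auth.sh@50
    # auth.sh@51 writes a controller key only when one is defined for this keyid:
    [[ -n $ckey ]] && echo "$ckey" > "$host_dir/dhchap_ctrl_key"
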
10:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:M2U1MmMyMjIwZmRiYWM3OWQ5YmYzNmM3MjNhMmIwNmQ2NWU4YjE1NDE4MDBiMDNlZmRmMTkxOGNlZWRmNDgzZtUKgKU=: 00:27:02.003 10:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:02.003 10:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:27:02.003 10:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:02.003 10:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:02.003 10:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:02.003 10:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:02.003 10:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:02.003 10:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:27:02.003 10:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:02.003 10:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:02.003 10:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:02.003 10:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:02.003 10:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:02.003 10:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:02.003 10:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:02.003 10:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:02.003 10:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:02.003 10:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:02.003 10:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:02.003 10:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:02.003 10:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:02.003 10:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:02.003 10:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:02.003 10:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:02.003 10:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:02.261 nvme0n1 00:27:02.261 10:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:02.261 10:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:02.261 10:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:02.261 10:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:02.261 10:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:02.261 
10:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:02.261 10:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:02.261 10:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:02.261 10:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:02.261 10:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:02.261 10:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:02.261 10:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:02.261 10:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:02.261 10:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:27:02.261 10:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:02.261 10:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:02.261 10:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:02.261 10:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:02.261 10:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmU5YzBlZWZkMmRlMGViMmRiNDk4NmVlMjRlMTAzZjX5F6ol: 00:27:02.261 10:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NWI2NjdmMjVkZDg4NWIyOGQ0MzI0MmMzOGRjZjg3OWU4OGFlYmRhOTIzZGUyOTc0NDA5MjgyNDc1OTJlMTllObrkimg=: 00:27:02.261 10:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:02.261 10:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:02.261 10:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmU5YzBlZWZkMmRlMGViMmRiNDk4NmVlMjRlMTAzZjX5F6ol: 00:27:02.261 10:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NWI2NjdmMjVkZDg4NWIyOGQ0MzI0MmMzOGRjZjg3OWU4OGFlYmRhOTIzZGUyOTc0NDA5MjgyNDc1OTJlMTllObrkimg=: ]] 00:27:02.261 10:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NWI2NjdmMjVkZDg4NWIyOGQ0MzI0MmMzOGRjZjg3OWU4OGFlYmRhOTIzZGUyOTc0NDA5MjgyNDc1OTJlMTllObrkimg=: 00:27:02.261 10:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:27:02.261 10:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:02.261 10:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:02.261 10:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:02.261 10:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:02.261 10:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:02.261 10:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:02.261 10:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:02.261 10:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:02.261 10:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:27:02.261 10:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:02.262 10:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:02.262 10:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:02.262 10:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:02.262 10:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:02.262 10:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:02.262 10:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:02.262 10:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:02.262 10:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:02.262 10:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:02.262 10:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:02.262 10:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:02.262 10:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:02.262 10:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:02.828 nvme0n1 00:27:02.829 10:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:02.829 10:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:02.829 10:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:02.829 10:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:02.829 10:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:02.829 10:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:02.829 10:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:02.829 10:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:02.829 10:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:02.829 10:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:02.829 10:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:02.829 10:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:02.829 10:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:27:02.829 10:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:02.829 10:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:02.829 10:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:02.829 10:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:02.829 10:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:Y2YyYWE5YjI0NTkxZGIxYTZmNzA0OGE4NjFiNjRjMWYzOGM4ZTEzNjE2Yjg1M2QzK+wtZA==: 00:27:02.829 10:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTE1MjUxODRjZTQ0ZWYzN2MwYjM5NmIwOTM0MWE0MDNlNzk4M2IyMWE2NTA2YTEw2ACdyA==: 00:27:02.829 10:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:02.829 10:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:02.829 10:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2YyYWE5YjI0NTkxZGIxYTZmNzA0OGE4NjFiNjRjMWYzOGM4ZTEzNjE2Yjg1M2QzK+wtZA==: 00:27:02.829 10:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTE1MjUxODRjZTQ0ZWYzN2MwYjM5NmIwOTM0MWE0MDNlNzk4M2IyMWE2NTA2YTEw2ACdyA==: ]] 00:27:02.829 10:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTE1MjUxODRjZTQ0ZWYzN2MwYjM5NmIwOTM0MWE0MDNlNzk4M2IyMWE2NTA2YTEw2ACdyA==: 00:27:02.829 10:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:27:02.829 10:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:02.829 10:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:02.829 10:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:02.829 10:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:02.829 10:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:02.829 10:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:02.829 10:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:02.829 10:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:02.829 10:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:02.829 10:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:02.829 10:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:02.829 10:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:02.829 10:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:02.829 10:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:02.829 10:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:02.829 10:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:02.829 10:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:02.829 10:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:02.829 10:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:02.829 10:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:02.829 10:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:02.829 10:15:47 
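
Host-side half of connect_authenticate, as just traced for sha384/ffdhe4096/keyid 1: restrict the bdev_nvme module to the digest and DH group under test, then attach with the matching named keys. Condensed from host/auth.sh@60-61 (rpc_cmd assumed to wrap scripts/rpc.py; key1/ckey1 are key names registered earlier by the test, not literal secrets):

    # Only sha384 + ffdhe4096 may be negotiated for this attempt.
    rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
    # Authenticated fabrics connect; both host and controller keys supplied.
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1

Pinning the digest and DH group via set_options is what makes each loop iteration a real negotiation test: the attach can only succeed if the target accepts exactly that combination.
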
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:02.829 10:15:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:03.087 nvme0n1 00:27:03.087 10:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:03.087 10:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:03.087 10:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:03.087 10:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:03.087 10:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:03.087 10:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:03.087 10:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:03.087 10:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:03.087 10:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:03.087 10:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:03.087 10:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:03.087 10:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:03.087 10:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:27:03.087 10:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:03.087 10:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:03.087 10:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:03.087 10:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:03.087 10:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGE2MjVkZjAxMTBjMDViOGZjMzUwOGI2MmJjZDk2Y2UJepV+: 00:27:03.087 10:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWI4NTc5OTBmOGJkYmViYjgwOGE5YjMxNTIxYTMwYWF/Qx7D: 00:27:03.087 10:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:03.087 10:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:03.087 10:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGE2MjVkZjAxMTBjMDViOGZjMzUwOGI2MmJjZDk2Y2UJepV+: 00:27:03.087 10:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWI4NTc5OTBmOGJkYmViYjgwOGE5YjMxNTIxYTMwYWF/Qx7D: ]] 00:27:03.087 10:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWI4NTc5OTBmOGJkYmViYjgwOGE5YjMxNTIxYTMwYWF/Qx7D: 00:27:03.087 10:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:27:03.087 10:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:03.087 10:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:03.087 10:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:03.087 10:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:03.087 10:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:03.087 10:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:03.087 10:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:03.087 10:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:03.087 10:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:03.087 10:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:03.087 10:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:03.087 10:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:03.087 10:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:03.087 10:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:03.087 10:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:03.087 10:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:03.087 10:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:03.087 10:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:03.087 10:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:03.087 10:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:03.087 10:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:03.087 10:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:03.087 10:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:03.654 nvme0n1 00:27:03.654 10:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:03.654 10:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:03.654 10:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:03.654 10:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:03.654 10:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:03.654 10:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:03.654 10:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:03.654 10:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:03.654 10:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:03.654 10:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:03.654 10:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:03.654 10:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:03.654 10:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe4096 3 00:27:03.654 10:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:03.654 10:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:03.654 10:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:03.654 10:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:03.654 10:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzY0YTQyOTAxY2I3ZTNmNTkwOGE5M2Y0NDEzOWU5Y2U3ZjNlMWFlYTFhODBmMmE3up7JUQ==: 00:27:03.654 10:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTExY2E4MTk2Mjk4YzdkMTRiMDcyYTRlNzZiNjNiNTDQl/Qu: 00:27:03.654 10:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:03.654 10:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:03.654 10:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzY0YTQyOTAxY2I3ZTNmNTkwOGE5M2Y0NDEzOWU5Y2U3ZjNlMWFlYTFhODBmMmE3up7JUQ==: 00:27:03.654 10:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTExY2E4MTk2Mjk4YzdkMTRiMDcyYTRlNzZiNjNiNTDQl/Qu: ]] 00:27:03.654 10:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTExY2E4MTk2Mjk4YzdkMTRiMDcyYTRlNzZiNjNiNTDQl/Qu: 00:27:03.654 10:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:27:03.654 10:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:03.654 10:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:03.654 10:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:03.654 10:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:03.654 10:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:03.654 10:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:03.654 10:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:03.654 10:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:03.654 10:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:03.654 10:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:03.654 10:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:03.654 10:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:03.654 10:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:03.654 10:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:03.654 10:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:03.654 10:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:03.654 10:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:03.654 10:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:03.654 10:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:03.654 10:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:03.654 10:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:03.654 10:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:03.654 10:15:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:03.912 nvme0n1 00:27:03.912 10:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:03.912 10:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:03.912 10:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:03.912 10:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:03.912 10:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:03.912 10:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:04.171 10:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:04.171 10:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:04.171 10:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:04.171 10:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:04.171 10:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:04.171 10:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:04.171 10:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:27:04.172 10:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:04.172 10:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:04.172 10:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:04.172 10:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:04.172 10:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:M2U1MmMyMjIwZmRiYWM3OWQ5YmYzNmM3MjNhMmIwNmQ2NWU4YjE1NDE4MDBiMDNlZmRmMTkxOGNlZWRmNDgzZtUKgKU=: 00:27:04.172 10:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:04.172 10:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:04.172 10:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:04.172 10:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:M2U1MmMyMjIwZmRiYWM3OWQ5YmYzNmM3MjNhMmIwNmQ2NWU4YjE1NDE4MDBiMDNlZmRmMTkxOGNlZWRmNDgzZtUKgKU=: 00:27:04.172 10:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:04.172 10:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:27:04.172 10:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:04.172 10:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:04.172 10:15:49 
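
keyid 4 is the unidirectional case: its controller key is empty (the [[ -z '' ]] in the trace above), so the parameter expansion at host/auth.sh@58 yields nothing and the attach goes out with --dhchap-key key4 alone. The bash idiom, taken verbatim from the trace:

    # ${var:+words} expands to the alternate words only if var is set and non-empty,
    # so an empty ckeys[4] leaves the array empty and the flag pair is dropped.
    ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key${keyid}" "${ckey[@]}"

With bidirectional keys (keyids 0-3 here) the same line expands to --dhchap-ctrlr-key ckeyN, so one code path covers both authentication modes.
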
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:04.172 10:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:04.172 10:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:04.172 10:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:27:04.172 10:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:04.172 10:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:04.172 10:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:04.172 10:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:04.172 10:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:04.172 10:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:04.172 10:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:04.172 10:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:04.172 10:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:04.172 10:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:04.172 10:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:04.172 10:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:04.172 10:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:04.172 10:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:04.172 10:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:04.172 10:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:04.172 10:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:04.430 nvme0n1 00:27:04.430 10:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:04.430 10:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:04.430 10:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:04.430 10:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:04.430 10:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:04.430 10:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:04.430 10:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:04.430 10:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:04.430 10:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:04.430 10:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:04.430 10:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:04.430 10:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:04.430 10:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:04.430 10:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:27:04.430 10:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:04.430 10:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:04.430 10:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:04.430 10:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:04.430 10:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmU5YzBlZWZkMmRlMGViMmRiNDk4NmVlMjRlMTAzZjX5F6ol: 00:27:04.430 10:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NWI2NjdmMjVkZDg4NWIyOGQ0MzI0MmMzOGRjZjg3OWU4OGFlYmRhOTIzZGUyOTc0NDA5MjgyNDc1OTJlMTllObrkimg=: 00:27:04.430 10:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:04.430 10:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:04.430 10:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmU5YzBlZWZkMmRlMGViMmRiNDk4NmVlMjRlMTAzZjX5F6ol: 00:27:04.430 10:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NWI2NjdmMjVkZDg4NWIyOGQ0MzI0MmMzOGRjZjg3OWU4OGFlYmRhOTIzZGUyOTc0NDA5MjgyNDc1OTJlMTllObrkimg=: ]] 00:27:04.430 10:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NWI2NjdmMjVkZDg4NWIyOGQ0MzI0MmMzOGRjZjg3OWU4OGFlYmRhOTIzZGUyOTc0NDA5MjgyNDc1OTJlMTllObrkimg=: 00:27:04.430 10:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:27:04.430 10:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:04.430 10:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:04.430 10:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:04.430 10:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:04.430 10:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:04.430 10:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:04.430 10:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:04.430 10:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:04.430 10:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:04.430 10:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:04.430 10:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:04.430 10:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:04.430 10:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:04.430 10:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:04.430 10:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:04.430 10:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:04.430 10:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:04.430 10:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:04.430 10:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:04.430 10:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:04.430 10:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:04.430 10:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:04.430 10:15:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:05.366 nvme0n1 00:27:05.366 10:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:05.366 10:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:05.366 10:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:05.366 10:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:05.366 10:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:05.366 10:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:05.366 10:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:05.366 10:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:05.366 10:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:05.366 10:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:05.366 10:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:05.366 10:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:05.366 10:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:27:05.366 10:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:05.366 10:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:05.366 10:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:05.366 10:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:05.366 10:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2YyYWE5YjI0NTkxZGIxYTZmNzA0OGE4NjFiNjRjMWYzOGM4ZTEzNjE2Yjg1M2QzK+wtZA==: 00:27:05.366 10:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTE1MjUxODRjZTQ0ZWYzN2MwYjM5NmIwOTM0MWE0MDNlNzk4M2IyMWE2NTA2YTEw2ACdyA==: 00:27:05.366 10:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:05.366 10:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:05.366 10:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:Y2YyYWE5YjI0NTkxZGIxYTZmNzA0OGE4NjFiNjRjMWYzOGM4ZTEzNjE2Yjg1M2QzK+wtZA==: 00:27:05.366 10:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTE1MjUxODRjZTQ0ZWYzN2MwYjM5NmIwOTM0MWE0MDNlNzk4M2IyMWE2NTA2YTEw2ACdyA==: ]] 00:27:05.366 10:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTE1MjUxODRjZTQ0ZWYzN2MwYjM5NmIwOTM0MWE0MDNlNzk4M2IyMWE2NTA2YTEw2ACdyA==: 00:27:05.366 10:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:27:05.366 10:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:05.366 10:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:05.366 10:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:05.366 10:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:05.366 10:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:05.366 10:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:05.366 10:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:05.366 10:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:05.366 10:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:05.366 10:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:05.366 10:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:05.366 10:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:05.366 10:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:05.366 10:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:05.366 10:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:05.366 10:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:05.366 10:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:05.366 10:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:05.366 10:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:05.366 10:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:05.366 10:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:05.366 10:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:05.366 10:15:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:05.932 nvme0n1 00:27:05.932 10:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:05.932 10:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:05.932 10:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:05.932 10:15:51 
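
By this point the sweep has finished ffdhe3072 and ffdhe4096 and moved on to ffdhe6144, always with keyids 0 through 4 — the shape of the nested loops at host/auth.sh@101-104. In outline (only the three DH groups and five keyids visible in this stretch are certain; the full array contents are an assumption):

    for dhgroup in "${dhgroups[@]}"; do      # ffdhe3072, ffdhe4096, ffdhe6144, ...
        for keyid in "${!keys[@]}"; do       # 0 1 2 3 4
            nvmet_auth_set_key sha384 "$dhgroup" "$keyid"    # program the target
            connect_authenticate sha384 "$dhgroup" "$keyid"  # host options + attach + verify
        done
    done

Every (dhgroup, keyid) pair gets its own full attach/verify/detach cycle, which is why the same xtrace block recurs throughout this section.
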
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:05.933 10:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:05.933 10:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:05.933 10:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:05.933 10:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:05.933 10:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:05.933 10:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:05.933 10:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:05.933 10:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:05.933 10:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:27:05.933 10:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:05.933 10:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:05.933 10:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:05.933 10:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:05.933 10:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGE2MjVkZjAxMTBjMDViOGZjMzUwOGI2MmJjZDk2Y2UJepV+: 00:27:05.933 10:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWI4NTc5OTBmOGJkYmViYjgwOGE5YjMxNTIxYTMwYWF/Qx7D: 00:27:05.933 10:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:05.933 10:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:05.933 10:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGE2MjVkZjAxMTBjMDViOGZjMzUwOGI2MmJjZDk2Y2UJepV+: 00:27:05.933 10:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWI4NTc5OTBmOGJkYmViYjgwOGE5YjMxNTIxYTMwYWF/Qx7D: ]] 00:27:05.933 10:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWI4NTc5OTBmOGJkYmViYjgwOGE5YjMxNTIxYTMwYWF/Qx7D: 00:27:05.933 10:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:27:05.933 10:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:05.933 10:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:05.933 10:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:05.933 10:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:05.933 10:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:05.933 10:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:05.933 10:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:05.933 10:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.191 10:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:06.191 10:15:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:06.191 10:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:06.191 10:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:06.191 10:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:06.191 10:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:06.191 10:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:06.191 10:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:06.191 10:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:06.191 10:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:06.191 10:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:06.191 10:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:06.191 10:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:06.191 10:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:06.191 10:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.786 nvme0n1 00:27:06.786 10:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:06.786 10:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:06.786 10:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:06.786 10:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.786 10:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:06.786 10:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:06.786 10:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:06.786 10:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:06.786 10:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:06.786 10:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.786 10:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:06.786 10:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:06.786 10:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:27:06.786 10:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:06.786 10:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:06.786 10:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:06.786 10:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:06.786 10:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:MzY0YTQyOTAxY2I3ZTNmNTkwOGE5M2Y0NDEzOWU5Y2U3ZjNlMWFlYTFhODBmMmE3up7JUQ==: 00:27:06.786 10:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTExY2E4MTk2Mjk4YzdkMTRiMDcyYTRlNzZiNjNiNTDQl/Qu: 00:27:06.786 10:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:06.786 10:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:06.786 10:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzY0YTQyOTAxY2I3ZTNmNTkwOGE5M2Y0NDEzOWU5Y2U3ZjNlMWFlYTFhODBmMmE3up7JUQ==: 00:27:06.786 10:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTExY2E4MTk2Mjk4YzdkMTRiMDcyYTRlNzZiNjNiNTDQl/Qu: ]] 00:27:06.786 10:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTExY2E4MTk2Mjk4YzdkMTRiMDcyYTRlNzZiNjNiNTDQl/Qu: 00:27:06.786 10:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:27:06.786 10:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:06.786 10:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:06.786 10:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:06.786 10:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:06.786 10:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:06.786 10:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:06.786 10:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:06.786 10:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:06.786 10:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:06.786 10:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:06.786 10:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:06.786 10:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:06.786 10:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:06.786 10:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:06.786 10:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:06.786 10:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:06.786 10:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:06.786 10:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:06.786 10:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:06.786 10:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:06.786 10:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:06.786 10:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:06.786 
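[Editor's note] Each connect_authenticate pass visible in the trace drives the SPDK initiator through the same RPC sequence: restrict the allowed digests and DH groups, then attach with the matching keyring entries. Condensed from the rpc_cmd calls above (rpc_cmd appears to wrap SPDK's scripts/rpc.py; that wrapping is an assumption):

    # Hedged sketch of one connect_authenticate pass (sha384 / ffdhe6144 / key3),
    # reconstructed from the rpc_cmd calls in the trace above.
    rpc=./scripts/rpc.py   # rpc_cmd in the trace appears to wrap this script

    $rpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
    $rpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key3 --dhchap-ctrlr-key ckey3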
10:15:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:07.719 nvme0n1 00:27:07.719 10:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:07.719 10:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:07.719 10:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:07.719 10:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:07.719 10:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:07.719 10:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:07.719 10:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:07.719 10:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:07.719 10:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:07.719 10:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:07.719 10:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:07.719 10:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:07.719 10:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:27:07.719 10:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:07.719 10:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:07.719 10:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:07.719 10:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:07.719 10:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:M2U1MmMyMjIwZmRiYWM3OWQ5YmYzNmM3MjNhMmIwNmQ2NWU4YjE1NDE4MDBiMDNlZmRmMTkxOGNlZWRmNDgzZtUKgKU=: 00:27:07.719 10:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:07.719 10:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:07.719 10:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:07.720 10:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:M2U1MmMyMjIwZmRiYWM3OWQ5YmYzNmM3MjNhMmIwNmQ2NWU4YjE1NDE4MDBiMDNlZmRmMTkxOGNlZWRmNDgzZtUKgKU=: 00:27:07.720 10:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:07.720 10:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:27:07.720 10:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:07.720 10:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:07.720 10:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:07.720 10:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:07.720 10:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:07.720 10:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:27:07.720 10:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:27:07.720 10:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:07.720 10:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:07.720 10:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:07.720 10:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:07.720 10:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:07.720 10:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:07.720 10:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:07.720 10:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:07.720 10:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:07.720 10:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:07.720 10:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:07.720 10:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:07.720 10:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:07.720 10:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:07.720 10:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:07.720 10:15:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:08.286 nvme0n1 00:27:08.286 10:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:08.286 10:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:08.286 10:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:08.286 10:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:08.286 10:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:08.286 10:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:08.286 10:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:08.286 10:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:08.286 10:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:08.286 10:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:08.286 10:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:08.286 10:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:08.286 10:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:08.286 10:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:27:08.286 10:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:08.286 10:15:53 
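[Editor's note] The get_main_ns_ip fragment that repeats before every attach resolves the address indirectly: it maps the transport name to the name of an environment variable, then dereferences that variable. A sketch of the logic as it reads from the trace (TEST_TRANSPORT and the ${!ip} indirection are inferred; the trace only shows the already-expanded values tcp, NVMF_INITIATOR_IP, and 10.0.0.1):

    # Hedged reconstruction of get_main_ns_ip from the repeated trace fragment.
    get_main_ns_ip() {
        local ip
        local -A ip_candidates
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
        ip_candidates["tcp"]=NVMF_INITIATOR_IP

        [[ -z $TEST_TRANSPORT ]] && return 1                    # trace: [[ -z tcp ]]
        [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1  # trace: [[ -z NVMF_INITIATOR_IP ]]
        ip=${ip_candidates[$TEST_TRANSPORT]}
        ip=${!ip}                                               # NVMF_INITIATOR_IP -> 10.0.0.1
        [[ -z $ip ]] && return 1                                # trace: [[ -z 10.0.0.1 ]]
        echo "$ip"
    }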
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:08.286 10:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:08.286 10:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:08.286 10:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmU5YzBlZWZkMmRlMGViMmRiNDk4NmVlMjRlMTAzZjX5F6ol: 00:27:08.286 10:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NWI2NjdmMjVkZDg4NWIyOGQ0MzI0MmMzOGRjZjg3OWU4OGFlYmRhOTIzZGUyOTc0NDA5MjgyNDc1OTJlMTllObrkimg=: 00:27:08.286 10:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:08.286 10:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:08.286 10:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmU5YzBlZWZkMmRlMGViMmRiNDk4NmVlMjRlMTAzZjX5F6ol: 00:27:08.286 10:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NWI2NjdmMjVkZDg4NWIyOGQ0MzI0MmMzOGRjZjg3OWU4OGFlYmRhOTIzZGUyOTc0NDA5MjgyNDc1OTJlMTllObrkimg=: ]] 00:27:08.287 10:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NWI2NjdmMjVkZDg4NWIyOGQ0MzI0MmMzOGRjZjg3OWU4OGFlYmRhOTIzZGUyOTc0NDA5MjgyNDc1OTJlMTllObrkimg=: 00:27:08.287 10:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:27:08.287 10:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:08.287 10:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:08.287 10:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:08.287 10:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:08.287 10:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:08.287 10:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:08.287 10:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:08.287 10:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:08.287 10:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:08.287 10:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:08.287 10:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:08.287 10:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:08.287 10:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:08.287 10:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:08.287 10:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:08.287 10:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:08.287 10:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:08.287 10:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:08.287 10:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:08.287 10:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:08.287 10:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:08.287 10:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:08.287 10:15:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:09.661 nvme0n1 00:27:09.661 10:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:09.661 10:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:09.661 10:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:09.661 10:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:09.661 10:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:09.661 10:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:09.661 10:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:09.661 10:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:09.661 10:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:09.661 10:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:09.661 10:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:09.661 10:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:09.661 10:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:27:09.662 10:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:09.662 10:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:09.662 10:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:09.662 10:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:09.662 10:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2YyYWE5YjI0NTkxZGIxYTZmNzA0OGE4NjFiNjRjMWYzOGM4ZTEzNjE2Yjg1M2QzK+wtZA==: 00:27:09.662 10:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTE1MjUxODRjZTQ0ZWYzN2MwYjM5NmIwOTM0MWE0MDNlNzk4M2IyMWE2NTA2YTEw2ACdyA==: 00:27:09.662 10:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:09.662 10:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:09.662 10:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2YyYWE5YjI0NTkxZGIxYTZmNzA0OGE4NjFiNjRjMWYzOGM4ZTEzNjE2Yjg1M2QzK+wtZA==: 00:27:09.662 10:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTE1MjUxODRjZTQ0ZWYzN2MwYjM5NmIwOTM0MWE0MDNlNzk4M2IyMWE2NTA2YTEw2ACdyA==: ]] 00:27:09.662 10:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTE1MjUxODRjZTQ0ZWYzN2MwYjM5NmIwOTM0MWE0MDNlNzk4M2IyMWE2NTA2YTEw2ACdyA==: 00:27:09.662 10:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:27:09.662 10:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:09.662 10:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:09.662 10:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:09.662 10:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:09.662 10:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:09.662 10:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:09.662 10:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:09.662 10:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:09.662 10:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:09.662 10:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:09.662 10:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:09.662 10:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:09.662 10:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:09.662 10:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:09.662 10:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:09.662 10:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:09.662 10:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:09.662 10:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:09.662 10:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:09.662 10:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:09.662 10:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:09.662 10:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:09.662 10:15:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.036 nvme0n1 00:27:11.036 10:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:11.036 10:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:11.036 10:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:11.036 10:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:11.036 10:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.036 10:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:11.036 10:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:11.036 10:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:11.036 10:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:27:11.036 10:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.036 10:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:11.036 10:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:11.036 10:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:27:11.036 10:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:11.036 10:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:11.036 10:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:11.036 10:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:11.036 10:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGE2MjVkZjAxMTBjMDViOGZjMzUwOGI2MmJjZDk2Y2UJepV+: 00:27:11.036 10:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWI4NTc5OTBmOGJkYmViYjgwOGE5YjMxNTIxYTMwYWF/Qx7D: 00:27:11.036 10:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:11.036 10:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:11.036 10:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGE2MjVkZjAxMTBjMDViOGZjMzUwOGI2MmJjZDk2Y2UJepV+: 00:27:11.036 10:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWI4NTc5OTBmOGJkYmViYjgwOGE5YjMxNTIxYTMwYWF/Qx7D: ]] 00:27:11.036 10:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWI4NTc5OTBmOGJkYmViYjgwOGE5YjMxNTIxYTMwYWF/Qx7D: 00:27:11.036 10:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:27:11.036 10:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:11.036 10:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:11.036 10:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:11.036 10:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:11.036 10:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:11.036 10:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:11.036 10:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:11.036 10:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.036 10:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:11.036 10:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:11.036 10:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:11.036 10:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:11.036 10:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:11.036 10:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:11.036 10:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:11.036 
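[Editor's note] All of the secrets in this run share the DHHC-1 representation from the NVMe DH-HMAC-CHAP secret format: DHHC-1:<t>:<base64>:, where <t> is 00 for an untransformed secret and 01/02/03 for a secret pre-hashed with SHA-256/-384/-512, and the base64 payload carries the secret followed by a CRC-32. A hedged helper for splitting a key into those fields (the function name is hypothetical, not part of auth.sh):

    # Hypothetical helper, not part of auth.sh: split a DHHC-1 secret into its
    # fields, assuming the format DHHC-1:<t>:<base64(secret || crc32)>: .
    dhchap_key_fields() {
        local tag t b64 rest
        IFS=: read -r tag t b64 rest <<< "$1"
        [[ $tag == DHHC-1 ]] || return 1
        echo "transform=$t"   # 00=none, 01=SHA-256, 02=SHA-384, 03=SHA-512
        echo "decoded_bytes=$(printf '%s' "$b64" | base64 -d | wc -c)"  # secret + 4-byte CRC
    }

    dhchap_key_fields 'DHHC-1:01:NGE2MjVkZjAxMTBjMDViOGZjMzUwOGI2MmJjZDk2Y2UJepV+:'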
10:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:11.036 10:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:11.036 10:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:11.036 10:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:11.036 10:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:11.036 10:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:11.036 10:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:11.036 10:15:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.969 nvme0n1 00:27:11.969 10:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:11.969 10:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:11.969 10:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:11.969 10:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.969 10:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:11.969 10:15:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:11.969 10:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:11.969 10:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:11.969 10:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:11.969 10:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.969 10:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:11.969 10:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:11.969 10:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:27:11.969 10:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:11.969 10:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:11.969 10:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:11.969 10:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:11.969 10:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzY0YTQyOTAxY2I3ZTNmNTkwOGE5M2Y0NDEzOWU5Y2U3ZjNlMWFlYTFhODBmMmE3up7JUQ==: 00:27:11.969 10:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTExY2E4MTk2Mjk4YzdkMTRiMDcyYTRlNzZiNjNiNTDQl/Qu: 00:27:11.969 10:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:11.969 10:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:11.969 10:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzY0YTQyOTAxY2I3ZTNmNTkwOGE5M2Y0NDEzOWU5Y2U3ZjNlMWFlYTFhODBmMmE3up7JUQ==: 00:27:11.969 10:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:NTExY2E4MTk2Mjk4YzdkMTRiMDcyYTRlNzZiNjNiNTDQl/Qu: ]] 00:27:11.969 10:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTExY2E4MTk2Mjk4YzdkMTRiMDcyYTRlNzZiNjNiNTDQl/Qu: 00:27:11.969 10:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:27:11.969 10:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:11.969 10:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:11.969 10:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:11.969 10:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:11.969 10:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:11.969 10:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:11.969 10:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:11.969 10:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:11.969 10:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:11.969 10:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:11.969 10:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:11.969 10:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:11.969 10:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:11.969 10:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:11.969 10:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:11.969 10:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:11.969 10:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:11.969 10:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:11.969 10:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:11.969 10:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:11.969 10:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:11.969 10:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:11.969 10:15:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.341 nvme0n1 00:27:13.341 10:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:13.341 10:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:13.341 10:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:13.341 10:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.341 10:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:13.341 10:15:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:13.341 10:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:13.341 10:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:13.341 10:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:13.341 10:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.341 10:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:13.341 10:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:13.341 10:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:27:13.341 10:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:13.341 10:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:27:13.341 10:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:13.341 10:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:13.341 10:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:M2U1MmMyMjIwZmRiYWM3OWQ5YmYzNmM3MjNhMmIwNmQ2NWU4YjE1NDE4MDBiMDNlZmRmMTkxOGNlZWRmNDgzZtUKgKU=: 00:27:13.341 10:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:13.341 10:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:27:13.341 10:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:13.341 10:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:M2U1MmMyMjIwZmRiYWM3OWQ5YmYzNmM3MjNhMmIwNmQ2NWU4YjE1NDE4MDBiMDNlZmRmMTkxOGNlZWRmNDgzZtUKgKU=: 00:27:13.341 10:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:13.341 10:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:27:13.341 10:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:13.341 10:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:27:13.341 10:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:13.341 10:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:13.341 10:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:13.341 10:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:27:13.341 10:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:13.341 10:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:13.341 10:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:13.341 10:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:13.341 10:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:13.341 10:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:13.342 10:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:13.342 10:15:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:13.342 10:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:13.342 10:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:13.342 10:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:13.342 10:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:13.342 10:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:13.342 10:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:13.342 10:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:13.342 10:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:13.342 10:15:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:14.274 nvme0n1 00:27:14.274 10:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:14.274 10:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:14.274 10:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:14.274 10:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:14.274 10:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:14.274 10:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:14.533 10:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:14.533 10:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:14.533 10:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:14.533 10:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:14.533 10:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:14.533 10:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:27:14.533 10:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:14.533 10:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:14.533 10:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:27:14.533 10:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:14.533 10:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:14.533 10:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:14.533 10:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:14.533 10:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmU5YzBlZWZkMmRlMGViMmRiNDk4NmVlMjRlMTAzZjX5F6ol: 00:27:14.533 10:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:NWI2NjdmMjVkZDg4NWIyOGQ0MzI0MmMzOGRjZjg3OWU4OGFlYmRhOTIzZGUyOTc0NDA5MjgyNDc1OTJlMTllObrkimg=: 00:27:14.533 10:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:14.533 10:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:14.533 10:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmU5YzBlZWZkMmRlMGViMmRiNDk4NmVlMjRlMTAzZjX5F6ol: 00:27:14.533 10:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NWI2NjdmMjVkZDg4NWIyOGQ0MzI0MmMzOGRjZjg3OWU4OGFlYmRhOTIzZGUyOTc0NDA5MjgyNDc1OTJlMTllObrkimg=: ]] 00:27:14.533 10:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NWI2NjdmMjVkZDg4NWIyOGQ0MzI0MmMzOGRjZjg3OWU4OGFlYmRhOTIzZGUyOTc0NDA5MjgyNDc1OTJlMTllObrkimg=: 00:27:14.533 10:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:27:14.533 10:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:14.533 10:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:14.533 10:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:14.533 10:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:14.533 10:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:14.533 10:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:14.533 10:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:14.533 10:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:14.533 10:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:14.533 10:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:14.533 10:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:14.533 10:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:14.533 10:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:14.533 10:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:14.533 10:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:14.533 10:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:14.533 10:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:14.533 10:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:14.533 10:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:14.533 10:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:14.533 10:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:14.533 10:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:14.533 10:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:27:14.533 nvme0n1 00:27:14.533 10:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:14.533 10:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:14.533 10:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:14.533 10:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:14.533 10:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:14.533 10:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:14.791 10:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:14.791 10:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:14.791 10:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:14.791 10:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:14.791 10:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:14.791 10:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:14.791 10:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:27:14.791 10:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:14.791 10:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:14.791 10:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:14.791 10:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:14.791 10:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2YyYWE5YjI0NTkxZGIxYTZmNzA0OGE4NjFiNjRjMWYzOGM4ZTEzNjE2Yjg1M2QzK+wtZA==: 00:27:14.791 10:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTE1MjUxODRjZTQ0ZWYzN2MwYjM5NmIwOTM0MWE0MDNlNzk4M2IyMWE2NTA2YTEw2ACdyA==: 00:27:14.791 10:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:14.791 10:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:14.791 10:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2YyYWE5YjI0NTkxZGIxYTZmNzA0OGE4NjFiNjRjMWYzOGM4ZTEzNjE2Yjg1M2QzK+wtZA==: 00:27:14.791 10:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTE1MjUxODRjZTQ0ZWYzN2MwYjM5NmIwOTM0MWE0MDNlNzk4M2IyMWE2NTA2YTEw2ACdyA==: ]] 00:27:14.791 10:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTE1MjUxODRjZTQ0ZWYzN2MwYjM5NmIwOTM0MWE0MDNlNzk4M2IyMWE2NTA2YTEw2ACdyA==: 00:27:14.791 10:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:27:14.791 10:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:14.791 10:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:14.791 10:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:14.791 10:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:14.791 10:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:27:14.791 10:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:14.791 10:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:14.791 10:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:14.791 10:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:14.791 10:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:14.791 10:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:14.791 10:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:14.791 10:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:14.792 10:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:14.792 10:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:14.792 10:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:14.792 10:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:14.792 10:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:14.792 10:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:14.792 10:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:14.792 10:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:14.792 10:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:14.792 10:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:14.792 nvme0n1 00:27:14.792 10:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:14.792 10:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:14.792 10:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:14.792 10:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:14.792 10:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.048 10:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:15.048 10:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:15.048 10:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:15.048 10:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:15.048 10:15:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.048 10:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:15.048 10:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:15.048 10:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:27:15.048 
10:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:15.048 10:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:15.048 10:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:15.048 10:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:15.048 10:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGE2MjVkZjAxMTBjMDViOGZjMzUwOGI2MmJjZDk2Y2UJepV+: 00:27:15.048 10:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWI4NTc5OTBmOGJkYmViYjgwOGE5YjMxNTIxYTMwYWF/Qx7D: 00:27:15.048 10:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:15.048 10:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:15.048 10:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGE2MjVkZjAxMTBjMDViOGZjMzUwOGI2MmJjZDk2Y2UJepV+: 00:27:15.048 10:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWI4NTc5OTBmOGJkYmViYjgwOGE5YjMxNTIxYTMwYWF/Qx7D: ]] 00:27:15.048 10:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWI4NTc5OTBmOGJkYmViYjgwOGE5YjMxNTIxYTMwYWF/Qx7D: 00:27:15.048 10:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:27:15.048 10:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:15.048 10:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:15.048 10:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:15.048 10:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:15.048 10:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:15.048 10:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:15.048 10:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:15.048 10:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.049 10:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:15.049 10:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:15.049 10:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:15.049 10:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:15.049 10:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:15.049 10:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:15.049 10:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:15.049 10:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:15.049 10:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:15.049 10:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:15.049 10:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:15.049 10:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:15.049 10:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:15.049 10:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:15.049 10:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.049 nvme0n1 00:27:15.049 10:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:15.049 10:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:15.049 10:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:15.049 10:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.049 10:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:15.049 10:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:15.305 10:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:15.305 10:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:15.305 10:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:15.305 10:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.305 10:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:15.305 10:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:15.305 10:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:27:15.305 10:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:15.305 10:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:15.305 10:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:15.305 10:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:15.305 10:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzY0YTQyOTAxY2I3ZTNmNTkwOGE5M2Y0NDEzOWU5Y2U3ZjNlMWFlYTFhODBmMmE3up7JUQ==: 00:27:15.305 10:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTExY2E4MTk2Mjk4YzdkMTRiMDcyYTRlNzZiNjNiNTDQl/Qu: 00:27:15.306 10:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:15.306 10:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:15.306 10:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzY0YTQyOTAxY2I3ZTNmNTkwOGE5M2Y0NDEzOWU5Y2U3ZjNlMWFlYTFhODBmMmE3up7JUQ==: 00:27:15.306 10:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTExY2E4MTk2Mjk4YzdkMTRiMDcyYTRlNzZiNjNiNTDQl/Qu: ]] 00:27:15.306 10:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTExY2E4MTk2Mjk4YzdkMTRiMDcyYTRlNzZiNjNiNTDQl/Qu: 00:27:15.306 10:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:27:15.306 10:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:15.306 
10:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:15.306 10:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:15.306 10:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:15.306 10:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:15.306 10:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:15.306 10:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:15.306 10:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.306 10:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:15.306 10:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:15.306 10:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:15.306 10:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:15.306 10:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:15.306 10:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:15.306 10:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:15.306 10:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:15.306 10:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:15.306 10:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:15.306 10:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:15.306 10:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:15.306 10:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:15.306 10:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:15.306 10:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.306 nvme0n1 00:27:15.306 10:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:15.306 10:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:15.306 10:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:15.306 10:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.306 10:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:15.306 10:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:15.563 10:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:15.563 10:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:15.563 10:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:15.563 10:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:27:15.563 10:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:15.563 10:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:15.563 10:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:27:15.563 10:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:15.563 10:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:15.563 10:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:15.563 10:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:15.563 10:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:M2U1MmMyMjIwZmRiYWM3OWQ5YmYzNmM3MjNhMmIwNmQ2NWU4YjE1NDE4MDBiMDNlZmRmMTkxOGNlZWRmNDgzZtUKgKU=: 00:27:15.563 10:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:15.563 10:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:15.563 10:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:15.563 10:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:M2U1MmMyMjIwZmRiYWM3OWQ5YmYzNmM3MjNhMmIwNmQ2NWU4YjE1NDE4MDBiMDNlZmRmMTkxOGNlZWRmNDgzZtUKgKU=: 00:27:15.563 10:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:15.563 10:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:27:15.563 10:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:15.563 10:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:15.563 10:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:27:15.563 10:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:15.563 10:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:15.563 10:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:27:15.563 10:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:15.563 10:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.563 10:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:15.563 10:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:15.563 10:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:15.563 10:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:15.563 10:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:15.563 10:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:15.563 10:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:15.563 10:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:15.563 10:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:15.563 10:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:15.563 10:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:15.563 10:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:15.563 10:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:15.563 10:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:15.563 10:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.563 nvme0n1 00:27:15.563 10:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:15.563 10:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:15.563 10:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:15.563 10:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.563 10:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:15.563 10:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:15.820 10:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:15.820 10:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:15.820 10:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:15.820 10:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.820 10:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:15.820 10:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:15.820 10:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:15.820 10:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:27:15.820 10:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:15.820 10:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:15.820 10:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:15.820 10:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:15.820 10:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmU5YzBlZWZkMmRlMGViMmRiNDk4NmVlMjRlMTAzZjX5F6ol: 00:27:15.820 10:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NWI2NjdmMjVkZDg4NWIyOGQ0MzI0MmMzOGRjZjg3OWU4OGFlYmRhOTIzZGUyOTc0NDA5MjgyNDc1OTJlMTllObrkimg=: 00:27:15.820 10:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:15.820 10:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:15.820 10:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmU5YzBlZWZkMmRlMGViMmRiNDk4NmVlMjRlMTAzZjX5F6ol: 00:27:15.820 10:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NWI2NjdmMjVkZDg4NWIyOGQ0MzI0MmMzOGRjZjg3OWU4OGFlYmRhOTIzZGUyOTc0NDA5MjgyNDc1OTJlMTllObrkimg=: ]] 00:27:15.820 10:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:NWI2NjdmMjVkZDg4NWIyOGQ0MzI0MmMzOGRjZjg3OWU4OGFlYmRhOTIzZGUyOTc0NDA5MjgyNDc1OTJlMTllObrkimg=: 00:27:15.820 10:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:27:15.820 10:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:15.820 10:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:15.820 10:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:15.820 10:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:15.820 10:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:15.820 10:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:15.820 10:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:15.820 10:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.820 10:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:15.820 10:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:15.820 10:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:15.820 10:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:15.820 10:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:15.820 10:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:15.820 10:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:15.820 10:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:15.820 10:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:15.820 10:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:15.820 10:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:15.820 10:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:15.820 10:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:15.820 10:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:15.820 10:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.092 nvme0n1 00:27:16.092 10:16:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:16.092 10:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:16.092 10:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:16.092 10:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:16.092 10:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.092 10:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:16.092 
10:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:16.092 10:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:16.092 10:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:16.092 10:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.092 10:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:16.092 10:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:16.092 10:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:27:16.092 10:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:16.092 10:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:16.092 10:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:16.092 10:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:16.092 10:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2YyYWE5YjI0NTkxZGIxYTZmNzA0OGE4NjFiNjRjMWYzOGM4ZTEzNjE2Yjg1M2QzK+wtZA==: 00:27:16.092 10:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTE1MjUxODRjZTQ0ZWYzN2MwYjM5NmIwOTM0MWE0MDNlNzk4M2IyMWE2NTA2YTEw2ACdyA==: 00:27:16.092 10:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:16.092 10:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:16.092 10:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2YyYWE5YjI0NTkxZGIxYTZmNzA0OGE4NjFiNjRjMWYzOGM4ZTEzNjE2Yjg1M2QzK+wtZA==: 00:27:16.092 10:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTE1MjUxODRjZTQ0ZWYzN2MwYjM5NmIwOTM0MWE0MDNlNzk4M2IyMWE2NTA2YTEw2ACdyA==: ]] 00:27:16.092 10:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTE1MjUxODRjZTQ0ZWYzN2MwYjM5NmIwOTM0MWE0MDNlNzk4M2IyMWE2NTA2YTEw2ACdyA==: 00:27:16.092 10:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:27:16.092 10:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:16.092 10:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:16.092 10:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:16.092 10:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:16.092 10:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:16.092 10:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:16.092 10:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:16.092 10:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.092 10:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:16.092 10:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:16.092 10:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:16.092 10:16:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:16.092 10:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:16.092 10:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:16.092 10:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:16.092 10:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:16.092 10:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:16.092 10:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:16.092 10:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:16.092 10:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:16.092 10:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:16.092 10:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:16.092 10:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.378 nvme0n1 00:27:16.378 10:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:16.378 10:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:16.378 10:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:16.378 10:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.378 10:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:16.378 10:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:16.378 10:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:16.378 10:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:16.378 10:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:16.378 10:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.378 10:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:16.378 10:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:16.378 10:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:27:16.378 10:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:16.378 10:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:16.378 10:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:16.378 10:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:16.378 10:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGE2MjVkZjAxMTBjMDViOGZjMzUwOGI2MmJjZDk2Y2UJepV+: 00:27:16.378 10:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWI4NTc5OTBmOGJkYmViYjgwOGE5YjMxNTIxYTMwYWF/Qx7D: 00:27:16.378 10:16:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:16.378 10:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:16.378 10:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGE2MjVkZjAxMTBjMDViOGZjMzUwOGI2MmJjZDk2Y2UJepV+: 00:27:16.378 10:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWI4NTc5OTBmOGJkYmViYjgwOGE5YjMxNTIxYTMwYWF/Qx7D: ]] 00:27:16.378 10:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWI4NTc5OTBmOGJkYmViYjgwOGE5YjMxNTIxYTMwYWF/Qx7D: 00:27:16.378 10:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:27:16.378 10:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:16.378 10:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:16.378 10:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:16.378 10:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:16.378 10:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:16.378 10:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:16.378 10:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:16.378 10:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.378 10:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:16.378 10:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:16.378 10:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:16.378 10:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:16.378 10:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:16.378 10:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:16.378 10:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:16.378 10:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:16.378 10:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:16.378 10:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:16.378 10:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:16.378 10:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:16.378 10:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:16.378 10:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:16.378 10:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.636 nvme0n1 00:27:16.636 10:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:16.636 10:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:16.636 10:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:16.636 10:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:16.636 10:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.636 10:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:16.636 10:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:16.636 10:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:16.636 10:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:16.636 10:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.636 10:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:16.636 10:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:16.636 10:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:27:16.636 10:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:16.636 10:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:16.636 10:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:16.636 10:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:16.636 10:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzY0YTQyOTAxY2I3ZTNmNTkwOGE5M2Y0NDEzOWU5Y2U3ZjNlMWFlYTFhODBmMmE3up7JUQ==: 00:27:16.636 10:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTExY2E4MTk2Mjk4YzdkMTRiMDcyYTRlNzZiNjNiNTDQl/Qu: 00:27:16.636 10:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:16.636 10:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:16.636 10:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzY0YTQyOTAxY2I3ZTNmNTkwOGE5M2Y0NDEzOWU5Y2U3ZjNlMWFlYTFhODBmMmE3up7JUQ==: 00:27:16.636 10:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTExY2E4MTk2Mjk4YzdkMTRiMDcyYTRlNzZiNjNiNTDQl/Qu: ]] 00:27:16.636 10:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTExY2E4MTk2Mjk4YzdkMTRiMDcyYTRlNzZiNjNiNTDQl/Qu: 00:27:16.636 10:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:27:16.636 10:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:16.636 10:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:16.636 10:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:16.636 10:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:16.636 10:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:16.636 10:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:16.636 10:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:16.636 10:16:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.636 10:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:16.636 10:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:16.636 10:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:16.636 10:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:16.636 10:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:16.636 10:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:16.636 10:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:16.636 10:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:16.636 10:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:16.636 10:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:16.636 10:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:16.636 10:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:16.636 10:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:16.636 10:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:16.636 10:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.894 nvme0n1 00:27:16.894 10:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:16.894 10:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:16.894 10:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:16.894 10:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.894 10:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:16.894 10:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:16.894 10:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:16.894 10:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:16.894 10:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:16.894 10:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.894 10:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:16.894 10:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:16.894 10:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:27:16.894 10:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:16.894 10:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:16.894 10:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:16.894 
10:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:16.894 10:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:M2U1MmMyMjIwZmRiYWM3OWQ5YmYzNmM3MjNhMmIwNmQ2NWU4YjE1NDE4MDBiMDNlZmRmMTkxOGNlZWRmNDgzZtUKgKU=: 00:27:16.894 10:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:16.894 10:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:16.894 10:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:27:16.894 10:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:M2U1MmMyMjIwZmRiYWM3OWQ5YmYzNmM3MjNhMmIwNmQ2NWU4YjE1NDE4MDBiMDNlZmRmMTkxOGNlZWRmNDgzZtUKgKU=: 00:27:16.894 10:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:16.894 10:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:27:16.894 10:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:16.894 10:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:16.894 10:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:27:16.894 10:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:16.894 10:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:16.894 10:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:27:16.894 10:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:16.894 10:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.894 10:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:16.894 10:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:16.894 10:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:16.894 10:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:16.894 10:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:16.894 10:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:16.894 10:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:16.894 10:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:16.894 10:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:16.894 10:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:16.894 10:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:16.894 10:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:16.894 10:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:16.894 10:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:16.894 10:16:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
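
The cycle this log repeats for every (digest, dhgroup, keyid) combination distills to four steps: nvmet_auth_set_key installs the DHHC-1 secret together with 'hmac(sha512)' and the ffdhe* group on the target side (the auth.sh@44-51 echoes above), bdev_nvme_set_options restricts the host's DH-HMAC-CHAP digests and dhgroups to the pair under test, bdev_nvme_attach_controller performs the authenticated connect, and the pass criterion is bdev_nvme_get_controllers reporting nvme0 before the detach. A minimal standalone sketch of the sha512/ffdhe3072/keyid=4 cycle just completed follows. The kernel nvmet configfs paths and a bare rpc.py on PATH are assumptions (the script's own helpers are not visible in this log); "key4" names a keyring entry loaded earlier in the run, and keyid 4 carries no controller key (ckey=''), so --dhchap-ctrlr-key is omitted, matching the attach line above.

    #!/usr/bin/env bash
    # Sketch of one nvmf_auth_host cycle: sha512 digest, ffdhe3072 dhgroup, keyid=4.
    # Assumed: kernel nvmet target with configfs at /sys/kernel/config/nvmet, SPDK's
    # rpc.py reachable on PATH, host entry already created and allowed on the subsystem.
    hostnqn=nqn.2024-02.io.spdk:host0
    subnqn=nqn.2024-02.io.spdk:cnode0
    key='DHHC-1:03:M2U1MmMyMjIwZmRiYWM3OWQ5YmYzNmM3MjNhMmIwNmQ2NWU4YjE1NDE4MDBiMDNlZmRmMTkxOGNlZWRmNDgzZtUKgKU=:'

    # Target side (what nvmet_auth_set_key's echoes amount to): install the host
    # secret and pin the hash/dhgroup in the nvmet configfs host entry.
    host_cfg=/sys/kernel/config/nvmet/hosts/$hostnqn
    echo "$key"         > "$host_cfg/dhchap_key"
    echo 'hmac(sha512)' > "$host_cfg/dhchap_hash"
    echo ffdhe3072      > "$host_cfg/dhchap_dhgroup"

    # Host side: allow only the digest/dhgroup under test, then authenticate.
    rpc.py bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
    rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q "$hostnqn" -n "$subnqn" --dhchap-key key4

    # Pass criterion used above: the authenticated controller shows up as nvme0.
    [[ $(rpc.py bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    rpc.py bdev_nvme_detach_controller nvme0

A mismatch at any step (digest, dhgroup, or secret) would make the attach RPC fail and nvme0 never appear, which is why the log checks "[[ nvme0 == \n\v\m\e\0 ]]" after every attach rather than relying on the RPC's exit status alone.
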
00:27:17.152 nvme0n1 00:27:17.152 10:16:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:17.152 10:16:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:17.152 10:16:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:17.152 10:16:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:17.153 10:16:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.153 10:16:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:17.153 10:16:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:17.153 10:16:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:17.153 10:16:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:17.153 10:16:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.153 10:16:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:17.153 10:16:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:17.153 10:16:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:17.153 10:16:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:27:17.153 10:16:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:17.153 10:16:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:17.153 10:16:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:17.153 10:16:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:17.153 10:16:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmU5YzBlZWZkMmRlMGViMmRiNDk4NmVlMjRlMTAzZjX5F6ol: 00:27:17.153 10:16:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NWI2NjdmMjVkZDg4NWIyOGQ0MzI0MmMzOGRjZjg3OWU4OGFlYmRhOTIzZGUyOTc0NDA5MjgyNDc1OTJlMTllObrkimg=: 00:27:17.153 10:16:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:17.153 10:16:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:17.153 10:16:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmU5YzBlZWZkMmRlMGViMmRiNDk4NmVlMjRlMTAzZjX5F6ol: 00:27:17.153 10:16:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NWI2NjdmMjVkZDg4NWIyOGQ0MzI0MmMzOGRjZjg3OWU4OGFlYmRhOTIzZGUyOTc0NDA5MjgyNDc1OTJlMTllObrkimg=: ]] 00:27:17.153 10:16:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NWI2NjdmMjVkZDg4NWIyOGQ0MzI0MmMzOGRjZjg3OWU4OGFlYmRhOTIzZGUyOTc0NDA5MjgyNDc1OTJlMTllObrkimg=: 00:27:17.153 10:16:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:27:17.153 10:16:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:17.153 10:16:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:17.153 10:16:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:17.153 10:16:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:17.153 10:16:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:17.153 10:16:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:17.153 10:16:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:17.153 10:16:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.153 10:16:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:17.153 10:16:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:17.153 10:16:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:17.153 10:16:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:17.153 10:16:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:17.153 10:16:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:17.153 10:16:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:17.153 10:16:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:17.153 10:16:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:17.153 10:16:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:17.153 10:16:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:17.153 10:16:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:17.153 10:16:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:17.153 10:16:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:17.153 10:16:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.720 nvme0n1 00:27:17.720 10:16:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:17.720 10:16:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:17.720 10:16:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:17.720 10:16:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:17.720 10:16:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.720 10:16:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:17.720 10:16:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:17.720 10:16:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:17.720 10:16:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:17.720 10:16:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.720 10:16:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:17.720 10:16:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:17.720 10:16:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:27:17.720 10:16:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:17.720 10:16:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:17.720 10:16:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:17.720 10:16:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:17.720 10:16:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2YyYWE5YjI0NTkxZGIxYTZmNzA0OGE4NjFiNjRjMWYzOGM4ZTEzNjE2Yjg1M2QzK+wtZA==: 00:27:17.720 10:16:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTE1MjUxODRjZTQ0ZWYzN2MwYjM5NmIwOTM0MWE0MDNlNzk4M2IyMWE2NTA2YTEw2ACdyA==: 00:27:17.720 10:16:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:17.720 10:16:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:17.720 10:16:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2YyYWE5YjI0NTkxZGIxYTZmNzA0OGE4NjFiNjRjMWYzOGM4ZTEzNjE2Yjg1M2QzK+wtZA==: 00:27:17.720 10:16:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTE1MjUxODRjZTQ0ZWYzN2MwYjM5NmIwOTM0MWE0MDNlNzk4M2IyMWE2NTA2YTEw2ACdyA==: ]] 00:27:17.720 10:16:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTE1MjUxODRjZTQ0ZWYzN2MwYjM5NmIwOTM0MWE0MDNlNzk4M2IyMWE2NTA2YTEw2ACdyA==: 00:27:17.720 10:16:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:27:17.720 10:16:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:17.720 10:16:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:17.720 10:16:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:17.720 10:16:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:17.720 10:16:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:17.720 10:16:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:17.720 10:16:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:17.720 10:16:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.720 10:16:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:17.720 10:16:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:17.720 10:16:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:17.720 10:16:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:17.720 10:16:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:17.720 10:16:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:17.720 10:16:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:17.720 10:16:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:17.720 10:16:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:17.720 10:16:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:17.720 10:16:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:17.720 10:16:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:17.720 10:16:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:17.720 10:16:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:17.720 10:16:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.978 nvme0n1 00:27:17.978 10:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:17.978 10:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:17.978 10:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:17.978 10:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.978 10:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:17.978 10:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:17.978 10:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:17.978 10:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:17.978 10:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:17.978 10:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.978 10:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:17.978 10:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:17.978 10:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:27:17.978 10:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:17.978 10:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:17.978 10:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:17.978 10:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:17.978 10:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGE2MjVkZjAxMTBjMDViOGZjMzUwOGI2MmJjZDk2Y2UJepV+: 00:27:17.978 10:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWI4NTc5OTBmOGJkYmViYjgwOGE5YjMxNTIxYTMwYWF/Qx7D: 00:27:17.978 10:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:17.978 10:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:17.978 10:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGE2MjVkZjAxMTBjMDViOGZjMzUwOGI2MmJjZDk2Y2UJepV+: 00:27:17.978 10:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWI4NTc5OTBmOGJkYmViYjgwOGE5YjMxNTIxYTMwYWF/Qx7D: ]] 00:27:17.978 10:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWI4NTc5OTBmOGJkYmViYjgwOGE5YjMxNTIxYTMwYWF/Qx7D: 00:27:17.978 10:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:27:17.978 10:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:17.978 10:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:17.978 10:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:17.978 10:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:17.978 10:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:17.978 10:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:17.978 10:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:17.978 10:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:17.978 10:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:17.978 10:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:17.978 10:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:17.978 10:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:17.978 10:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:17.978 10:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:17.978 10:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:17.978 10:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:17.978 10:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:17.978 10:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:17.978 10:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:17.978 10:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:17.978 10:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:17.978 10:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:17.978 10:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.542 nvme0n1 00:27:18.542 10:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:18.542 10:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:18.542 10:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:18.542 10:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.542 10:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:18.542 10:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:18.542 10:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:18.542 10:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:27:18.542 10:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:18.542 10:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.542 10:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:18.542 10:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:18.542 10:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:27:18.542 10:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:18.542 10:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:18.542 10:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:18.542 10:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:18.542 10:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzY0YTQyOTAxY2I3ZTNmNTkwOGE5M2Y0NDEzOWU5Y2U3ZjNlMWFlYTFhODBmMmE3up7JUQ==: 00:27:18.542 10:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTExY2E4MTk2Mjk4YzdkMTRiMDcyYTRlNzZiNjNiNTDQl/Qu: 00:27:18.542 10:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:18.542 10:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:18.542 10:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzY0YTQyOTAxY2I3ZTNmNTkwOGE5M2Y0NDEzOWU5Y2U3ZjNlMWFlYTFhODBmMmE3up7JUQ==: 00:27:18.542 10:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTExY2E4MTk2Mjk4YzdkMTRiMDcyYTRlNzZiNjNiNTDQl/Qu: ]] 00:27:18.542 10:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTExY2E4MTk2Mjk4YzdkMTRiMDcyYTRlNzZiNjNiNTDQl/Qu: 00:27:18.542 10:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:27:18.542 10:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:18.542 10:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:18.542 10:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:18.542 10:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:18.542 10:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:18.542 10:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:18.542 10:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:18.542 10:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.542 10:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:18.542 10:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:18.542 10:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:18.542 10:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:18.542 10:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:18.542 10:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:18.542 10:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:18.542 10:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:18.542 10:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:18.542 10:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:18.542 10:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:18.542 10:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:18.542 10:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:18.542 10:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:18.542 10:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.799 nvme0n1 00:27:18.799 10:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:18.799 10:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:18.800 10:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:18.800 10:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:18.800 10:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.800 10:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:18.800 10:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:18.800 10:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:18.800 10:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:18.800 10:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.800 10:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:18.800 10:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:18.800 10:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:27:18.800 10:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:18.800 10:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:18.800 10:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:18.800 10:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:18.800 10:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:M2U1MmMyMjIwZmRiYWM3OWQ5YmYzNmM3MjNhMmIwNmQ2NWU4YjE1NDE4MDBiMDNlZmRmMTkxOGNlZWRmNDgzZtUKgKU=: 00:27:18.800 10:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:18.800 10:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:18.800 10:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:27:18.800 10:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:M2U1MmMyMjIwZmRiYWM3OWQ5YmYzNmM3MjNhMmIwNmQ2NWU4YjE1NDE4MDBiMDNlZmRmMTkxOGNlZWRmNDgzZtUKgKU=: 00:27:18.800 10:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:18.800 10:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:27:18.800 10:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:18.800 10:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:18.800 10:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:27:18.800 10:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:18.800 10:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:18.800 10:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:27:18.800 10:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:18.800 10:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:18.800 10:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:18.800 10:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:18.800 10:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:18.800 10:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:18.800 10:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:18.800 10:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:18.800 10:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:18.800 10:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:18.800 10:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:18.800 10:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:18.800 10:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:18.800 10:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:18.800 10:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:18.800 10:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:18.800 10:16:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.058 nvme0n1 00:27:19.058 10:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:19.058 10:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:19.058 10:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:19.058 10:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:19.058 10:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.058 10:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:19.058 10:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:19.058 10:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:19.058 10:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:19.058 10:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.058 10:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:19.058 10:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:19.058 10:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:19.058 10:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:27:19.058 10:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:19.058 10:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:19.058 10:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:19.058 10:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:19.058 10:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmU5YzBlZWZkMmRlMGViMmRiNDk4NmVlMjRlMTAzZjX5F6ol: 00:27:19.058 10:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NWI2NjdmMjVkZDg4NWIyOGQ0MzI0MmMzOGRjZjg3OWU4OGFlYmRhOTIzZGUyOTc0NDA5MjgyNDc1OTJlMTllObrkimg=: 00:27:19.058 10:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:19.058 10:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:19.058 10:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmU5YzBlZWZkMmRlMGViMmRiNDk4NmVlMjRlMTAzZjX5F6ol: 00:27:19.058 10:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NWI2NjdmMjVkZDg4NWIyOGQ0MzI0MmMzOGRjZjg3OWU4OGFlYmRhOTIzZGUyOTc0NDA5MjgyNDc1OTJlMTllObrkimg=: ]] 00:27:19.058 10:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NWI2NjdmMjVkZDg4NWIyOGQ0MzI0MmMzOGRjZjg3OWU4OGFlYmRhOTIzZGUyOTc0NDA5MjgyNDc1OTJlMTllObrkimg=: 00:27:19.058 10:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:27:19.058 10:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:19.058 10:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:19.058 10:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:19.058 10:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:19.058 10:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:19.058 10:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:19.058 10:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:19.058 10:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.058 10:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:19.058 10:16:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:19.058 10:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:19.058 10:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:19.058 10:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:19.058 10:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:19.058 10:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:19.058 10:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:19.058 10:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:19.058 10:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:19.058 10:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:19.058 10:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:19.058 10:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:19.058 10:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:19.058 10:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.990 nvme0n1 00:27:19.990 10:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:19.990 10:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:19.990 10:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:19.990 10:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:19.990 10:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.990 10:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:19.990 10:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:19.990 10:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:19.990 10:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:19.990 10:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.990 10:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:19.990 10:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:19.990 10:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:27:19.990 10:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:19.990 10:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:19.990 10:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:19.990 10:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:19.990 10:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:Y2YyYWE5YjI0NTkxZGIxYTZmNzA0OGE4NjFiNjRjMWYzOGM4ZTEzNjE2Yjg1M2QzK+wtZA==: 00:27:19.990 10:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTE1MjUxODRjZTQ0ZWYzN2MwYjM5NmIwOTM0MWE0MDNlNzk4M2IyMWE2NTA2YTEw2ACdyA==: 00:27:19.990 10:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:19.990 10:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:19.990 10:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2YyYWE5YjI0NTkxZGIxYTZmNzA0OGE4NjFiNjRjMWYzOGM4ZTEzNjE2Yjg1M2QzK+wtZA==: 00:27:19.990 10:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTE1MjUxODRjZTQ0ZWYzN2MwYjM5NmIwOTM0MWE0MDNlNzk4M2IyMWE2NTA2YTEw2ACdyA==: ]] 00:27:19.990 10:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTE1MjUxODRjZTQ0ZWYzN2MwYjM5NmIwOTM0MWE0MDNlNzk4M2IyMWE2NTA2YTEw2ACdyA==: 00:27:19.990 10:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:27:19.990 10:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:19.990 10:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:19.991 10:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:19.991 10:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:19.991 10:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:19.991 10:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:19.991 10:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:19.991 10:16:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:19.991 10:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:19.991 10:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:19.991 10:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:19.991 10:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:19.991 10:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:19.991 10:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:19.991 10:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:19.991 10:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:19.991 10:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:19.991 10:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:19.991 10:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:19.991 10:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:19.991 10:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:19.991 10:16:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:19.991 10:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.556 nvme0n1 00:27:20.556 10:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:20.556 10:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:20.556 10:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:20.556 10:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:20.556 10:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.556 10:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:20.814 10:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:20.814 10:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:20.814 10:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:20.814 10:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.814 10:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:20.814 10:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:20.814 10:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:27:20.814 10:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:20.814 10:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:20.814 10:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:20.814 10:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:20.814 10:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGE2MjVkZjAxMTBjMDViOGZjMzUwOGI2MmJjZDk2Y2UJepV+: 00:27:20.814 10:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWI4NTc5OTBmOGJkYmViYjgwOGE5YjMxNTIxYTMwYWF/Qx7D: 00:27:20.814 10:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:20.814 10:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:20.814 10:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGE2MjVkZjAxMTBjMDViOGZjMzUwOGI2MmJjZDk2Y2UJepV+: 00:27:20.814 10:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWI4NTc5OTBmOGJkYmViYjgwOGE5YjMxNTIxYTMwYWF/Qx7D: ]] 00:27:20.814 10:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWI4NTc5OTBmOGJkYmViYjgwOGE5YjMxNTIxYTMwYWF/Qx7D: 00:27:20.814 10:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:27:20.814 10:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:20.814 10:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:20.814 10:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:20.814 10:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:20.814 10:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:20.814 10:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:20.814 10:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:20.814 10:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:20.814 10:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:20.814 10:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:20.814 10:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:20.814 10:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:20.814 10:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:20.814 10:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:20.814 10:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:20.814 10:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:20.814 10:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:20.814 10:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:20.814 10:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:20.814 10:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:20.814 10:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:20.814 10:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:20.814 10:16:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.379 nvme0n1 00:27:21.379 10:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:21.379 10:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:21.379 10:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:21.379 10:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:21.379 10:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.379 10:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:21.379 10:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:21.379 10:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:21.379 10:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:21.379 10:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.379 10:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:21.379 10:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:21.379 10:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe6144 3 00:27:21.379 10:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:21.379 10:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:21.379 10:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:21.379 10:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:21.379 10:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MzY0YTQyOTAxY2I3ZTNmNTkwOGE5M2Y0NDEzOWU5Y2U3ZjNlMWFlYTFhODBmMmE3up7JUQ==: 00:27:21.379 10:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTExY2E4MTk2Mjk4YzdkMTRiMDcyYTRlNzZiNjNiNTDQl/Qu: 00:27:21.379 10:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:21.379 10:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:21.379 10:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzY0YTQyOTAxY2I3ZTNmNTkwOGE5M2Y0NDEzOWU5Y2U3ZjNlMWFlYTFhODBmMmE3up7JUQ==: 00:27:21.379 10:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTExY2E4MTk2Mjk4YzdkMTRiMDcyYTRlNzZiNjNiNTDQl/Qu: ]] 00:27:21.379 10:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTExY2E4MTk2Mjk4YzdkMTRiMDcyYTRlNzZiNjNiNTDQl/Qu: 00:27:21.379 10:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:27:21.379 10:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:21.379 10:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:21.379 10:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:21.379 10:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:21.379 10:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:21.379 10:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:21.379 10:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:21.379 10:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:21.379 10:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:21.379 10:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:21.379 10:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:21.380 10:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:21.380 10:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:21.380 10:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:21.380 10:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:21.380 10:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:21.380 10:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:21.380 10:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:21.380 10:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:21.380 10:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:21.380 10:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:21.380 10:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:21.380 10:16:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.312 nvme0n1 00:27:22.312 10:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:22.312 10:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:22.312 10:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:22.312 10:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:22.312 10:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.312 10:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:22.312 10:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:22.312 10:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:22.312 10:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:22.312 10:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.312 10:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:22.312 10:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:22.312 10:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:27:22.312 10:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:22.312 10:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:22.312 10:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:22.312 10:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:22.312 10:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:M2U1MmMyMjIwZmRiYWM3OWQ5YmYzNmM3MjNhMmIwNmQ2NWU4YjE1NDE4MDBiMDNlZmRmMTkxOGNlZWRmNDgzZtUKgKU=: 00:27:22.313 10:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:22.313 10:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:22.313 10:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:27:22.313 10:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:M2U1MmMyMjIwZmRiYWM3OWQ5YmYzNmM3MjNhMmIwNmQ2NWU4YjE1NDE4MDBiMDNlZmRmMTkxOGNlZWRmNDgzZtUKgKU=: 00:27:22.313 10:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:22.313 10:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:27:22.313 10:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:22.313 10:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:22.313 10:16:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:27:22.313 10:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:22.313 10:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:22.313 10:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:27:22.313 10:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:22.313 10:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.313 10:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:22.313 10:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:22.313 10:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:22.313 10:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:22.313 10:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:22.313 10:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:22.313 10:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:22.313 10:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:22.313 10:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:22.313 10:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:22.313 10:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:22.313 10:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:22.313 10:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:22.313 10:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:22.313 10:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.879 nvme0n1 00:27:22.879 10:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:22.879 10:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:22.879 10:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:22.879 10:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.879 10:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:22.879 10:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:22.879 10:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:22.879 10:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:22.879 10:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:22.879 10:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.879 10:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:22.879 10:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:27:22.879 10:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:22.879 10:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:27:22.879 10:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:22.879 10:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:22.879 10:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:22.879 10:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:27:22.879 10:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NmU5YzBlZWZkMmRlMGViMmRiNDk4NmVlMjRlMTAzZjX5F6ol: 00:27:22.879 10:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NWI2NjdmMjVkZDg4NWIyOGQ0MzI0MmMzOGRjZjg3OWU4OGFlYmRhOTIzZGUyOTc0NDA5MjgyNDc1OTJlMTllObrkimg=: 00:27:22.879 10:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:22.879 10:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:22.879 10:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NmU5YzBlZWZkMmRlMGViMmRiNDk4NmVlMjRlMTAzZjX5F6ol: 00:27:22.879 10:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NWI2NjdmMjVkZDg4NWIyOGQ0MzI0MmMzOGRjZjg3OWU4OGFlYmRhOTIzZGUyOTc0NDA5MjgyNDc1OTJlMTllObrkimg=: ]] 00:27:22.879 10:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NWI2NjdmMjVkZDg4NWIyOGQ0MzI0MmMzOGRjZjg3OWU4OGFlYmRhOTIzZGUyOTc0NDA5MjgyNDc1OTJlMTllObrkimg=: 00:27:22.879 10:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:27:22.879 10:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:22.879 10:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:22.879 10:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:22.879 10:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:27:22.879 10:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:22.879 10:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:22.879 10:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:22.879 10:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.879 10:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:22.879 10:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:22.879 10:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:22.879 10:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:22.879 10:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:22.879 10:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:22.879 10:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:22.879 10:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:22.879 10:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:22.879 10:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:22.879 10:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:22.879 10:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:22.879 10:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:27:22.879 10:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:22.879 10:16:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.253 nvme0n1 00:27:24.253 10:16:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:24.253 10:16:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:24.253 10:16:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:24.253 10:16:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.253 10:16:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:24.253 10:16:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:24.253 10:16:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:24.253 10:16:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:24.253 10:16:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:24.253 10:16:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.253 10:16:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:24.253 10:16:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:24.253 10:16:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:27:24.253 10:16:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:24.253 10:16:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:24.253 10:16:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:24.253 10:16:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:24.253 10:16:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2YyYWE5YjI0NTkxZGIxYTZmNzA0OGE4NjFiNjRjMWYzOGM4ZTEzNjE2Yjg1M2QzK+wtZA==: 00:27:24.253 10:16:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTE1MjUxODRjZTQ0ZWYzN2MwYjM5NmIwOTM0MWE0MDNlNzk4M2IyMWE2NTA2YTEw2ACdyA==: 00:27:24.253 10:16:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:24.253 10:16:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:24.253 10:16:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:Y2YyYWE5YjI0NTkxZGIxYTZmNzA0OGE4NjFiNjRjMWYzOGM4ZTEzNjE2Yjg1M2QzK+wtZA==: 00:27:24.253 10:16:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTE1MjUxODRjZTQ0ZWYzN2MwYjM5NmIwOTM0MWE0MDNlNzk4M2IyMWE2NTA2YTEw2ACdyA==: ]] 00:27:24.253 10:16:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTE1MjUxODRjZTQ0ZWYzN2MwYjM5NmIwOTM0MWE0MDNlNzk4M2IyMWE2NTA2YTEw2ACdyA==: 00:27:24.253 10:16:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:27:24.253 10:16:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:24.253 10:16:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:24.253 10:16:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:24.253 10:16:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:24.253 10:16:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:24.253 10:16:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:24.253 10:16:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:24.253 10:16:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:24.253 10:16:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:24.253 10:16:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:24.253 10:16:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:24.253 10:16:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:24.253 10:16:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:24.253 10:16:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:24.253 10:16:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:24.253 10:16:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:24.253 10:16:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:24.253 10:16:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:24.253 10:16:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:24.253 10:16:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:24.253 10:16:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:24.253 10:16:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:24.253 10:16:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.188 nvme0n1 00:27:25.188 10:16:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:25.188 10:16:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:25.188 10:16:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:25.188 10:16:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:25.188 10:16:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.188 10:16:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:25.188 10:16:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:25.188 10:16:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:25.188 10:16:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:25.188 10:16:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.188 10:16:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:25.188 10:16:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:25.188 10:16:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:27:25.188 10:16:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:25.188 10:16:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:25.188 10:16:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:25.188 10:16:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:27:25.188 10:16:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NGE2MjVkZjAxMTBjMDViOGZjMzUwOGI2MmJjZDk2Y2UJepV+: 00:27:25.188 10:16:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWI4NTc5OTBmOGJkYmViYjgwOGE5YjMxNTIxYTMwYWF/Qx7D: 00:27:25.188 10:16:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:25.188 10:16:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:25.188 10:16:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NGE2MjVkZjAxMTBjMDViOGZjMzUwOGI2MmJjZDk2Y2UJepV+: 00:27:25.188 10:16:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWI4NTc5OTBmOGJkYmViYjgwOGE5YjMxNTIxYTMwYWF/Qx7D: ]] 00:27:25.188 10:16:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWI4NTc5OTBmOGJkYmViYjgwOGE5YjMxNTIxYTMwYWF/Qx7D: 00:27:25.188 10:16:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:27:25.188 10:16:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:25.188 10:16:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:25.188 10:16:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:25.188 10:16:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:27:25.188 10:16:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:25.188 10:16:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:25.188 10:16:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:25.188 10:16:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.188 10:16:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:25.188 10:16:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:25.188 10:16:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:25.188 10:16:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:25.188 10:16:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:25.188 10:16:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:25.188 10:16:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:25.188 10:16:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:25.188 10:16:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:25.188 10:16:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:25.188 10:16:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:25.188 10:16:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:25.188 10:16:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:27:25.188 10:16:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:25.188 10:16:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.122 nvme0n1 00:27:26.122 10:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:26.122 10:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:26.122 10:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:26.122 10:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.122 10:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:26.122 10:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:26.380 10:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:26.380 10:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:26.380 10:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:26.380 10:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.380 10:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:26.380 10:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:26.380 10:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:27:26.380 10:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:26.380 10:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:26.380 10:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:26.380 10:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:27:26.380 10:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:MzY0YTQyOTAxY2I3ZTNmNTkwOGE5M2Y0NDEzOWU5Y2U3ZjNlMWFlYTFhODBmMmE3up7JUQ==: 00:27:26.380 10:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTExY2E4MTk2Mjk4YzdkMTRiMDcyYTRlNzZiNjNiNTDQl/Qu: 00:27:26.380 10:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:26.380 10:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:26.380 10:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MzY0YTQyOTAxY2I3ZTNmNTkwOGE5M2Y0NDEzOWU5Y2U3ZjNlMWFlYTFhODBmMmE3up7JUQ==: 00:27:26.380 10:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTExY2E4MTk2Mjk4YzdkMTRiMDcyYTRlNzZiNjNiNTDQl/Qu: ]] 00:27:26.380 10:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTExY2E4MTk2Mjk4YzdkMTRiMDcyYTRlNzZiNjNiNTDQl/Qu: 00:27:26.380 10:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:27:26.380 10:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:26.380 10:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:26.380 10:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:26.380 10:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:27:26.380 10:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:26.380 10:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:26.380 10:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:26.380 10:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:26.380 10:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:26.380 10:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:26.380 10:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:26.380 10:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:26.380 10:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:26.380 10:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:26.380 10:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:26.380 10:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:26.380 10:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:26.380 10:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:26.380 10:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:26.380 10:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:26.380 10:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:27:26.380 10:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:26.380 
10:16:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.314 nvme0n1 00:27:27.314 10:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:27.314 10:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:27.314 10:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:27.314 10:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.314 10:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:27.314 10:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:27.314 10:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:27.314 10:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:27.314 10:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:27.314 10:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.314 10:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:27.314 10:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:27:27.314 10:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:27:27.314 10:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:27.314 10:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:27:27.314 10:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:27:27.314 10:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:27:27.314 10:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:M2U1MmMyMjIwZmRiYWM3OWQ5YmYzNmM3MjNhMmIwNmQ2NWU4YjE1NDE4MDBiMDNlZmRmMTkxOGNlZWRmNDgzZtUKgKU=: 00:27:27.314 10:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:27:27.314 10:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:27:27.314 10:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:27:27.314 10:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:M2U1MmMyMjIwZmRiYWM3OWQ5YmYzNmM3MjNhMmIwNmQ2NWU4YjE1NDE4MDBiMDNlZmRmMTkxOGNlZWRmNDgzZtUKgKU=: 00:27:27.314 10:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:27:27.314 10:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:27:27.314 10:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:27.314 10:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:27:27.314 10:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:27:27.314 10:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:27:27.314 10:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:27.314 10:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:27:27.314 10:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:27:27.314 10:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:27.314 10:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:27.314 10:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:27.314 10:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:27.314 10:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:27.314 10:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:27.314 10:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:27.314 10:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:27.314 10:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:27.314 10:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:27.314 10:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:27.314 10:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:27.314 10:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:27.314 10:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:27.314 10:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:27.314 10:16:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.688 nvme0n1 00:27:28.688 10:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:28.688 10:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:27:28.688 10:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:28.688 10:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.688 10:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:27:28.688 10:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:28.688 10:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:28.688 10:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:28.688 10:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:28.688 10:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.688 10:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:28.688 10:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:27:28.688 10:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:28.688 10:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:28.688 10:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:28.688 10:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 
-- # keyid=1 00:27:28.688 10:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2YyYWE5YjI0NTkxZGIxYTZmNzA0OGE4NjFiNjRjMWYzOGM4ZTEzNjE2Yjg1M2QzK+wtZA==: 00:27:28.688 10:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NTE1MjUxODRjZTQ0ZWYzN2MwYjM5NmIwOTM0MWE0MDNlNzk4M2IyMWE2NTA2YTEw2ACdyA==: 00:27:28.688 10:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:28.688 10:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:27:28.688 10:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2YyYWE5YjI0NTkxZGIxYTZmNzA0OGE4NjFiNjRjMWYzOGM4ZTEzNjE2Yjg1M2QzK+wtZA==: 00:27:28.688 10:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NTE1MjUxODRjZTQ0ZWYzN2MwYjM5NmIwOTM0MWE0MDNlNzk4M2IyMWE2NTA2YTEw2ACdyA==: ]] 00:27:28.688 10:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NTE1MjUxODRjZTQ0ZWYzN2MwYjM5NmIwOTM0MWE0MDNlNzk4M2IyMWE2NTA2YTEw2ACdyA==: 00:27:28.688 10:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:28.688 10:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:28.688 10:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.688 10:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:28.688 10:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:27:28.688 10:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:28.688 10:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:28.688 10:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:28.688 10:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:28.688 10:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:28.688 10:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:28.688 10:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:28.688 10:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:28.688 10:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:28.688 10:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:28.688 10:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:27:28.688 10:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:27:28.688 10:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:27:28.688 10:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:27:28.688 10:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:28.688 10:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@642 -- # type -t rpc_cmd 00:27:28.688 10:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:28.688 10:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:27:28.688 10:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:28.688 10:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.688 request: 00:27:28.688 { 00:27:28.688 "name": "nvme0", 00:27:28.688 "trtype": "tcp", 00:27:28.688 "traddr": "10.0.0.1", 00:27:28.688 "adrfam": "ipv4", 00:27:28.688 "trsvcid": "4420", 00:27:28.688 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:27:28.688 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:27:28.688 "prchk_reftag": false, 00:27:28.688 "prchk_guard": false, 00:27:28.688 "hdgst": false, 00:27:28.688 "ddgst": false, 00:27:28.688 "method": "bdev_nvme_attach_controller", 00:27:28.688 "req_id": 1 00:27:28.688 } 00:27:28.688 Got JSON-RPC error response 00:27:28.688 response: 00:27:28.688 { 00:27:28.688 "code": -5, 00:27:28.688 "message": "Input/output error" 00:27:28.688 } 00:27:28.688 10:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:27:28.688 10:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:27:28.688 10:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:27:28.688 10:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:27:28.688 10:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:27:28.688 10:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:27:28.688 10:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:28.688 10:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.688 10:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:27:28.688 10:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:28.688 10:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:27:28.688 10:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:27:28.688 10:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:28.688 10:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:28.688 10:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:28.688 10:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:28.688 10:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:28.688 10:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:28.688 10:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:28.688 10:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:28.688 10:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:28.689 10:16:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:28.689 10:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:27:28.689 10:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:27:28.689 10:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:27:28.689 10:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:27:28.689 10:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:28.689 10:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:27:28.689 10:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:28.689 10:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:27:28.689 10:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:28.689 10:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.947 request: 00:27:28.947 { 00:27:28.947 "name": "nvme0", 00:27:28.947 "trtype": "tcp", 00:27:28.947 "traddr": "10.0.0.1", 00:27:28.947 "adrfam": "ipv4", 00:27:28.947 "trsvcid": "4420", 00:27:28.947 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:27:28.947 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:27:28.947 "prchk_reftag": false, 00:27:28.947 "prchk_guard": false, 00:27:28.947 "hdgst": false, 00:27:28.947 "ddgst": false, 00:27:28.947 "dhchap_key": "key2", 00:27:28.947 "method": "bdev_nvme_attach_controller", 00:27:28.947 "req_id": 1 00:27:28.947 } 00:27:28.947 Got JSON-RPC error response 00:27:28.947 response: 00:27:28.947 { 00:27:28.947 "code": -5, 00:27:28.947 "message": "Input/output error" 00:27:28.947 } 00:27:28.947 10:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:27:28.947 10:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:27:28.947 10:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:27:28.947 10:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:27:28.947 10:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:27:28.947 10:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:27:28.947 10:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:28.947 10:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.947 10:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:27:28.947 10:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:28.947 10:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:27:28.947 10:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@123 -- # get_main_ns_ip 00:27:28.947 10:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:27:28.947 10:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:28.947 10:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:28.947 10:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:28.947 10:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:28.947 10:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:28.947 10:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:28.947 10:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:28.947 10:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:28.947 10:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:28.947 10:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:28.947 10:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:27:28.947 10:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:28.947 10:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:27:28.947 10:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:28.947 10:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:27:28.947 10:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:28.947 10:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:27:28.947 10:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:28.947 10:16:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:28.947 request: 00:27:28.947 { 00:27:28.947 "name": "nvme0", 00:27:28.947 "trtype": "tcp", 00:27:28.947 "traddr": "10.0.0.1", 00:27:28.947 "adrfam": "ipv4", 00:27:28.947 "trsvcid": "4420", 00:27:28.947 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:27:28.947 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:27:28.947 "prchk_reftag": false, 00:27:28.947 "prchk_guard": false, 00:27:28.947 "hdgst": false, 00:27:28.947 "ddgst": false, 00:27:28.947 "dhchap_key": "key1", 00:27:28.947 "dhchap_ctrlr_key": "ckey2", 00:27:28.947 "method": "bdev_nvme_attach_controller", 00:27:28.947 "req_id": 1 00:27:28.947 } 00:27:28.947 Got JSON-RPC error response 00:27:28.947 response: 00:27:28.947 { 00:27:28.947 "code": -5, 00:27:28.947 "message": "Input/output error" 00:27:28.947 } 00:27:28.947 10:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:27:28.947 10:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:27:28.947 10:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:27:28.947 10:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:27:28.947 10:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:27:28.947 10:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:27:28.947 10:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:27:28.947 10:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:27:28.947 10:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:28.947 10:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:27:28.947 10:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:28.947 10:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:27:28.947 10:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:28.947 10:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:28.947 rmmod nvme_tcp 00:27:28.947 rmmod nvme_fabrics 00:27:28.947 10:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:28.947 10:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:27:28.947 10:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 00:27:28.947 10:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 531981 ']' 00:27:28.947 10:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 531981 00:27:28.947 10:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@950 -- # '[' -z 531981 ']' 00:27:28.947 10:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # kill -0 531981 00:27:28.947 10:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@955 -- # uname 00:27:28.947 10:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:28.947 10:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 531981 00:27:28.947 10:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:27:28.947 10:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:27:28.947 10:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@968 -- # echo 'killing process with pid 531981' 00:27:28.947 killing process with pid 531981 00:27:28.947 10:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@969 -- # kill 531981 00:27:28.947 10:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@974 -- # wait 531981 00:27:29.206 10:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:29.206 10:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:29.206 10:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:29.206 10:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:29.206 10:16:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:29.206 10:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:29.206 10:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:29.206 10:16:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:31.801 10:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:31.801 10:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:27:31.802 10:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:27:31.802 10:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:27:31.802 10:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:27:31.802 10:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:27:31.802 10:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:31.802 10:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:27:31.802 10:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:27:31.802 10:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:31.802 10:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:27:31.802 10:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:27:31.802 10:16:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:27:32.735 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:27:32.735 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:27:32.735 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:27:32.735 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:27:32.735 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:27:32.735 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:27:32.735 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:27:32.735 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:27:32.735 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:27:32.735 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:27:32.992 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:27:32.992 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:27:32.992 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:27:32.992 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:27:32.992 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:27:32.992 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:27:33.927 0000:82:00.0 (8086 0a54): nvme -> vfio-pci 00:27:33.927 10:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.E0v /tmp/spdk.key-null.lrr /tmp/spdk.key-sha256.8Cu /tmp/spdk.key-sha384.kYO /tmp/spdk.key-sha512.023 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:27:33.927 10:16:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:27:35.301 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:27:35.301 0000:82:00.0 (8086 0a54): Already using the vfio-pci driver 00:27:35.301 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:27:35.301 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:27:35.301 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:27:35.301 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:27:35.301 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:27:35.301 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:27:35.301 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:27:35.301 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:27:35.301 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:27:35.301 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:27:35.301 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:27:35.302 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:27:35.302 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:27:35.302 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:27:35.302 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:27:35.561 00:27:35.561 real 0m57.881s 00:27:35.561 user 0m56.426s 00:27:35.561 sys 0m6.871s 00:27:35.561 10:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:35.561 10:16:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.561 ************************************ 00:27:35.561 END TEST nvmf_auth_host 00:27:35.561 ************************************ 00:27:35.561 10:16:20 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:27:35.561 10:16:20 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:27:35.561 10:16:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:27:35.561 10:16:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:35.561 10:16:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.561 ************************************ 00:27:35.561 START TEST nvmf_digest 00:27:35.561 ************************************ 00:27:35.561 10:16:20 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:27:35.561 * Looking for test storage... 
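The host/auth.sh cleanup traced above dismantles the kernel nvmet target by unwinding its configfs tree in reverse order of creation; child entries must go before their parents or the rmdir calls fail with EBUSY. A minimal sketch of that sequence, with the NQNs and port number taken from the trace (the target of the bare 'echo 0' is assumed to be the namespace enable attribute; treat this as a reconstruction, not the verbatim script):

    # Reconstructed nvmet configfs teardown; cfs/nqn are local shorthands, not test variables.
    cfs=/sys/kernel/config/nvmet
    nqn=nqn.2024-02.io.spdk:cnode0
    rm "$cfs/subsystems/$nqn/allowed_hosts/nqn.2024-02.io.spdk:host0"  # unlink host from subsystem
    rmdir "$cfs/hosts/nqn.2024-02.io.spdk:host0"                       # drop the host entry
    echo 0 > "$cfs/subsystems/$nqn/namespaces/1/enable"                # assumed: disable the namespace first
    rm -f "$cfs/ports/1/subsystems/$nqn"                               # detach subsystem from port 1
    rmdir "$cfs/subsystems/$nqn/namespaces/1"
    rmdir "$cfs/ports/1"
    rmdir "$cfs/subsystems/$nqn"
    modprobe -r nvmet_tcp nvmet                                        # unload once configfs is empty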
00:27:35.561 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:35.561 10:16:20 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:35.561 10:16:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:27:35.561 10:16:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:35.561 10:16:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:35.561 10:16:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:35.561 10:16:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:35.561 10:16:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:35.561 10:16:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:35.561 10:16:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:35.561 10:16:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:35.561 10:16:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:35.561 10:16:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:35.561 10:16:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:27:35.561 10:16:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:27:35.561 10:16:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:35.561 10:16:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:35.561 10:16:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:35.561 10:16:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:35.561 10:16:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:35.561 10:16:20 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:35.561 10:16:20 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:35.561 10:16:20 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:35.561 10:16:20 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:35.561 10:16:20 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:35.561 10:16:20 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:35.561 10:16:20 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:27:35.561 10:16:20 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:35.561 10:16:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:27:35.561 10:16:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:35.561 10:16:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:35.561 10:16:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:35.561 10:16:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:35.561 10:16:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:35.561 10:16:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:35.561 10:16:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:35.561 10:16:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:35.561 10:16:20 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:27:35.561 10:16:20 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:27:35.561 10:16:20 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:27:35.561 10:16:20 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:27:35.561 10:16:20 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:27:35.561 10:16:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:35.561 
10:16:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:35.561 10:16:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:35.561 10:16:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:35.561 10:16:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:35.561 10:16:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:35.561 10:16:20 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:35.561 10:16:20 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:35.561 10:16:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:35.561 10:16:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:35.561 10:16:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@285 -- # xtrace_disable 00:27:35.561 10:16:20 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:27:38.093 10:16:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:38.093 10:16:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # pci_devs=() 00:27:38.093 10:16:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:38.093 10:16:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:38.093 10:16:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:38.093 10:16:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:38.093 10:16:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:38.093 10:16:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@295 -- # net_devs=() 00:27:38.093 10:16:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:38.093 10:16:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@296 -- # e810=() 00:27:38.093 10:16:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@296 -- # local -ga e810 00:27:38.093 10:16:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # x722=() 00:27:38.094 10:16:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # local -ga x722 00:27:38.094 10:16:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # mlx=() 00:27:38.094 10:16:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # local -ga mlx 00:27:38.094 10:16:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:38.094 10:16:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:38.094 10:16:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:38.094 10:16:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:38.094 10:16:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:38.094 10:16:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:38.094 10:16:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:38.094 10:16:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@314 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:38.094 10:16:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:38.094 10:16:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:38.094 10:16:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:38.094 10:16:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:38.094 10:16:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:38.094 10:16:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:38.094 10:16:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:38.094 10:16:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:38.094 10:16:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:38.094 10:16:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:38.094 10:16:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:27:38.094 Found 0000:84:00.0 (0x8086 - 0x159b) 00:27:38.094 10:16:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:38.094 10:16:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:38.094 10:16:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:38.094 10:16:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:38.094 10:16:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:38.094 10:16:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:38.094 10:16:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:27:38.094 Found 0000:84:00.1 (0x8086 - 0x159b) 00:27:38.094 10:16:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:38.094 10:16:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:38.094 10:16:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:38.094 10:16:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:38.094 10:16:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:38.094 10:16:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:38.094 10:16:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:38.094 10:16:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:38.094 10:16:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:38.094 10:16:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:38.094 10:16:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:38.094 10:16:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:38.094 10:16:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:38.094 10:16:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:38.094 
10:16:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:38.094 10:16:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:27:38.094 Found net devices under 0000:84:00.0: cvl_0_0 00:27:38.094 10:16:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:38.094 10:16:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:38.094 10:16:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:38.094 10:16:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:38.094 10:16:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:38.094 10:16:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:38.094 10:16:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:38.094 10:16:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:38.094 10:16:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:27:38.094 Found net devices under 0000:84:00.1: cvl_0_1 00:27:38.094 10:16:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:38.094 10:16:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:38.094 10:16:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@414 -- # is_hw=yes 00:27:38.094 10:16:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:38.094 10:16:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:38.094 10:16:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:38.094 10:16:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:38.094 10:16:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:38.094 10:16:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:38.094 10:16:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:38.094 10:16:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:38.094 10:16:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:38.094 10:16:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:38.094 10:16:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:38.094 10:16:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:38.094 10:16:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:38.094 10:16:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:38.094 10:16:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:38.094 10:16:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:38.094 10:16:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:38.094 10:16:23 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:38.094 10:16:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:38.094 10:16:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:38.094 10:16:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:38.094 10:16:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:38.094 10:16:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:38.094 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:38.094 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.121 ms 00:27:38.094 00:27:38.094 --- 10.0.0.2 ping statistics --- 00:27:38.094 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:38.094 rtt min/avg/max/mdev = 0.121/0.121/0.121/0.000 ms 00:27:38.094 10:16:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:38.094 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:38.094 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.139 ms 00:27:38.094 00:27:38.094 --- 10.0.0.1 ping statistics --- 00:27:38.094 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:38.094 rtt min/avg/max/mdev = 0.139/0.139/0.139/0.000 ms 00:27:38.094 10:16:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:38.094 10:16:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # return 0 00:27:38.094 10:16:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:38.094 10:16:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:38.094 10:16:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:38.094 10:16:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:38.094 10:16:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:38.094 10:16:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:38.094 10:16:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:38.094 10:16:23 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:27:38.094 10:16:23 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:27:38.094 10:16:23 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:27:38.094 10:16:23 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:27:38.094 10:16:23 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:38.094 10:16:23 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:27:38.094 ************************************ 00:27:38.094 START TEST nvmf_digest_clean 00:27:38.094 ************************************ 00:27:38.094 10:16:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1125 -- # run_digest 00:27:38.094 10:16:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:27:38.094 10:16:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:27:38.094 10:16:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:27:38.094 10:16:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:27:38.094 10:16:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:27:38.094 10:16:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:38.095 10:16:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:38.095 10:16:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:38.095 10:16:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=542390 00:27:38.095 10:16:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:27:38.095 10:16:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 542390 00:27:38.095 10:16:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 542390 ']' 00:27:38.095 10:16:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:38.095 10:16:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:38.095 10:16:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:38.095 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:38.095 10:16:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:38.095 10:16:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:38.352 [2024-07-25 10:16:23.306662] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:27:38.352 [2024-07-25 10:16:23.306844] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:38.352 EAL: No free 2048 kB hugepages reported on node 1 00:27:38.352 [2024-07-25 10:16:23.432736] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:38.609 [2024-07-25 10:16:23.556286] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:38.609 [2024-07-25 10:16:23.556354] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:38.609 [2024-07-25 10:16:23.556371] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:38.609 [2024-07-25 10:16:23.556386] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:38.609 [2024-07-25 10:16:23.556399] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
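Worth spelling out the topology that nvmf_tcp_init assembled above: the two ports of the E810 NIC found under 0000:84:00.0/.1 (presumably cabled back to back on this phy rig) become the initiator and target ends of a real TCP path, with the target port hidden inside a private network namespace so both stacks can coexist on one host. Condensed from the trace into a standalone sketch:

    # Loopback NVMe/TCP topology, as set up by nvmf_tcp_init in the trace above.
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk                  # private namespace for the target side
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk     # move the target-facing port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1           # initiator address, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP from the target side
    ping -c 1 10.0.0.2                            # sanity-check both directions, as logged
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

This is also why nvmf_tgt is launched as 'ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt ...' above: every target-side command has to run inside that namespace.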
00:27:38.609 [2024-07-25 10:16:23.556444] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:38.609 10:16:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:38.609 10:16:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:27:38.609 10:16:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:38.609 10:16:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:38.609 10:16:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:38.609 10:16:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:38.609 10:16:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:27:38.609 10:16:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:27:38.609 10:16:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:27:38.609 10:16:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:38.609 10:16:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:38.866 null0 00:27:38.866 [2024-07-25 10:16:23.879735] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:38.866 [2024-07-25 10:16:23.903996] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:38.866 10:16:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:38.866 10:16:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:27:38.866 10:16:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:27:38.866 10:16:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:27:38.866 10:16:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:27:38.866 10:16:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:27:38.866 10:16:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:27:38.866 10:16:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:27:38.866 10:16:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=542533 00:27:38.866 10:16:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 542533 /var/tmp/bperf.sock 00:27:38.866 10:16:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 542533 ']' 00:27:38.866 10:16:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:38.866 10:16:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:38.866 10:16:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 
4096 -t 2 -q 128 -z --wait-for-rpc 00:27:38.866 10:16:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:38.866 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:38.866 10:16:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:38.866 10:16:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:38.866 [2024-07-25 10:16:23.978050] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:27:38.866 [2024-07-25 10:16:23.978135] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid542533 ] 00:27:38.866 EAL: No free 2048 kB hugepages reported on node 1 00:27:39.124 [2024-07-25 10:16:24.046671] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:39.124 [2024-07-25 10:16:24.168214] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:39.124 10:16:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:39.124 10:16:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:27:39.124 10:16:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:27:39.124 10:16:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:27:39.124 10:16:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:27:40.055 10:16:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:40.055 10:16:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:40.312 nvme0n1 00:27:40.569 10:16:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:27:40.569 10:16:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:40.569 Running I/O for 2 seconds... 
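The run_bperf helper drives bdevperf entirely over its RPC socket instead of a pre-written config file: the -z flag keeps the app idle after startup, and --wait-for-rpc defers even subsystem init until framework_start_init arrives. A sketch of the flow just traced, with $SPDK standing in for this workspace's spdk checkout:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    SOCK=/var/tmp/bperf.sock

    # 1. Start bdevperf idle, listening for RPC-driven configuration.
    "$SPDK/build/examples/bdevperf" -m 2 -r "$SOCK" -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &

    # 2. Finish init, then attach the in-namespace target with data digest enabled.
    "$SPDK/scripts/rpc.py" -s "$SOCK" framework_start_init
    "$SPDK/scripts/rpc.py" -s "$SOCK" bdev_nvme_attach_controller --ddgst \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

    # 3. Kick off the timed randread run against the resulting nvme0n1 bdev.
    "$SPDK/examples/bdev/bdevperf/bdevperf.py" -s "$SOCK" perform_tests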
00:27:43.097
00:27:43.097 Latency(us)
00:27:43.097 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:43.097 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:27:43.097 nvme0n1 : 2.01 19800.01 77.34 0.00 0.00 6455.56 3155.44 15922.82
00:27:43.097 ===================================================================================================================
00:27:43.097 Total : 19800.01 77.34 0.00 0.00 6455.56 3155.44 15922.82
00:27:43.097 0
00:27:43.097 10:16:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed
00:27:43.097 10:16:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats
00:27:43.097 10:16:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats
00:27:43.097 10:16:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[]
00:27:43.097 | select(.opcode=="crc32c")
00:27:43.097 | "\(.module_name) \(.executed)"'
00:27:43.097 10:16:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats
00:27:43.097 10:16:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false
00:27:43.097 10:16:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software
00:27:43.097 10:16:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 ))
00:27:43.097 10:16:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:27:43.097 10:16:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 542533
00:27:43.097 10:16:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 542533 ']'
00:27:43.097 10:16:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 542533
00:27:43.097 10:16:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname
00:27:43.097 10:16:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:27:43.097 10:16:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 542533
00:27:43.097 10:16:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:27:43.097 10:16:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:27:43.097 10:16:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 542533'
00:27:43.097 killing process with pid 542533
00:27:43.097 10:16:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 542533
00:27:43.097 Received shutdown signal, test time was about 2.000000 seconds
00:27:43.097
00:27:43.097 Latency(us)
00:27:43.097 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:43.097 ===================================================================================================================
00:27:43.097 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:27:43.097 10:16:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean --
common/autotest_common.sh@974 -- # wait 542533 00:27:43.355 10:16:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:27:43.355 10:16:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:27:43.355 10:16:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:27:43.355 10:16:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:27:43.355 10:16:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:27:43.355 10:16:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:27:43.355 10:16:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:27:43.355 10:16:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=543068 00:27:43.355 10:16:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:27:43.355 10:16:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 543068 /var/tmp/bperf.sock 00:27:43.355 10:16:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 543068 ']' 00:27:43.355 10:16:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:43.355 10:16:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:43.355 10:16:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:43.355 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:43.355 10:16:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:43.355 10:16:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:43.355 [2024-07-25 10:16:28.452425] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:27:43.355 [2024-07-25 10:16:28.452526] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid543068 ] 00:27:43.355 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:43.355 Zero copy mechanism will not be used. 
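Each run is judged by the accel framework's own counters rather than by bdevperf: the @36-@96 records after the first Latency table query accel_get_stats over the bperf socket and require that crc32c executed at least once in the expected module (software here, since scan_dsa=false). A condensed sketch of that check, reusing the jq filter from the trace and the $SPDK/$SOCK variables from the sketch above:

    # get_accel_stats plus the @94-@96 assertions, condensed.
    read -r acc_module acc_executed < <(
        "$SPDK/scripts/rpc.py" -s "$SOCK" accel_get_stats |
        jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"')
    (( acc_executed > 0 )) || exit 1         # digest work actually happened
    [[ $acc_module == software ]] || exit 1  # and ran in the software module, not DSA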
00:27:43.355 EAL: No free 2048 kB hugepages reported on node 1 00:27:43.355 [2024-07-25 10:16:28.520922] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:43.613 [2024-07-25 10:16:28.642972] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:43.613 10:16:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:43.613 10:16:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:27:43.613 10:16:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:27:43.613 10:16:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:27:43.613 10:16:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:27:44.177 10:16:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:44.177 10:16:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:44.742 nvme0n1 00:27:44.742 10:16:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:27:44.742 10:16:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:44.999 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:44.999 Zero copy mechanism will not be used. 00:27:44.999 Running I/O for 2 seconds... 
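This is the second of four bperf passes in nvmf_digest_clean: digest.sh@128-@131 sweep both directions against a small-block/deep-queue shape and a large-block/shallow-queue shape. Condensed from the run_bperf records in this trace (run_bperf is the helper defined in host/digest.sh; the sketch assumes it is sourced):

    # The four run_bperf invocations of this suite (args: rw bs qd scan_dsa):
    run_bperf randread  4096   128 false   # digest.sh@128
    run_bperf randread  131072 16  false   # digest.sh@129
    run_bperf randwrite 4096   128 false   # digest.sh@130
    run_bperf randwrite 131072 16  false   # digest.sh@131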
00:27:46.895
00:27:46.895 Latency(us)
00:27:46.895 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:46.895 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:27:46.895 nvme0n1 : 2.00 2493.18 311.65 0.00 0.00 6413.49 1541.31 9757.58
00:27:46.895 ===================================================================================================================
00:27:46.895 Total : 2493.18 311.65 0.00 0.00 6413.49 1541.31 9757.58
00:27:46.895 0
00:27:46.895 10:16:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed
00:27:46.895 10:16:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats
00:27:46.895 10:16:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats
00:27:46.895 10:16:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats
00:27:46.895 10:16:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[]
00:27:46.895 | select(.opcode=="crc32c")
00:27:46.895 | "\(.module_name) \(.executed)"'
00:27:47.460 10:16:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false
00:27:47.460 10:16:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software
00:27:47.460 10:16:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 ))
00:27:47.460 10:16:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:27:47.460 10:16:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 543068
00:27:47.460 10:16:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 543068 ']'
00:27:47.460 10:16:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 543068
00:27:47.460 10:16:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname
00:27:47.460 10:16:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:27:47.460 10:16:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 543068
00:27:47.460 10:16:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:27:47.460 10:16:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:27:47.460 10:16:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 543068'
00:27:47.460 killing process with pid 543068
00:27:47.460 10:16:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 543068
00:27:47.460 Received shutdown signal, test time was about 2.000000 seconds
00:27:47.460
00:27:47.460 Latency(us)
00:27:47.460 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:47.460 ===================================================================================================================
00:27:47.460 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:27:47.460 10:16:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean --
common/autotest_common.sh@974 -- # wait 543068 00:27:47.748 10:16:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:27:47.748 10:16:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:27:47.748 10:16:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:27:47.748 10:16:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:27:47.748 10:16:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:27:47.748 10:16:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:27:47.748 10:16:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:27:48.006 10:16:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=543511 00:27:48.006 10:16:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:27:48.006 10:16:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 543511 /var/tmp/bperf.sock 00:27:48.006 10:16:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 543511 ']' 00:27:48.006 10:16:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:48.006 10:16:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:48.006 10:16:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:48.006 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:48.006 10:16:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:48.006 10:16:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:48.006 [2024-07-25 10:16:32.940201] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:27:48.006 [2024-07-25 10:16:32.940291] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid543511 ] 00:27:48.006 EAL: No free 2048 kB hugepages reported on node 1 00:27:48.006 [2024-07-25 10:16:33.009343] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:48.006 [2024-07-25 10:16:33.135330] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:48.263 10:16:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:48.263 10:16:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:27:48.263 10:16:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:27:48.263 10:16:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:27:48.263 10:16:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:27:48.827 10:16:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:48.827 10:16:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:49.390 nvme0n1 00:27:49.390 10:16:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:27:49.390 10:16:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:49.390 Running I/O for 2 seconds... 
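The Latency(us) tables printed after each run follow a fixed bdevperf layout: runtime, IOPS, MiB/s, Fail/s, TO/s, then average/min/max latency. When eyeballing many of these logs, a throwaway extraction helps; a sketch, assuming the one-record-per-line layout shown above and a saved copy of this console output named build.log (hypothetical filename):

    # Pull device rows like "nvme0n1 : 2.01 19800.01 77.34 ..." out of the log.
    # Fields after the elapsed-time stamp: name, ":", runtime, IOPS, MiB/s,
    # Fail/s, TO/s, avg, min, max.
    grep -E ' nvme0n1 : ' build.log |
        awk '{printf "runtime=%s iops=%s mibs=%s avg_lat_us=%s\n", $4, $5, $6, $9}'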
00:27:51.917
00:27:51.917 Latency(us)
00:27:51.917 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:51.917 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:27:51.917 nvme0n1 : 2.00 20154.77 78.73 0.00 0.00 6343.75 3070.48 10437.21
00:27:51.917 ===================================================================================================================
00:27:51.917 Total : 20154.77 78.73 0.00 0.00 6343.75 3070.48 10437.21
00:27:51.917 0
00:27:51.917 10:16:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed
00:27:51.917 10:16:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats
00:27:51.917 10:16:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats
00:27:51.917 10:16:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats
00:27:51.917 10:16:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[]
00:27:51.917 | select(.opcode=="crc32c")
00:27:51.917 | "\(.module_name) \(.executed)"'
00:27:51.917 10:16:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false
00:27:51.917 10:16:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software
00:27:51.917 10:16:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 ))
00:27:51.917 10:16:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:27:51.917 10:16:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 543511
00:27:51.917 10:16:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 543511 ']'
00:27:51.917 10:16:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 543511
00:27:51.917 10:16:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname
00:27:51.917 10:16:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:27:51.917 10:16:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 543511
00:27:51.917 10:16:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:27:51.917 10:16:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:27:51.917 10:16:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 543511'
00:27:51.917 killing process with pid 543511
00:27:51.917 10:16:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 543511
00:27:51.918 Received shutdown signal, test time was about 2.000000 seconds
00:27:51.918
00:27:51.918 Latency(us)
00:27:51.918 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:51.918 ===================================================================================================================
00:27:51.918 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:27:51.918 10:16:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean --
common/autotest_common.sh@974 -- # wait 543511 00:27:52.175 10:16:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:27:52.175 10:16:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:27:52.175 10:16:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:27:52.175 10:16:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:27:52.175 10:16:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:27:52.175 10:16:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:27:52.175 10:16:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:27:52.175 10:16:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=544015 00:27:52.175 10:16:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:27:52.175 10:16:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 544015 /var/tmp/bperf.sock 00:27:52.175 10:16:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 544015 ']' 00:27:52.175 10:16:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:52.175 10:16:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:52.175 10:16:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:52.175 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:52.175 10:16:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:52.175 10:16:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:52.175 [2024-07-25 10:16:37.184630] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:27:52.175 [2024-07-25 10:16:37.184730] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid544015 ] 00:27:52.175 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:52.175 Zero copy mechanism will not be used. 
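Every pass above tears its bdevperf down through the same killprocess helper, and the @950-@969 records trace its logic step by step. Reconstructed from those records (a sketch of the traced behavior, not the authoritative autotest_common.sh source):

    killprocess() {
        local pid=$1 process_name
        [ -z "$pid" ] && return 1                             # @950: no pid given
        kill -0 "$pid" || return 1                            # @954: already gone?
        if [ "$(uname)" = Linux ]; then                       # @955
            process_name=$(ps --no-headers -o comm= "$pid")   # @956
        fi
        # @960: a bare sudo wrapper takes a different kill path; the runs in
        # this log always see reactor_N here, so that branch is not exercised
        if [ "$process_name" != sudo ]; then
            echo "killing process with pid $pid"              # @968
            kill "$pid"                                       # @969
        fi
    }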
00:27:52.175 EAL: No free 2048 kB hugepages reported on node 1 00:27:52.175 [2024-07-25 10:16:37.266000] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:52.432 [2024-07-25 10:16:37.386863] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:52.432 10:16:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:52.432 10:16:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:27:52.432 10:16:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:27:52.432 10:16:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:27:52.432 10:16:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:27:52.996 10:16:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:52.996 10:16:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:53.561 nvme0n1 00:27:53.561 10:16:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:27:53.561 10:16:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:53.561 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:53.561 Zero copy mechanism will not be used. 00:27:53.561 Running I/O for 2 seconds... 
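This final pass pushes large writes through the same digest machinery: on writes the host generates the data digest for each payload it sends and the target verifies it, so the crc32c counter read back over the bperf socket still has to move. Relative to the read runs, only the bdevperf workload flags change (invocation exactly as traced above):

    # Final sweep entry (digest.sh@131): large-block, shallow-queue writes.
    "$SPDK/build/examples/bdevperf" -m 2 -r "$SOCK" -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc &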
00:27:56.086
00:27:56.086 Latency(us)
00:27:56.086 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:56.086 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:27:56.086 nvme0n1 : 2.01 3043.22 380.40 0.00 0.00 5244.27 3640.89 9223.59
00:27:56.086 ===================================================================================================================
00:27:56.086 Total : 3043.22 380.40 0.00 0.00 5244.27 3640.89 9223.59
00:27:56.086 0
00:27:56.086 10:16:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed
00:27:56.086 10:16:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats
00:27:56.086 10:16:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats
00:27:56.086 10:16:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[]
00:27:56.086 | select(.opcode=="crc32c")
00:27:56.086 | "\(.module_name) \(.executed)"'
00:27:56.086 10:16:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats
00:27:56.086 10:16:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false
00:27:56.086 10:16:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software
00:27:56.086 10:16:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 ))
00:27:56.086 10:16:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:27:56.086 10:16:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 544015
00:27:56.086 10:16:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 544015 ']'
00:27:56.086 10:16:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 544015
00:27:56.086 10:16:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname
00:27:56.086 10:16:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:27:56.086 10:16:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 544015
00:27:56.086 10:16:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:27:56.086 10:16:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:27:56.086 10:16:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 544015'
00:27:56.086 killing process with pid 544015
00:27:56.086 10:16:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 544015
00:27:56.086 Received shutdown signal, test time was about 2.000000 seconds
00:27:56.086
00:27:56.086 Latency(us)
00:27:56.086 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:56.086 ===================================================================================================================
00:27:56.086 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:27:56.086 10:16:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean --
common/autotest_common.sh@974 -- # wait 544015 00:27:56.344 10:16:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 542390 00:27:56.344 10:16:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 542390 ']' 00:27:56.344 10:16:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 542390 00:27:56.344 10:16:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:27:56.344 10:16:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:56.344 10:16:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 542390 00:27:56.344 10:16:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:27:56.344 10:16:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:27:56.344 10:16:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 542390' 00:27:56.344 killing process with pid 542390 00:27:56.344 10:16:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 542390 00:27:56.344 10:16:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 542390 00:27:56.602 00:27:56.602 real 0m18.447s 00:27:56.602 user 0m37.805s 00:27:56.602 sys 0m5.134s 00:27:56.602 10:16:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:56.602 10:16:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:56.602 ************************************ 00:27:56.602 END TEST nvmf_digest_clean 00:27:56.602 ************************************ 00:27:56.602 10:16:41 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:27:56.602 10:16:41 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:27:56.602 10:16:41 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:56.602 10:16:41 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:27:56.602 ************************************ 00:27:56.602 START TEST nvmf_digest_error 00:27:56.602 ************************************ 00:27:56.602 10:16:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1125 -- # run_digest_error 00:27:56.602 10:16:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:27:56.602 10:16:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:56.602 10:16:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:56.602 10:16:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:56.602 10:16:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=544577 00:27:56.602 10:16:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:27:56.603 10:16:41 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 544577 00:27:56.603 10:16:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 544577 ']' 00:27:56.603 10:16:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:56.603 10:16:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:56.603 10:16:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:56.603 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:56.603 10:16:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:56.603 10:16:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:56.603 [2024-07-25 10:16:41.755709] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:27:56.603 [2024-07-25 10:16:41.755813] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:56.860 EAL: No free 2048 kB hugepages reported on node 1 00:27:56.860 [2024-07-25 10:16:41.831833] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:56.860 [2024-07-25 10:16:41.950898] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:56.860 [2024-07-25 10:16:41.950965] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:56.860 [2024-07-25 10:16:41.950982] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:56.860 [2024-07-25 10:16:41.950997] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:56.860 [2024-07-25 10:16:41.951009] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
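nvmf_digest_error restarts the target with --wait-for-rpc precisely so that crc32c can be re-routed into the error-injection accel module before the framework finishes initializing; the accel_assign_opc notice follows in the records below. A sketch of that ordering, using the netns wrapper and flags from the trace (the explicit framework_start_init step is implied by --wait-for-rpc rather than shown verbatim in this excerpt):

    # Target-side bring-up for the error suite.
    ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF --wait-for-rpc &
    "$SPDK/scripts/rpc.py" accel_assign_opc -o crc32c -m error   # route crc32c to the error module
    "$SPDK/scripts/rpc.py" framework_start_init                  # then finish initialization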
00:27:56.860 [2024-07-25 10:16:41.951048] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:57.118 10:16:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:57.118 10:16:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:27:57.118 10:16:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:57.118 10:16:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:57.118 10:16:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:57.118 10:16:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:57.118 10:16:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:27:57.118 10:16:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:57.118 10:16:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:57.118 [2024-07-25 10:16:42.067764] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:27:57.118 10:16:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:57.118 10:16:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:27:57.118 10:16:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:27:57.118 10:16:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:57.118 10:16:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:57.118 null0 00:27:57.118 [2024-07-25 10:16:42.189536] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:57.118 [2024-07-25 10:16:42.213787] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:57.118 10:16:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:57.118 10:16:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:27:57.118 10:16:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:27:57.118 10:16:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:27:57.118 10:16:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:27:57.118 10:16:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:27:57.118 10:16:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=544703 00:27:57.118 10:16:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 544703 /var/tmp/bperf.sock 00:27:57.118 10:16:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:27:57.118 10:16:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 544703 ']' 
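Note that this suite launches bdevperf without --wait-for-rpc and instead shapes error behavior through RPCs, as the records below show: NVMe error counters on and infinite bdev retries (so injected digest failures surface as retried I/O rather than a dead run), injection disabled for a clean attach, then crc32c corruption armed. Condensed sketch; in the harness, bperf_rpc talks to /var/tmp/bperf.sock while rpc_cmd talks to the nvmf_tgt inside the cvl_0_0_ns_spdk namespace, which the $TGT form below simplifies:

    BPERF="$SPDK/scripts/rpc.py -s /var/tmp/bperf.sock"   # host-side bdevperf RPC
    TGT="$SPDK/scripts/rpc.py"                            # target RPC (rpc_cmd; default socket)
    $BPERF bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    $TGT accel_error_inject_error -o crc32c -t disable    # target: no injection during attach
    $BPERF bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    $TGT accel_error_inject_error -o crc32c -t corrupt -i 256   # target: arm crc32c corruption (flags as traced)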
00:27:57.118 10:16:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:57.118 10:16:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:57.118 10:16:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:57.118 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:57.118 10:16:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:57.118 10:16:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:57.118 [2024-07-25 10:16:42.265592] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:27:57.118 [2024-07-25 10:16:42.265671] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid544703 ] 00:27:57.376 EAL: No free 2048 kB hugepages reported on node 1 00:27:57.376 [2024-07-25 10:16:42.332889] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:57.376 [2024-07-25 10:16:42.455669] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:57.633 10:16:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:57.633 10:16:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:27:57.633 10:16:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:57.633 10:16:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:57.890 10:16:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:27:57.890 10:16:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:57.890 10:16:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:57.890 10:16:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:57.890 10:16:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:57.890 10:16:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:58.454 nvme0n1 00:27:58.454 10:16:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:27:58.454 10:16:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:58.454 10:16:43 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:58.454 10:16:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:58.454 10:16:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:27:58.454 10:16:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:58.454 Running I/O for 2 seconds... 00:27:58.454 [2024-07-25 10:16:43.552329] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc82f0) 00:27:58.454 [2024-07-25 10:16:43.552379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12739 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.454 [2024-07-25 10:16:43.552400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:58.454 [2024-07-25 10:16:43.569199] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc82f0) 00:27:58.454 [2024-07-25 10:16:43.569235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:3687 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.454 [2024-07-25 10:16:43.569255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:58.454 [2024-07-25 10:16:43.581097] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc82f0) 00:27:58.454 [2024-07-25 10:16:43.581131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:964 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.454 [2024-07-25 10:16:43.581150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:58.454 [2024-07-25 10:16:43.596073] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc82f0) 00:27:58.454 [2024-07-25 10:16:43.596108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:7841 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.454 [2024-07-25 10:16:43.596136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:58.454 [2024-07-25 10:16:43.610655] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc82f0) 00:27:58.454 [2024-07-25 10:16:43.610690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:15420 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.454 [2024-07-25 10:16:43.610709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:58.711 [2024-07-25 10:16:43.624272] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc82f0) 00:27:58.711 [2024-07-25 10:16:43.624307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:11275 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.711 [2024-07-25 10:16:43.624326] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:58.712 [2024-07-25 10:16:43.638703] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc82f0) 00:27:58.712 [2024-07-25 10:16:43.638738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:5429 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.712 [2024-07-25 10:16:43.638757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:58.712 [2024-07-25 10:16:43.650626] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc82f0) 00:27:58.712 [2024-07-25 10:16:43.650660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:17570 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.712 [2024-07-25 10:16:43.650679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:58.712 [2024-07-25 10:16:43.664314] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc82f0) 00:27:58.712 [2024-07-25 10:16:43.664348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:15245 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.712 [2024-07-25 10:16:43.664367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:58.712 [2024-07-25 10:16:43.679617] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc82f0) 00:27:58.712 [2024-07-25 10:16:43.679653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6508 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.712 [2024-07-25 10:16:43.679672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:58.712 [2024-07-25 10:16:43.691843] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc82f0) 00:27:58.712 [2024-07-25 10:16:43.691878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5917 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.712 [2024-07-25 10:16:43.691897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:58.712 [2024-07-25 10:16:43.705876] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc82f0) 00:27:58.712 [2024-07-25 10:16:43.705909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:5156 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.712 [2024-07-25 10:16:43.705930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:58.712 [2024-07-25 10:16:43.718974] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc82f0) 00:27:58.712 [2024-07-25 10:16:43.719015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:10736 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.712 
[2024-07-25 10:16:43.719035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:58.712 [2024-07-25 10:16:43.734532] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc82f0) 00:27:58.712 [2024-07-25 10:16:43.734567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:4079 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.712 [2024-07-25 10:16:43.734586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:58.712 [2024-07-25 10:16:43.747715] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc82f0) 00:27:58.712 [2024-07-25 10:16:43.747761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:5741 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.712 [2024-07-25 10:16:43.747779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:58.712 [2024-07-25 10:16:43.759869] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc82f0) 00:27:58.712 [2024-07-25 10:16:43.759904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:2309 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.712 [2024-07-25 10:16:43.759923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:58.712 [2024-07-25 10:16:43.773665] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc82f0) 00:27:58.712 [2024-07-25 10:16:43.773700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:16527 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.712 [2024-07-25 10:16:43.773719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:58.712 [2024-07-25 10:16:43.789613] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc82f0) 00:27:58.712 [2024-07-25 10:16:43.789647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:5993 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.712 [2024-07-25 10:16:43.789665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:58.712 [2024-07-25 10:16:43.802245] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc82f0) 00:27:58.712 [2024-07-25 10:16:43.802280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:25097 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.712 [2024-07-25 10:16:43.802299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:58.712 [2024-07-25 10:16:43.815874] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc82f0) 00:27:58.712 [2024-07-25 10:16:43.815909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:9362 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.712 [2024-07-25 10:16:43.815928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:58.712 [2024-07-25 10:16:43.830879] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc82f0) 00:27:58.712 [2024-07-25 10:16:43.830913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:17794 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.712 [2024-07-25 10:16:43.830932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:58.712 [2024-07-25 10:16:43.843136] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc82f0) 00:27:58.712 [2024-07-25 10:16:43.843169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:20512 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.712 [2024-07-25 10:16:43.843188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:58.712 [2024-07-25 10:16:43.857383] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc82f0) 00:27:58.712 [2024-07-25 10:16:43.857419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:7360 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.712 [2024-07-25 10:16:43.857449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:58.712 [2024-07-25 10:16:43.870693] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc82f0) 00:27:58.712 [2024-07-25 10:16:43.870727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:14218 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.712 [2024-07-25 10:16:43.870745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:58.970 [2024-07-25 10:16:43.883322] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc82f0) 00:27:58.970 [2024-07-25 10:16:43.883357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:11733 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.970 [2024-07-25 10:16:43.883375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:58.970 [2024-07-25 10:16:43.899124] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc82f0) 00:27:58.970 [2024-07-25 10:16:43.899158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:19659 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:58.970 [2024-07-25 10:16:43.899176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:58.970 [2024-07-25 10:16:43.912719] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc82f0) 00:27:58.970 [2024-07-25 10:16:43.912752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:42 nsid:1 lba:16212 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:58.970 [2024-07-25 10:16:43.912770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:58.970 [2024-07-25 10:16:43.926187] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc82f0)
00:27:58.970 [2024-07-25 10:16:43.926219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:18456 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:58.970 [2024-07-25 10:16:43.926238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
[... roughly 115 similar record triplets omitted (each a "data digest error on tqpair=(0xcc82f0)" error followed by a READ command print and a COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion, all on qid:1 with varying cid and lba), timestamps 10:16:43.939 through 10:16:45.513; the iostat query below counts 143 transient transport errors for the full 2-second run ...]
00:28:00.522 [2024-07-25 10:16:45.527826] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xcc82f0)
00:28:00.522 [2024-07-25 10:16:45.527859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:13659 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:00.522 [2024-07-25 10:16:45.527877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:00.522
00:28:00.522 Latency(us)
00:28:00.522 Device Information : runtime(s)       IOPS      MiB/s    Fail/s     TO/s    Average        min        max
00:28:00.522 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:28:00.522 nvme0n1            :       2.00   18328.26      71.59      0.00     0.00    6974.50    3373.89   19709.35
00:28:00.522 ===================================================================================================================
00:28:00.522 Total              :               18328.26      71.59      0.00     0.00    6974.50    3373.89   19709.35
00:28:00.522 0
00:28:00.522 10:16:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:28:00.522 10:16:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:28:00.522 10:16:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:28:00.522 10:16:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:28:00.522 | .driver_specific
00:28:00.522 | .nvme_error
00:28:00.522 | .status_code
00:28:00.522 | .command_transient_transport_error'
00:28:01.086 10:16:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 143 > 0 ))
00:28:01.087 10:16:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 544703
00:28:01.087 10:16:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 544703 ']'
00:28:01.087 10:16:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 544703
00:28:01.087 10:16:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname
00:28:01.087 10:16:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:28:01.087 10:16:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 544703
00:28:01.087 10:16:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:28:01.087 10:16:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:28:01.087 10:16:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 544703'
00:28:01.087 killing process with pid 544703
10:16:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 544703
00:28:01.087 Received shutdown signal, test time was about 2.000000 seconds
00:28:01.087
00:28:01.087 Latency(us)
00:28:01.087 Device Information : runtime(s)       IOPS      MiB/s    Fail/s     TO/s    Average        min        max
00:28:01.087 ===================================================================================================================
00:28:01.087 Total              :                   0.00       0.00      0.00     0.00       0.00       0.00       0.00
10:16:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 544703
00:28:01.344 10:16:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
00:28:01.344 10:16:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:28:01.344 10:16:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
00:28:01.344 10:16:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:28:01.344 10:16:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:28:01.344 10:16:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=545130
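For readers following the xtrace above: the get_transient_errcount and bperf_rpc helpers it expands reduce to roughly the following (a sketch reconstructed from the traced commands only; host/digest.sh in the SPDK tree is the authoritative definition, and $rootdir here is an assumed variable for the checkout path):

    # Sketch only, pieced together from the trace; not the verbatim script.
    bperf_rpc() {
        # Every RPC targets the bdevperf instance's private socket instead
        # of the default SPDK application socket.
        "$rootdir/scripts/rpc.py" -s /var/tmp/bperf.sock "$@"
    }

    get_transient_errcount() {
        # Read per-bdev I/O statistics and pull out the transient transport
        # error counter that bdev_nvme maintains under --nvme-error-stat.
        bperf_rpc bdev_get_iostat -b "$1" \
            | jq -r '.bdevs[0]
                | .driver_specific
                | .nvme_error
                | .status_code
                | .command_transient_transport_error'
    }

The (( 143 > 0 )) evaluation above is the pass criterion for the finished run: at least one injected digest error must have been accounted as a transient transport error.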
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:28:01.344 10:16:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 545130 /var/tmp/bperf.sock 00:28:01.344 10:16:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 545130 ']' 00:28:01.344 10:16:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:01.344 10:16:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:01.344 10:16:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:01.344 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:01.344 10:16:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:01.344 10:16:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:01.344 [2024-07-25 10:16:46.427286] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:28:01.344 [2024-07-25 10:16:46.427372] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid545130 ] 00:28:01.344 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:01.344 Zero copy mechanism will not be used. 00:28:01.344 EAL: No free 2048 kB hugepages reported on node 1 00:28:01.344 [2024-07-25 10:16:46.496159] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:01.602 [2024-07-25 10:16:46.622226] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:01.602 10:16:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:01.602 10:16:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:28:01.602 10:16:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:01.602 10:16:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:02.166 10:16:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:28:02.166 10:16:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:02.166 10:16:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:02.166 10:16:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:02.166 10:16:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:02.166 10:16:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:02.764 nvme0n1 00:28:02.764 10:16:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:28:02.764 10:16:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:02.764 10:16:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:28:02.764 10:16:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:02.764 10:16:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:28:02.764 10:16:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:02.764 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:02.764 Zero copy mechanism will not be used. 00:28:02.764 Running I/O for 2 seconds... 00:28:02.764 [2024-07-25 10:16:47.880560] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd25ec0) 00:28:02.764 [2024-07-25 10:16:47.880622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.764 [2024-07-25 10:16:47.880644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:02.764 [2024-07-25 10:16:47.890928] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd25ec0) 00:28:02.764 [2024-07-25 10:16:47.890964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.764 [2024-07-25 10:16:47.890983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:02.764 [2024-07-25 10:16:47.901617] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd25ec0) 00:28:02.764 [2024-07-25 10:16:47.901653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.764 [2024-07-25 10:16:47.901673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:02.764 [2024-07-25 10:16:47.912190] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd25ec0) 00:28:02.764 [2024-07-25 10:16:47.912226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.764 [2024-07-25 10:16:47.912245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:02.764 [2024-07-25 10:16:47.922584] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd25ec0) 00:28:02.764 [2024-07-25 10:16:47.922619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:15 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:02.764 [2024-07-25 10:16:47.922638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:03.023 [2024-07-25 10:16:47.932596] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd25ec0) 00:28:03.023 [2024-07-25 10:16:47.932630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.023 [2024-07-25 10:16:47.932649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:03.023 [2024-07-25 10:16:47.942877] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd25ec0) 00:28:03.023 [2024-07-25 10:16:47.942911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.023 [2024-07-25 10:16:47.942930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:03.023 [2024-07-25 10:16:47.952994] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd25ec0) 00:28:03.023 [2024-07-25 10:16:47.953036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.023 [2024-07-25 10:16:47.953056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.023 [2024-07-25 10:16:47.963481] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd25ec0) 00:28:03.023 [2024-07-25 10:16:47.963514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.023 [2024-07-25 10:16:47.963533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:03.023 [2024-07-25 10:16:47.973068] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd25ec0) 00:28:03.023 [2024-07-25 10:16:47.973103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.023 [2024-07-25 10:16:47.973122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:03.023 [2024-07-25 10:16:47.983290] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd25ec0) 00:28:03.023 [2024-07-25 10:16:47.983324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.023 [2024-07-25 10:16:47.983342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:03.023 [2024-07-25 10:16:47.993456] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd25ec0) 00:28:03.023 [2024-07-25 10:16:47.993490] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.023 [2024-07-25 10:16:47.993509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.023 [2024-07-25 10:16:48.003658] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd25ec0) 00:28:03.023 [2024-07-25 10:16:48.003691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.023 [2024-07-25 10:16:48.003710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:03.023 [2024-07-25 10:16:48.013964] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd25ec0) 00:28:03.023 [2024-07-25 10:16:48.013998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.023 [2024-07-25 10:16:48.014017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:03.023 [2024-07-25 10:16:48.024624] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd25ec0) 00:28:03.023 [2024-07-25 10:16:48.024658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.023 [2024-07-25 10:16:48.024684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:03.023 [2024-07-25 10:16:48.035889] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd25ec0) 00:28:03.023 [2024-07-25 10:16:48.035924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.023 [2024-07-25 10:16:48.035950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.023 [2024-07-25 10:16:48.046911] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd25ec0) 00:28:03.023 [2024-07-25 10:16:48.046955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.023 [2024-07-25 10:16:48.046973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:03.023 [2024-07-25 10:16:48.056802] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd25ec0) 00:28:03.023 [2024-07-25 10:16:48.056836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.023 [2024-07-25 10:16:48.056855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:03.023 [2024-07-25 10:16:48.067595] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd25ec0) 
00:28:03.023 [2024-07-25 10:16:48.067629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.023 [2024-07-25 10:16:48.067648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:03.023 [2024-07-25 10:16:48.077329] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd25ec0) 00:28:03.023 [2024-07-25 10:16:48.077363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.023 [2024-07-25 10:16:48.077382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.023 [2024-07-25 10:16:48.088347] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd25ec0) 00:28:03.023 [2024-07-25 10:16:48.088381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.023 [2024-07-25 10:16:48.088400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:03.023 [2024-07-25 10:16:48.099298] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd25ec0) 00:28:03.023 [2024-07-25 10:16:48.099331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.023 [2024-07-25 10:16:48.099350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:03.023 [2024-07-25 10:16:48.110152] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd25ec0) 00:28:03.023 [2024-07-25 10:16:48.110186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.023 [2024-07-25 10:16:48.110205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:03.023 [2024-07-25 10:16:48.120736] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd25ec0) 00:28:03.023 [2024-07-25 10:16:48.120769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.023 [2024-07-25 10:16:48.120788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.023 [2024-07-25 10:16:48.131535] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd25ec0) 00:28:03.023 [2024-07-25 10:16:48.131577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.023 [2024-07-25 10:16:48.131597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:03.023 [2024-07-25 10:16:48.142224] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0xd25ec0) 00:28:03.023 [2024-07-25 10:16:48.142258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.023 [2024-07-25 10:16:48.142276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:03.023 [2024-07-25 10:16:48.152894] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd25ec0) 00:28:03.023 [2024-07-25 10:16:48.152928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.024 [2024-07-25 10:16:48.152946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:03.024 [2024-07-25 10:16:48.163824] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd25ec0) 00:28:03.024 [2024-07-25 10:16:48.163857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.024 [2024-07-25 10:16:48.163876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.024 [2024-07-25 10:16:48.174649] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd25ec0) 00:28:03.024 [2024-07-25 10:16:48.174682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.024 [2024-07-25 10:16:48.174701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:03.024 [2024-07-25 10:16:48.185441] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd25ec0) 00:28:03.024 [2024-07-25 10:16:48.185474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.024 [2024-07-25 10:16:48.185493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:03.282 [2024-07-25 10:16:48.196395] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd25ec0) 00:28:03.282 [2024-07-25 10:16:48.196436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.282 [2024-07-25 10:16:48.196457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:03.282 [2024-07-25 10:16:48.207517] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd25ec0) 00:28:03.282 [2024-07-25 10:16:48.207550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.282 [2024-07-25 10:16:48.207568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.282 [2024-07-25 10:16:48.218336] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd25ec0) 00:28:03.282 [2024-07-25 10:16:48.218369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.282 [2024-07-25 10:16:48.218387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:03.282 [2024-07-25 10:16:48.229189] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd25ec0) 00:28:03.282 [2024-07-25 10:16:48.229223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.282 [2024-07-25 10:16:48.229242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:03.282 [2024-07-25 10:16:48.240064] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd25ec0) 00:28:03.282 [2024-07-25 10:16:48.240098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.282 [2024-07-25 10:16:48.240116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:03.282 [2024-07-25 10:16:48.251379] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd25ec0) 00:28:03.282 [2024-07-25 10:16:48.251412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.282 [2024-07-25 10:16:48.251440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.282 [2024-07-25 10:16:48.261285] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd25ec0) 00:28:03.282 [2024-07-25 10:16:48.261317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.282 [2024-07-25 10:16:48.261336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:03.282 [2024-07-25 10:16:48.271298] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd25ec0) 00:28:03.282 [2024-07-25 10:16:48.271334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.282 [2024-07-25 10:16:48.271353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:03.282 [2024-07-25 10:16:48.282381] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd25ec0) 00:28:03.282 [2024-07-25 10:16:48.282415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.282 [2024-07-25 10:16:48.282442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
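
Each error triplet above is one corrupted read: the host recomputes the CRC32C data digest of the received payload in nvme_tcp_accel_seq_recv_compute_crc32_done, sees a mismatch, and completes the command with COMMAND TRANSIENT TRANSPORT ERROR (sct/sc 00/22), which bumps the per-status-code counter enabled by --nvme-error-stat. A condensed sketch of the sequence host/digest.sh drives to get here, reconstructed from the xtrace lines above (paths are shown relative to the spdk checkout; the sockets, address, and flags are copied from this run's trace, and the un-socketed rpc.py calls presumably reach the target app's default RPC socket, as rpc_cmd does in this harness):

  # Start bdevperf idle (-z) on its own RPC socket; the workload is armed later.
  build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
      -w randread -o 131072 -t 2 -q 16 -z &

  # Tally NVMe errors per status code instead of hiding them.
  scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options \
      --nvme-error-stat --bdev-retry-count -1

  # Keep crc32c injection off (target side) while the controller connects...
  scripts/rpc.py accel_error_inject_error -o crc32c -t disable

  # ...attach with data digest enabled, so every payload carries a CRC32C...
  scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

  # ...then corrupt the target's crc32c results (flags as traced) and run I/O.
  scripts/rpc.py accel_error_inject_error -o crc32c -t corrupt -i 32
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
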
00:28:03.282 [2024-07-25 10:16:48.293963] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd25ec0) 00:28:03.282 [2024-07-25 10:16:48.293998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.282 [2024-07-25 10:16:48.294018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.282 [2024-07-25 10:16:48.306575] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd25ec0) 00:28:03.282 [2024-07-25 10:16:48.306610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.282 [2024-07-25 10:16:48.306629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:03.282 [2024-07-25 10:16:48.319464] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd25ec0) 00:28:03.282 [2024-07-25 10:16:48.319499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.282 [2024-07-25 10:16:48.319526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:03.282 [2024-07-25 10:16:48.331947] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd25ec0) 00:28:03.282 [2024-07-25 10:16:48.331983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.282 [2024-07-25 10:16:48.332002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:03.282 [2024-07-25 10:16:48.345452] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd25ec0) 00:28:03.282 [2024-07-25 10:16:48.345487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.282 [2024-07-25 10:16:48.345506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.282 [2024-07-25 10:16:48.357144] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd25ec0) 00:28:03.282 [2024-07-25 10:16:48.357181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.282 [2024-07-25 10:16:48.357200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:03.282 [2024-07-25 10:16:48.368619] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd25ec0) 00:28:03.282 [2024-07-25 10:16:48.368654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.282 [2024-07-25 10:16:48.368674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:03.283 [2024-07-25 10:16:48.379928] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd25ec0) 00:28:03.283 [2024-07-25 10:16:48.379964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.283 [2024-07-25 10:16:48.379984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:03.283 [2024-07-25 10:16:48.391420] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd25ec0) 00:28:03.283 [2024-07-25 10:16:48.391465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.283 [2024-07-25 10:16:48.391485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.283 [2024-07-25 10:16:48.403137] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd25ec0) 00:28:03.283 [2024-07-25 10:16:48.403177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.283 [2024-07-25 10:16:48.403197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:03.283 [2024-07-25 10:16:48.415933] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd25ec0) 00:28:03.283 [2024-07-25 10:16:48.415971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.283 [2024-07-25 10:16:48.415990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:03.283 [2024-07-25 10:16:48.428233] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd25ec0) 00:28:03.283 [2024-07-25 10:16:48.428278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.283 [2024-07-25 10:16:48.428298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:03.283 [2024-07-25 10:16:48.439773] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd25ec0) 00:28:03.283 [2024-07-25 10:16:48.439808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.283 [2024-07-25 10:16:48.439828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.541 [2024-07-25 10:16:48.450851] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd25ec0) 00:28:03.541 [2024-07-25 10:16:48.450887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.541 [2024-07-25 10:16:48.450907] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:03.541 [2024-07-25 10:16:48.462874] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd25ec0) 00:28:03.541 [2024-07-25 10:16:48.462909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.541 [2024-07-25 10:16:48.462928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:03.541 [2024-07-25 10:16:48.474296] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd25ec0) 00:28:03.541 [2024-07-25 10:16:48.474332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.541 [2024-07-25 10:16:48.474352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:03.541 [2024-07-25 10:16:48.485035] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd25ec0) 00:28:03.541 [2024-07-25 10:16:48.485070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.541 [2024-07-25 10:16:48.485089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.541 [2024-07-25 10:16:48.496537] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd25ec0) 00:28:03.541 [2024-07-25 10:16:48.496573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.541 [2024-07-25 10:16:48.496592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:03.541 [2024-07-25 10:16:48.508705] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd25ec0) 00:28:03.541 [2024-07-25 10:16:48.508740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.541 [2024-07-25 10:16:48.508758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:03.541 [2024-07-25 10:16:48.520227] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd25ec0) 00:28:03.541 [2024-07-25 10:16:48.520265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.541 [2024-07-25 10:16:48.520284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:03.541 [2024-07-25 10:16:48.532099] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd25ec0) 00:28:03.541 [2024-07-25 10:16:48.532133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.541 [2024-07-25 10:16:48.532152] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.541 [2024-07-25 10:16:48.543499] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd25ec0) 00:28:03.541 [2024-07-25 10:16:48.543535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.541 [2024-07-25 10:16:48.543554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:03.541 [2024-07-25 10:16:48.554929] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd25ec0) 00:28:03.541 [2024-07-25 10:16:48.554964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.541 [2024-07-25 10:16:48.554983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:03.541 [2024-07-25 10:16:48.567017] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd25ec0) 00:28:03.541 [2024-07-25 10:16:48.567052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.541 [2024-07-25 10:16:48.567071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:03.541 [2024-07-25 10:16:48.578300] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd25ec0) 00:28:03.541 [2024-07-25 10:16:48.578337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.541 [2024-07-25 10:16:48.578358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.541 [2024-07-25 10:16:48.589519] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd25ec0) 00:28:03.541 [2024-07-25 10:16:48.589555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.542 [2024-07-25 10:16:48.589574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:03.542 [2024-07-25 10:16:48.600955] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd25ec0) 00:28:03.542 [2024-07-25 10:16:48.600991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.542 [2024-07-25 10:16:48.601010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:03.542 [2024-07-25 10:16:48.612763] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd25ec0) 00:28:03.542 [2024-07-25 10:16:48.612798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:03.542 [2024-07-25 10:16:48.612817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:03.542 [2024-07-25 10:16:48.624329] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd25ec0) 00:28:03.542 [2024-07-25 10:16:48.624363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.542 [2024-07-25 10:16:48.624390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.542 [2024-07-25 10:16:48.634721] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd25ec0) 00:28:03.542 [2024-07-25 10:16:48.634755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.542 [2024-07-25 10:16:48.634775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:03.542 [2024-07-25 10:16:48.645747] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd25ec0) 00:28:03.542 [2024-07-25 10:16:48.645781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.542 [2024-07-25 10:16:48.645800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:03.542 [2024-07-25 10:16:48.656746] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd25ec0) 00:28:03.542 [2024-07-25 10:16:48.656779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.542 [2024-07-25 10:16:48.656797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:03.542 [2024-07-25 10:16:48.668023] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd25ec0) 00:28:03.542 [2024-07-25 10:16:48.668056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.542 [2024-07-25 10:16:48.668074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.542 [2024-07-25 10:16:48.678951] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd25ec0) 00:28:03.542 [2024-07-25 10:16:48.678985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.542 [2024-07-25 10:16:48.679005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:03.542 [2024-07-25 10:16:48.690145] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd25ec0) 00:28:03.542 [2024-07-25 10:16:48.690178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16320 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.542 [2024-07-25 10:16:48.690197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:03.542 [2024-07-25 10:16:48.701588] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd25ec0) 00:28:03.542 [2024-07-25 10:16:48.701621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.542 [2024-07-25 10:16:48.701641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:03.800 [2024-07-25 10:16:48.713031] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd25ec0) 00:28:03.800 [2024-07-25 10:16:48.713066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.800 [2024-07-25 10:16:48.713085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.800 [2024-07-25 10:16:48.723162] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd25ec0) 00:28:03.800 [2024-07-25 10:16:48.723204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.800 [2024-07-25 10:16:48.723223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:03.800 [2024-07-25 10:16:48.734862] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd25ec0) 00:28:03.800 [2024-07-25 10:16:48.734895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.800 [2024-07-25 10:16:48.734914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:03.800 [2024-07-25 10:16:48.746073] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd25ec0) 00:28:03.800 [2024-07-25 10:16:48.746106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.800 [2024-07-25 10:16:48.746124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:03.800 [2024-07-25 10:16:48.757445] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd25ec0) 00:28:03.800 [2024-07-25 10:16:48.757479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.800 [2024-07-25 10:16:48.757498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.800 [2024-07-25 10:16:48.769165] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd25ec0) 00:28:03.800 [2024-07-25 10:16:48.769200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:0 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.800 [2024-07-25 10:16:48.769219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:03.800 [2024-07-25 10:16:48.780756] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd25ec0) 00:28:03.800 [2024-07-25 10:16:48.780790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.800 [2024-07-25 10:16:48.780809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:03.800 [2024-07-25 10:16:48.791301] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd25ec0) 00:28:03.800 [2024-07-25 10:16:48.791335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.800 [2024-07-25 10:16:48.791354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:03.800 [2024-07-25 10:16:48.803049] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd25ec0) 00:28:03.800 [2024-07-25 10:16:48.803084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.800 [2024-07-25 10:16:48.803103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.801 [2024-07-25 10:16:48.814602] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd25ec0) 00:28:03.801 [2024-07-25 10:16:48.814637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.801 [2024-07-25 10:16:48.814656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:03.801 [2024-07-25 10:16:48.825649] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd25ec0) 00:28:03.801 [2024-07-25 10:16:48.825684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.801 [2024-07-25 10:16:48.825702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:03.801 [2024-07-25 10:16:48.837198] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd25ec0) 00:28:03.801 [2024-07-25 10:16:48.837233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.801 [2024-07-25 10:16:48.837252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:03.801 [2024-07-25 10:16:48.846876] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd25ec0) 00:28:03.801 [2024-07-25 10:16:48.846910] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.801 [2024-07-25 10:16:48.846928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.801 [2024-07-25 10:16:48.857887] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd25ec0) 00:28:03.801 [2024-07-25 10:16:48.857920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.801 [2024-07-25 10:16:48.857938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:03.801 [2024-07-25 10:16:48.868802] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd25ec0) 00:28:03.801 [2024-07-25 10:16:48.868835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.801 [2024-07-25 10:16:48.868854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:03.801 [2024-07-25 10:16:48.879500] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd25ec0) 00:28:03.801 [2024-07-25 10:16:48.879533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.801 [2024-07-25 10:16:48.879551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:03.801 [2024-07-25 10:16:48.890340] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd25ec0) 00:28:03.801 [2024-07-25 10:16:48.890372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.801 [2024-07-25 10:16:48.890390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:03.801 [2024-07-25 10:16:48.901655] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd25ec0) 00:28:03.801 [2024-07-25 10:16:48.901688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.801 [2024-07-25 10:16:48.901706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:03.801 [2024-07-25 10:16:48.912441] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd25ec0) 00:28:03.801 [2024-07-25 10:16:48.912480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.801 [2024-07-25 10:16:48.912507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:03.801 [2024-07-25 10:16:48.922755] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd25ec0) 00:28:03.801 
[2024-07-25 10:16:48.922789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:03.801 [2024-07-25 10:16:48.922808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:03.801 [2024-07-25 10:16:48.932760] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd25ec0)
00:28:03.801 [2024-07-25 10:16:48.932792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:03.801 [2024-07-25 10:16:48.932810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
[... the same data digest error / READ / TRANSIENT TRANSPORT ERROR triplet repeats about every 10 ms on tqpair=(0xd25ec0) for the rest of the 2-second randread run; only cid, lba and sqhd vary (188 transient transport errors in total, per the iostat check below) ...]
00:28:04.837 [2024-07-25 10:16:49.870701] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xd25ec0)
00:28:04.838 [2024-07-25 10:16:49.870733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:04.838 [2024-07-25 10:16:49.870751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:28:04.838 
00:28:04.838 Latency(us)
00:28:04.838 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:04.838 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:28:04.838 nvme0n1 : 2.00 2916.50 364.56 0.00 0.00 5479.85 1474.56 13495.56
00:28:04.838 ===================================================================================================================
00:28:04.838 Total : 2916.50 364.56 0.00 0.00 5479.85 1474.56 13495.56
00:28:04.838 0
00:28:04.838 10:16:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:28:04.838 10:16:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:28:04.838 10:16:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:28:04.838 | .driver_specific
00:28:04.838 | .nvme_error
00:28:04.838 | .status_code
00:28:04.838 | .command_transient_transport_error'
00:28:04.838 10:16:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:28:05.095 10:16:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 188 > 0 ))
00:28:05.095 10:16:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 545130
00:28:05.095 10:16:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 545130 ']'
00:28:05.095 10:16:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 545130
00:28:05.095 10:16:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname
00:28:05.095 10:16:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:28:05.095 10:16:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 545130
00:28:05.352 10:16:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:28:05.352 10:16:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:28:05.352 10:16:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 545130'
killing process with pid 545130
10:16:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 545130
Received shutdown signal, test time was about 2.000000 seconds
00:28:05.352 
00:28:05.352 Latency(us)
00:28:05.352 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:05.352 ===================================================================================================================
00:28:05.352 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:28:05.352 10:16:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 545130
00:28:05.610 10:16:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
00:28:05.610 10:16:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:28:05.610 10:16:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:28:05.610 10:16:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
00:28:05.610 10:16:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
00:28:05.610 10:16:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=545663
00:28:05.610 10:16:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
00:28:05.610 10:16:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 545663 /var/tmp/bperf.sock
00:28:05.610 10:16:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 545663 ']'
00:28:05.610 10:16:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock
00:28:05.610 10:16:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100
00:28:05.610 10:16:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
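The get_transient_errcount check traced above reduces to one RPC plus a jq filter: bdev_get_iostat reports per-bdev NVMe error counters (the suite enables them with bdev_nvme_set_options --nvme-error-stat), and the filter pulls out the transient-transport-error count that the injected digest errors feed. A minimal standalone sketch, assuming the same /var/tmp/bperf.sock socket and nvme0n1 bdev as this run:

  # Sketch of host/digest.sh's transient-error check (socket, bdev and jq path taken from the trace above)
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  count=$("$rpc" -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 |
          jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
  (( count > 0 ))   # pass only if digest errors surfaced as transient transport errors (188 here)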
00:28:05.610 10:16:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable
00:28:05.610 10:16:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:05.610 [2024-07-25 10:16:50.613055] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization...
00:28:05.610 [2024-07-25 10:16:50.613158] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid545663 ]
00:28:05.610 EAL: No free 2048 kB hugepages reported on node 1
00:28:05.610 [2024-07-25 10:16:50.686943] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:28:05.868 [2024-07-25 10:16:50.805465] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:28:05.868 10:16:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:28:05.868 10:16:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0
00:28:05.868 10:16:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:28:05.868 10:16:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:28:06.432 10:16:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:28:06.432 10:16:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:06.432 10:16:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:06.432 10:16:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:06.432 10:16:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:28:06.432 10:16:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:28:06.997 nvme0n1
00:28:06.997 10:16:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:28:06.997 10:16:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:06.997 10:16:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:06.997 10:16:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:06.997 10:16:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:28:06.997 10:16:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:28:06.997 Running I/O for 2 seconds...
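Those RPCs are the whole setup for this error pass: crc32c injection is first disabled, the controller is attached with --ddgst (which enables the NVMe/TCP data digest), and injection is switched to corrupt just before the workload starts, so digests computed during I/O mismatch on purpose. A condensed sketch of the same steps, reusing the exact commands from the trace above (rpc_cmd targets the suite's default RPC socket, bperf_rpc the bdevperf socket; the -i 256 knob is copied verbatim from the trace):

  rpc.py accel_error_inject_error -o crc32c -t disable          # no injection while connecting
  rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp \
      -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  rpc.py accel_error_inject_error -o crc32c -t corrupt -i 256   # corrupt crc32c results from here on
  bdevperf.py -s /var/tmp/bperf.sock perform_tests              # 2-second randwrite run below

Each corrupted digest then surfaces below as a Data digest error on the TCP qpair and completes as COMMAND TRANSIENT TRANSPORT ERROR (00/22), which is what the transient-error count check at the end of the run keys on.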
00:28:06.997 [2024-07-25 10:16:52.073181] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x143b7e0) with pdu=0x2000190fe2e8
00:28:06.998 [2024-07-25 10:16:52.073474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:16563 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:06.998 [2024-07-25 10:16:52.073516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0
[... the same Data digest error / WRITE / TRANSIENT TRANSPORT ERROR triplet repeats roughly every 14 ms on tqpair=(0x143b7e0), pdu=0x2000190fe2e8, through the 2-second randwrite run; only cid and lba vary ...]
00:28:07.515 [2024-07-25 10:16:52.454543] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x143b7e0) with pdu=0x2000190fe2e8
00:28:07.515 [2024-07-25 10:16:52.454846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19609 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:07.515 [2024-07-25 10:16:52.454876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:07.515 [2024-07-25 10:16:52.468741] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x143b7e0) with pdu=0x2000190fe2e8 00:28:07.515 [2024-07-25 10:16:52.469052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10531 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.515 [2024-07-25 10:16:52.469083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:07.515 [2024-07-25 10:16:52.482937] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x143b7e0) with pdu=0x2000190fe2e8 00:28:07.515 [2024-07-25 10:16:52.483257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9924 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.515 [2024-07-25 10:16:52.483287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:07.515 [2024-07-25 10:16:52.497049] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x143b7e0) with pdu=0x2000190fe2e8 00:28:07.515 [2024-07-25 10:16:52.497360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:7628 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.515 [2024-07-25 10:16:52.497389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:07.515 [2024-07-25 10:16:52.511145] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x143b7e0) with pdu=0x2000190fe2e8 00:28:07.515 [2024-07-25 10:16:52.511452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:18391 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.515 [2024-07-25 10:16:52.511493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:07.515 [2024-07-25 10:16:52.525276] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x143b7e0) with pdu=0x2000190fe2e8 00:28:07.515 [2024-07-25 10:16:52.525530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18435 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.515 [2024-07-25 10:16:52.525563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:07.515 [2024-07-25 10:16:52.539356] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x143b7e0) with pdu=0x2000190fe2e8 00:28:07.515 [2024-07-25 10:16:52.539595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11431 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.515 [2024-07-25 10:16:52.539628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:07.515 [2024-07-25 10:16:52.553462] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x143b7e0) with pdu=0x2000190fe2e8 00:28:07.515 [2024-07-25 10:16:52.553691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5323 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.515 [2024-07-25 10:16:52.553733] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:07.515 [2024-07-25 10:16:52.567500] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x143b7e0) with pdu=0x2000190fe2e8 00:28:07.515 [2024-07-25 10:16:52.567750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:1735 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.515 [2024-07-25 10:16:52.567787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:07.515 [2024-07-25 10:16:52.581496] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x143b7e0) with pdu=0x2000190fe2e8 00:28:07.515 [2024-07-25 10:16:52.581763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:17535 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.515 [2024-07-25 10:16:52.581795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:07.515 [2024-07-25 10:16:52.595540] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x143b7e0) with pdu=0x2000190fe2e8 00:28:07.515 [2024-07-25 10:16:52.595850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3188 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.515 [2024-07-25 10:16:52.595880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:07.515 [2024-07-25 10:16:52.609597] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x143b7e0) with pdu=0x2000190fe2e8 00:28:07.515 [2024-07-25 10:16:52.609835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20026 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.515 [2024-07-25 10:16:52.609865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:07.515 [2024-07-25 10:16:52.623738] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x143b7e0) with pdu=0x2000190fe2e8 00:28:07.515 [2024-07-25 10:16:52.624042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24452 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.515 [2024-07-25 10:16:52.624073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:07.515 [2024-07-25 10:16:52.637819] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x143b7e0) with pdu=0x2000190fe2e8 00:28:07.515 [2024-07-25 10:16:52.638100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:16971 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.515 [2024-07-25 10:16:52.638130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:07.515 [2024-07-25 10:16:52.651899] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x143b7e0) with pdu=0x2000190fe2e8 00:28:07.515 [2024-07-25 10:16:52.652185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:12123 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.515 [2024-07-25 10:16:52.652214] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:07.515 [2024-07-25 10:16:52.666007] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x143b7e0) with pdu=0x2000190fe2e8 00:28:07.515 [2024-07-25 10:16:52.666257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4006 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.515 [2024-07-25 10:16:52.666287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:07.515 [2024-07-25 10:16:52.680102] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x143b7e0) with pdu=0x2000190fe2e8 00:28:07.515 [2024-07-25 10:16:52.680412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1492 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.515 [2024-07-25 10:16:52.680450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:07.773 [2024-07-25 10:16:52.694037] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x143b7e0) with pdu=0x2000190fe2e8 00:28:07.773 [2024-07-25 10:16:52.694276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22605 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.773 [2024-07-25 10:16:52.694306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:07.773 [2024-07-25 10:16:52.707945] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x143b7e0) with pdu=0x2000190fe2e8 00:28:07.773 [2024-07-25 10:16:52.708179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:10041 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.773 [2024-07-25 10:16:52.708209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:07.773 [2024-07-25 10:16:52.721860] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x143b7e0) with pdu=0x2000190fe2e8 00:28:07.773 [2024-07-25 10:16:52.722093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:7370 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.773 [2024-07-25 10:16:52.722123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:07.773 [2024-07-25 10:16:52.735760] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x143b7e0) with pdu=0x2000190fe2e8 00:28:07.773 [2024-07-25 10:16:52.735994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8494 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.773 [2024-07-25 10:16:52.736023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:07.773 [2024-07-25 10:16:52.749661] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x143b7e0) with pdu=0x2000190fe2e8 00:28:07.773 [2024-07-25 10:16:52.749896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24214 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.773 [2024-07-25 10:16:52.749926] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:07.773 [2024-07-25 10:16:52.763571] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x143b7e0) with pdu=0x2000190fe2e8 00:28:07.773 [2024-07-25 10:16:52.763806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4256 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.773 [2024-07-25 10:16:52.763836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:07.773 [2024-07-25 10:16:52.777479] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x143b7e0) with pdu=0x2000190fe2e8 00:28:07.773 [2024-07-25 10:16:52.777715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:4497 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.773 [2024-07-25 10:16:52.777744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:07.773 [2024-07-25 10:16:52.791596] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x143b7e0) with pdu=0x2000190fe2e8 00:28:07.773 [2024-07-25 10:16:52.791831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:23225 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.773 [2024-07-25 10:16:52.791864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:07.773 [2024-07-25 10:16:52.805486] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x143b7e0) with pdu=0x2000190fe2e8 00:28:07.773 [2024-07-25 10:16:52.805723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1130 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.773 [2024-07-25 10:16:52.805755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:07.773 [2024-07-25 10:16:52.819376] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x143b7e0) with pdu=0x2000190fe2e8 00:28:07.773 [2024-07-25 10:16:52.819621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7374 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.773 [2024-07-25 10:16:52.819651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:07.773 [2024-07-25 10:16:52.833291] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x143b7e0) with pdu=0x2000190fe2e8 00:28:07.773 [2024-07-25 10:16:52.833536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22551 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.773 [2024-07-25 10:16:52.833566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:07.773 [2024-07-25 10:16:52.847191] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x143b7e0) with pdu=0x2000190fe2e8 00:28:07.773 [2024-07-25 10:16:52.847426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:21326 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.774 [2024-07-25 
10:16:52.847463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:07.774 [2024-07-25 10:16:52.861129] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x143b7e0) with pdu=0x2000190fe2e8 00:28:07.774 [2024-07-25 10:16:52.861363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:20029 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.774 [2024-07-25 10:16:52.861393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:07.774 [2024-07-25 10:16:52.875050] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x143b7e0) with pdu=0x2000190fe2e8 00:28:07.774 [2024-07-25 10:16:52.875284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:12211 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.774 [2024-07-25 10:16:52.875314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:07.774 [2024-07-25 10:16:52.888945] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x143b7e0) with pdu=0x2000190fe2e8 00:28:07.774 [2024-07-25 10:16:52.889182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2727 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.774 [2024-07-25 10:16:52.889211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:07.774 [2024-07-25 10:16:52.902827] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x143b7e0) with pdu=0x2000190fe2e8 00:28:07.774 [2024-07-25 10:16:52.903065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18804 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.774 [2024-07-25 10:16:52.903094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:07.774 [2024-07-25 10:16:52.916750] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x143b7e0) with pdu=0x2000190fe2e8 00:28:07.774 [2024-07-25 10:16:52.916981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:13234 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.774 [2024-07-25 10:16:52.917011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:07.774 [2024-07-25 10:16:52.930655] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x143b7e0) with pdu=0x2000190fe2e8 00:28:07.774 [2024-07-25 10:16:52.930890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:12783 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:07.774 [2024-07-25 10:16:52.930925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:08.031 [2024-07-25 10:16:52.944538] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x143b7e0) with pdu=0x2000190fe2e8 00:28:08.031 [2024-07-25 10:16:52.944770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:3518 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:28:08.031 [2024-07-25 10:16:52.944800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:08.031 [2024-07-25 10:16:52.958400] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x143b7e0) with pdu=0x2000190fe2e8 00:28:08.031 [2024-07-25 10:16:52.958645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21346 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.031 [2024-07-25 10:16:52.958676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:08.031 [2024-07-25 10:16:52.972350] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x143b7e0) with pdu=0x2000190fe2e8 00:28:08.031 [2024-07-25 10:16:52.972592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7580 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.031 [2024-07-25 10:16:52.972623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:08.031 [2024-07-25 10:16:52.986223] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x143b7e0) with pdu=0x2000190fe2e8 00:28:08.031 [2024-07-25 10:16:52.986459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:14034 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.031 [2024-07-25 10:16:52.986489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:08.031 [2024-07-25 10:16:53.000163] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x143b7e0) with pdu=0x2000190fe2e8 00:28:08.031 [2024-07-25 10:16:53.000399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:14853 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.031 [2024-07-25 10:16:53.000435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:08.031 [2024-07-25 10:16:53.014025] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x143b7e0) with pdu=0x2000190fe2e8 00:28:08.031 [2024-07-25 10:16:53.014263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:255 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.032 [2024-07-25 10:16:53.014292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:08.032 [2024-07-25 10:16:53.028098] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x143b7e0) with pdu=0x2000190fe2e8 00:28:08.032 [2024-07-25 10:16:53.028335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5120 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.032 [2024-07-25 10:16:53.028367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:08.032 [2024-07-25 10:16:53.042024] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x143b7e0) with pdu=0x2000190fe2e8 00:28:08.032 [2024-07-25 10:16:53.042259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9817 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:28:08.032 [2024-07-25 10:16:53.042289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:08.032 [2024-07-25 10:16:53.055950] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x143b7e0) with pdu=0x2000190fe2e8 00:28:08.032 [2024-07-25 10:16:53.056190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:16986 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.032 [2024-07-25 10:16:53.056221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:08.032 [2024-07-25 10:16:53.069906] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x143b7e0) with pdu=0x2000190fe2e8 00:28:08.032 [2024-07-25 10:16:53.070142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:14821 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.032 [2024-07-25 10:16:53.070173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:08.032 [2024-07-25 10:16:53.083798] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x143b7e0) with pdu=0x2000190fe2e8 00:28:08.032 [2024-07-25 10:16:53.084059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:21520 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.032 [2024-07-25 10:16:53.084088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:08.032 [2024-07-25 10:16:53.097722] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x143b7e0) with pdu=0x2000190fe2e8 00:28:08.032 [2024-07-25 10:16:53.097955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3893 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.032 [2024-07-25 10:16:53.097985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:08.032 [2024-07-25 10:16:53.111618] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x143b7e0) with pdu=0x2000190fe2e8 00:28:08.032 [2024-07-25 10:16:53.111944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12742 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.032 [2024-07-25 10:16:53.111974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:08.032 [2024-07-25 10:16:53.125554] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x143b7e0) with pdu=0x2000190fe2e8 00:28:08.032 [2024-07-25 10:16:53.125787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:15193 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.032 [2024-07-25 10:16:53.125817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:08.032 [2024-07-25 10:16:53.139449] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x143b7e0) with pdu=0x2000190fe2e8 00:28:08.032 [2024-07-25 10:16:53.139719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:9777 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.032 [2024-07-25 10:16:53.139749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:08.032 [2024-07-25 10:16:53.153331] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x143b7e0) with pdu=0x2000190fe2e8 00:28:08.032 [2024-07-25 10:16:53.153576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:4294 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.032 [2024-07-25 10:16:53.153606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:08.032 [2024-07-25 10:16:53.167266] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x143b7e0) with pdu=0x2000190fe2e8 00:28:08.032 [2024-07-25 10:16:53.167507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1416 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.032 [2024-07-25 10:16:53.167538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:08.032 [2024-07-25 10:16:53.181197] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x143b7e0) with pdu=0x2000190fe2e8 00:28:08.032 [2024-07-25 10:16:53.181440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21075 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.032 [2024-07-25 10:16:53.181471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:08.032 [2024-07-25 10:16:53.195117] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x143b7e0) with pdu=0x2000190fe2e8 00:28:08.032 [2024-07-25 10:16:53.195348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:8181 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.032 [2024-07-25 10:16:53.195377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:08.290 [2024-07-25 10:16:53.209029] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x143b7e0) with pdu=0x2000190fe2e8 00:28:08.290 [2024-07-25 10:16:53.209264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:13596 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.290 [2024-07-25 10:16:53.209294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:08.290 [2024-07-25 10:16:53.222944] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x143b7e0) with pdu=0x2000190fe2e8 00:28:08.290 [2024-07-25 10:16:53.223177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24328 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.290 [2024-07-25 10:16:53.223208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:08.290 [2024-07-25 10:16:53.236924] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x143b7e0) with pdu=0x2000190fe2e8 00:28:08.290 [2024-07-25 10:16:53.237161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 
lba:3450 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.290 [2024-07-25 10:16:53.237191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:08.290 [2024-07-25 10:16:53.250843] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x143b7e0) with pdu=0x2000190fe2e8 00:28:08.290 [2024-07-25 10:16:53.251078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9760 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.290 [2024-07-25 10:16:53.251107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:08.290 [2024-07-25 10:16:53.264807] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x143b7e0) with pdu=0x2000190fe2e8 00:28:08.290 [2024-07-25 10:16:53.265041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:13044 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.290 [2024-07-25 10:16:53.265071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:08.290 [2024-07-25 10:16:53.278787] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x143b7e0) with pdu=0x2000190fe2e8 00:28:08.290 [2024-07-25 10:16:53.279022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:483 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.290 [2024-07-25 10:16:53.279052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:08.290 [2024-07-25 10:16:53.292737] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x143b7e0) with pdu=0x2000190fe2e8 00:28:08.290 [2024-07-25 10:16:53.292973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2856 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.290 [2024-07-25 10:16:53.293003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:08.290 [2024-07-25 10:16:53.306677] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x143b7e0) with pdu=0x2000190fe2e8 00:28:08.290 [2024-07-25 10:16:53.306914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20135 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.290 [2024-07-25 10:16:53.306945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:08.290 [2024-07-25 10:16:53.320613] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x143b7e0) with pdu=0x2000190fe2e8 00:28:08.290 [2024-07-25 10:16:53.320846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5056 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.290 [2024-07-25 10:16:53.320876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:08.290 [2024-07-25 10:16:53.334570] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x143b7e0) with pdu=0x2000190fe2e8 00:28:08.290 [2024-07-25 10:16:53.334803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:125 nsid:1 lba:8677 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.290 [2024-07-25 10:16:53.334833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:08.290 [2024-07-25 10:16:53.348503] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x143b7e0) with pdu=0x2000190fe2e8 00:28:08.290 [2024-07-25 10:16:53.348735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:19298 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.290 [2024-07-25 10:16:53.348767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:08.290 [2024-07-25 10:16:53.362416] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x143b7e0) with pdu=0x2000190fe2e8 00:28:08.290 [2024-07-25 10:16:53.362660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:19115 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.290 [2024-07-25 10:16:53.362691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:08.290 [2024-07-25 10:16:53.376364] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x143b7e0) with pdu=0x2000190fe2e8 00:28:08.290 [2024-07-25 10:16:53.376609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19870 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.290 [2024-07-25 10:16:53.376639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:08.290 [2024-07-25 10:16:53.390311] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x143b7e0) with pdu=0x2000190fe2e8 00:28:08.290 [2024-07-25 10:16:53.390555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4361 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.290 [2024-07-25 10:16:53.390585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:08.290 [2024-07-25 10:16:53.404259] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x143b7e0) with pdu=0x2000190fe2e8 00:28:08.290 [2024-07-25 10:16:53.404495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:13792 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.290 [2024-07-25 10:16:53.404524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:08.290 [2024-07-25 10:16:53.418166] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x143b7e0) with pdu=0x2000190fe2e8 00:28:08.290 [2024-07-25 10:16:53.418399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:1154 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.290 [2024-07-25 10:16:53.418440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:08.290 [2024-07-25 10:16:53.432079] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x143b7e0) with pdu=0x2000190fe2e8 00:28:08.290 [2024-07-25 10:16:53.432312] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24926 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.290 [2024-07-25 10:16:53.432342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:08.290 [2024-07-25 10:16:53.446000] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x143b7e0) with pdu=0x2000190fe2e8 00:28:08.290 [2024-07-25 10:16:53.446330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8570 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.290 [2024-07-25 10:16:53.446360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:08.547 [2024-07-25 10:16:53.459917] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x143b7e0) with pdu=0x2000190fe2e8 00:28:08.547 [2024-07-25 10:16:53.460147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18442 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.547 [2024-07-25 10:16:53.460176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:08.547 [2024-07-25 10:16:53.473917] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x143b7e0) with pdu=0x2000190fe2e8 00:28:08.547 [2024-07-25 10:16:53.474174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:5548 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.547 [2024-07-25 10:16:53.474204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:08.547 [2024-07-25 10:16:53.487937] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x143b7e0) with pdu=0x2000190fe2e8 00:28:08.547 [2024-07-25 10:16:53.488247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:16512 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.547 [2024-07-25 10:16:53.488278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:08.547 [2024-07-25 10:16:53.502000] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x143b7e0) with pdu=0x2000190fe2e8 00:28:08.547 [2024-07-25 10:16:53.502235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1940 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.547 [2024-07-25 10:16:53.502265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:08.547 [2024-07-25 10:16:53.516127] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x143b7e0) with pdu=0x2000190fe2e8 00:28:08.547 [2024-07-25 10:16:53.516359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1784 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.547 [2024-07-25 10:16:53.516389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:08.547 [2024-07-25 10:16:53.530044] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x143b7e0) with pdu=0x2000190fe2e8 00:28:08.547 [2024-07-25 10:16:53.530278] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2272 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.547 [2024-07-25 10:16:53.530308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:08.547 [2024-07-25 10:16:53.544010] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x143b7e0) with pdu=0x2000190fe2e8 00:28:08.547 [2024-07-25 10:16:53.544251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:7987 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.547 [2024-07-25 10:16:53.544281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:08.547 [2024-07-25 10:16:53.557908] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x143b7e0) with pdu=0x2000190fe2e8 00:28:08.547 [2024-07-25 10:16:53.558143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:8163 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.547 [2024-07-25 10:16:53.558173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:08.547 [2024-07-25 10:16:53.571853] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x143b7e0) with pdu=0x2000190fe2e8 00:28:08.547 [2024-07-25 10:16:53.572086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:22896 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.547 [2024-07-25 10:16:53.572116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:08.547 [2024-07-25 10:16:53.585747] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x143b7e0) with pdu=0x2000190fe2e8 00:28:08.547 [2024-07-25 10:16:53.585988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17408 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.547 [2024-07-25 10:16:53.586018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:08.547 [2024-07-25 10:16:53.599716] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x143b7e0) with pdu=0x2000190fe2e8 00:28:08.548 [2024-07-25 10:16:53.599942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12886 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.548 [2024-07-25 10:16:53.599972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:08.548 [2024-07-25 10:16:53.613649] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x143b7e0) with pdu=0x2000190fe2e8 00:28:08.548 [2024-07-25 10:16:53.613885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:11553 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.548 [2024-07-25 10:16:53.613914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:08.548 [2024-07-25 10:16:53.627587] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x143b7e0) with pdu=0x2000190fe2e8 00:28:08.548 [2024-07-25 
10:16:53.627821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:15851 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.548 [2024-07-25 10:16:53.627851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:08.548 [2024-07-25 10:16:53.641476] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x143b7e0) with pdu=0x2000190fe2e8 00:28:08.548 [2024-07-25 10:16:53.641715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18975 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.548 [2024-07-25 10:16:53.641744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:08.548 [2024-07-25 10:16:53.655496] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x143b7e0) with pdu=0x2000190fe2e8 00:28:08.548 [2024-07-25 10:16:53.655763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12666 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.548 [2024-07-25 10:16:53.655792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:08.548 [2024-07-25 10:16:53.669540] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x143b7e0) with pdu=0x2000190fe2e8 00:28:08.548 [2024-07-25 10:16:53.669775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2619 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.548 [2024-07-25 10:16:53.669806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:08.548 [2024-07-25 10:16:53.683562] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x143b7e0) with pdu=0x2000190fe2e8 00:28:08.548 [2024-07-25 10:16:53.683793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:3214 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.548 [2024-07-25 10:16:53.683823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:08.548 [2024-07-25 10:16:53.697582] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x143b7e0) with pdu=0x2000190fe2e8 00:28:08.548 [2024-07-25 10:16:53.697815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:6530 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.548 [2024-07-25 10:16:53.697846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:08.548 [2024-07-25 10:16:53.711570] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x143b7e0) with pdu=0x2000190fe2e8 00:28:08.548 [2024-07-25 10:16:53.711848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16605 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.548 [2024-07-25 10:16:53.711879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:08.805 [2024-07-25 10:16:53.725613] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x143b7e0) with pdu=0x2000190fe2e8 
00:28:08.805 [2024-07-25 10:16:53.725846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11339 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.805 [2024-07-25 10:16:53.725877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:08.805 [2024-07-25 10:16:53.739675] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x143b7e0) with pdu=0x2000190fe2e8 00:28:08.805 [2024-07-25 10:16:53.739956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2356 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.805 [2024-07-25 10:16:53.739997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:08.805 [2024-07-25 10:16:53.753730] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x143b7e0) with pdu=0x2000190fe2e8 00:28:08.805 [2024-07-25 10:16:53.753965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:13724 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.805 [2024-07-25 10:16:53.753995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:08.805 [2024-07-25 10:16:53.767792] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x143b7e0) with pdu=0x2000190fe2e8 00:28:08.805 [2024-07-25 10:16:53.768080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:11119 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.805 [2024-07-25 10:16:53.768110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:08.805 [2024-07-25 10:16:53.781873] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x143b7e0) with pdu=0x2000190fe2e8 00:28:08.805 [2024-07-25 10:16:53.782105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:18591 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.805 [2024-07-25 10:16:53.782142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:08.805 [2024-07-25 10:16:53.795888] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x143b7e0) with pdu=0x2000190fe2e8 00:28:08.805 [2024-07-25 10:16:53.796120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17763 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.806 [2024-07-25 10:16:53.796150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:08.806 [2024-07-25 10:16:53.810129] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x143b7e0) with pdu=0x2000190fe2e8 00:28:08.806 [2024-07-25 10:16:53.810364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12559 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.806 [2024-07-25 10:16:53.810394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:08.806 [2024-07-25 10:16:53.824073] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x143b7e0) with 
pdu=0x2000190fe2e8 00:28:08.806 [2024-07-25 10:16:53.824386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:8400 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.806 [2024-07-25 10:16:53.824416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:08.806 [2024-07-25 10:16:53.838023] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x143b7e0) with pdu=0x2000190fe2e8 00:28:08.806 [2024-07-25 10:16:53.838258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:10927 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.806 [2024-07-25 10:16:53.838288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:08.806 [2024-07-25 10:16:53.851959] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x143b7e0) with pdu=0x2000190fe2e8 00:28:08.806 [2024-07-25 10:16:53.852193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:2642 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.806 [2024-07-25 10:16:53.852223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:08.806 [2024-07-25 10:16:53.865865] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x143b7e0) with pdu=0x2000190fe2e8 00:28:08.806 [2024-07-25 10:16:53.866096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25520 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.806 [2024-07-25 10:16:53.866125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:08.806 [2024-07-25 10:16:53.879781] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x143b7e0) with pdu=0x2000190fe2e8 00:28:08.806 [2024-07-25 10:16:53.880013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25422 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.806 [2024-07-25 10:16:53.880043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:08.806 [2024-07-25 10:16:53.893726] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x143b7e0) with pdu=0x2000190fe2e8 00:28:08.806 [2024-07-25 10:16:53.893960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:22058 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.806 [2024-07-25 10:16:53.893990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:08.806 [2024-07-25 10:16:53.907652] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x143b7e0) with pdu=0x2000190fe2e8 00:28:08.806 [2024-07-25 10:16:53.907900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:7912 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.806 [2024-07-25 10:16:53.907929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:08.806 [2024-07-25 10:16:53.921594] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x143b7e0) with pdu=0x2000190fe2e8 00:28:08.806 [2024-07-25 10:16:53.921830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8323 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.806 [2024-07-25 10:16:53.921860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:08.806 [2024-07-25 10:16:53.935499] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x143b7e0) with pdu=0x2000190fe2e8 00:28:08.806 [2024-07-25 10:16:53.935731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6634 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.806 [2024-07-25 10:16:53.935761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:08.806 [2024-07-25 10:16:53.949398] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x143b7e0) with pdu=0x2000190fe2e8 00:28:08.806 [2024-07-25 10:16:53.949644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17641 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.806 [2024-07-25 10:16:53.949674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:08.806 [2024-07-25 10:16:53.963333] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x143b7e0) with pdu=0x2000190fe2e8 00:28:08.806 [2024-07-25 10:16:53.963580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:15806 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:08.806 [2024-07-25 10:16:53.963610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:09.064 [2024-07-25 10:16:53.977270] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x143b7e0) with pdu=0x2000190fe2e8 00:28:09.064 [2024-07-25 10:16:53.977514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:20411 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:09.064 [2024-07-25 10:16:53.977544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:09.064 [2024-07-25 10:16:53.991239] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x143b7e0) with pdu=0x2000190fe2e8 00:28:09.064 [2024-07-25 10:16:53.991474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6740 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:09.064 [2024-07-25 10:16:53.991505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:09.064 [2024-07-25 10:16:54.005205] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x143b7e0) with pdu=0x2000190fe2e8 00:28:09.064 [2024-07-25 10:16:54.005447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16633 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:28:09.064 [2024-07-25 10:16:54.005477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:28:09.064 [2024-07-25 10:16:54.019126] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x143b7e0) with pdu=0x2000190fe2e8
00:28:09.064 [2024-07-25 10:16:54.019360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15130 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:09.064 [2024-07-25 10:16:54.019390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:28:09.064 [2024-07-25 10:16:54.033074] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x143b7e0) with pdu=0x2000190fe2e8
00:28:09.064 [2024-07-25 10:16:54.033315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:13498 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:09.064 [2024-07-25 10:16:54.033356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:28:09.064 [2024-07-25 10:16:54.046998] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x143b7e0) with pdu=0x2000190fe2e8
00:28:09.064 [2024-07-25 10:16:54.047230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:17130 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:09.064 [2024-07-25 10:16:54.047263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:28:09.064 [2024-07-25 10:16:54.060924] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x143b7e0) with pdu=0x2000190fe2e8
00:28:09.064 [2024-07-25 10:16:54.061163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:25237 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:09.064 [2024-07-25 10:16:54.061193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:28:09.064
00:28:09.064                                        Latency(us)
00:28:09.064 Device Information : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:28:09.064 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:28:09.064 nvme0n1            :       2.01   18238.55      71.24       0.00       0.00    7001.33    2961.26   14466.47
00:28:09.064 ===================================================================================================================
00:28:09.064 Total              :               18238.55      71.24       0.00       0.00    7001.33    2961.26   14466.47
00:28:09.064 0
00:28:09.064 10:16:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:28:09.064 10:16:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:28:09.064 10:16:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:28:09.064 | .driver_specific
00:28:09.064 | .nvme_error
00:28:09.064 | .status_code
00:28:09.064 | .command_transient_transport_error'
00:28:09.064 10:16:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:28:09.321 10:16:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 143 > 0 ))
00:28:09.321 10:16:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 545663
00:28:09.321 10:16:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 545663 ']'
00:28:09.321 10:16:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 545663
00:28:09.321 10:16:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname
00:28:09.321 10:16:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:28:09.321 10:16:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 545663
00:28:09.321 10:16:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:28:09.321 10:16:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:28:09.321 10:16:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 545663'
00:28:09.321 killing process with pid 545663
00:28:09.321 10:16:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 545663
00:28:09.321 Received shutdown signal, test time was about 2.000000 seconds
00:28:09.321
00:28:09.321                                        Latency(us)
00:28:09.321 Device Information : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:28:09.321 ===================================================================================================================
00:28:09.321 Total              :                   0.00       0.00       0.00       0.00       0.00       0.00       0.00
00:28:09.321 10:16:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 545663
00:28:09.886 10:16:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16
10:16:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
10:16:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
10:16:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
10:16:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
10:16:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=546080
10:16:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
10:16:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 546080 /var/tmp/bperf.sock
10:16:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 546080 ']'
10:16:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock
10:16:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100
10:16:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
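The xtrace above is the script relaunching its I/O generator for the next error case: bdevperf is started with -z so that it comes up idle and only serves RPC on /var/tmp/bperf.sock, and waitforlisten then polls that socket before any further RPCs are issued. A minimal sketch of that launch-and-wait step, assuming the same binary and socket paths as in the trace (the polling loop here is an illustration, not the verbatim autotest_common.sh implementation):

    #!/usr/bin/env bash
    # Start bdevperf idle (-z): it parks on the RPC socket until perform_tests arrives.
    rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    sock=/var/tmp/bperf.sock
    "$rootdir/build/examples/bdevperf" -m 2 -r "$sock" -w randwrite -o 131072 -t 2 -q 16 -z &
    bperfpid=$!
    # Poll until the app answers a harmless RPC on the socket (sketch of waitforlisten).
    for ((i = 0; i < 100; i++)); do
        "$rootdir/scripts/rpc.py" -s "$sock" rpc_get_methods &> /dev/null && break
        sleep 0.1
    done

Everything that follows is then driven through rpc.py against that socket, which is why the workload setup below arrives as bdev_nvme_set_options and bdev_nvme_attach_controller RPCs rather than command-line flags.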
00:28:09.886 10:16:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable
00:28:09.886 10:16:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:09.886 [2024-07-25 10:16:54.847455] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization...
00:28:09.886 [2024-07-25 10:16:54.847634] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid546080 ]
00:28:09.886 I/O size of 131072 is greater than zero copy threshold (65536).
00:28:09.886 Zero copy mechanism will not be used.
00:28:09.886 EAL: No free 2048 kB hugepages reported on node 1
00:28:09.886 [2024-07-25 10:16:54.954971] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:28:10.144 [2024-07-25 10:16:55.079473] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:28:10.402 10:16:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:28:10.402 10:16:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0
00:28:10.402 10:16:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:28:10.402 10:16:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:28:10.659 10:16:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:28:10.659 10:16:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:10.659 10:16:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:10.659 10:16:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:10.659 10:16:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:28:10.660 10:16:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:28:11.225 nvme0n1
00:28:11.225 10:16:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
10:16:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
10:16:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
10:16:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
10:16:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
10:16:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:11.225 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:11.225 Zero copy mechanism will not be used. 00:28:11.225 Running I/O for 2 seconds... 00:28:11.225 [2024-07-25 10:16:56.345007] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x143bb20) with pdu=0x2000190fef90 00:28:11.225 [2024-07-25 10:16:56.345442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.225 [2024-07-25 10:16:56.345484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:11.225 [2024-07-25 10:16:56.354419] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x143bb20) with pdu=0x2000190fef90 00:28:11.225 [2024-07-25 10:16:56.354804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.225 [2024-07-25 10:16:56.354837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:11.225 [2024-07-25 10:16:56.364174] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x143bb20) with pdu=0x2000190fef90 00:28:11.225 [2024-07-25 10:16:56.364543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.225 [2024-07-25 10:16:56.364576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:11.225 [2024-07-25 10:16:56.373687] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x143bb20) with pdu=0x2000190fef90 00:28:11.225 [2024-07-25 10:16:56.374096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.225 [2024-07-25 10:16:56.374129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:11.225 [2024-07-25 10:16:56.383729] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x143bb20) with pdu=0x2000190fef90 00:28:11.225 [2024-07-25 10:16:56.384089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.225 [2024-07-25 10:16:56.384121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:11.483 [2024-07-25 10:16:56.394034] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x143bb20) with pdu=0x2000190fef90 00:28:11.483 [2024-07-25 10:16:56.394394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.483 [2024-07-25 10:16:56.394446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:11.483 [2024-07-25 10:16:56.404354] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x143bb20) with pdu=0x2000190fef90 00:28:11.484 [2024-07-25 
10:16:56.404743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.484 [2024-07-25 10:16:56.404775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:11.484 [2024-07-25 10:16:56.415209] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x143bb20) with pdu=0x2000190fef90 00:28:11.484 [2024-07-25 10:16:56.415629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.484 [2024-07-25 10:16:56.415662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:11.484 [2024-07-25 10:16:56.426790] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x143bb20) with pdu=0x2000190fef90 00:28:11.484 [2024-07-25 10:16:56.427207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.484 [2024-07-25 10:16:56.427240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:11.484 [2024-07-25 10:16:56.437711] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x143bb20) with pdu=0x2000190fef90 00:28:11.484 [2024-07-25 10:16:56.438181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.484 [2024-07-25 10:16:56.438213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:11.484 [2024-07-25 10:16:56.449193] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x143bb20) with pdu=0x2000190fef90 00:28:11.484 [2024-07-25 10:16:56.449621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.484 [2024-07-25 10:16:56.449653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:11.484 [2024-07-25 10:16:56.459474] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x143bb20) with pdu=0x2000190fef90 00:28:11.484 [2024-07-25 10:16:56.459892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.484 [2024-07-25 10:16:56.459925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:11.484 [2024-07-25 10:16:56.469847] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x143bb20) with pdu=0x2000190fef90 00:28:11.484 [2024-07-25 10:16:56.469967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.484 [2024-07-25 10:16:56.469997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:11.484 [2024-07-25 10:16:56.481216] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x143bb20) with 
pdu=0x2000190fef90 00:28:11.484 [2024-07-25 10:16:56.481694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.484 [2024-07-25 10:16:56.481727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:11.484 [2024-07-25 10:16:56.490830] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x143bb20) with pdu=0x2000190fef90 00:28:11.484 [2024-07-25 10:16:56.491246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.484 [2024-07-25 10:16:56.491278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:11.484 [2024-07-25 10:16:56.501082] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x143bb20) with pdu=0x2000190fef90 00:28:11.484 [2024-07-25 10:16:56.501320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.484 [2024-07-25 10:16:56.501352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:11.484 [2024-07-25 10:16:56.510977] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x143bb20) with pdu=0x2000190fef90 00:28:11.484 [2024-07-25 10:16:56.511339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.484 [2024-07-25 10:16:56.511371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:11.484 [2024-07-25 10:16:56.521129] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x143bb20) with pdu=0x2000190fef90 00:28:11.484 [2024-07-25 10:16:56.521514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.484 [2024-07-25 10:16:56.521547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:11.484 [2024-07-25 10:16:56.531081] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x143bb20) with pdu=0x2000190fef90 00:28:11.484 [2024-07-25 10:16:56.531205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.484 [2024-07-25 10:16:56.531236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:11.484 [2024-07-25 10:16:56.542507] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x143bb20) with pdu=0x2000190fef90 00:28:11.484 [2024-07-25 10:16:56.542873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.484 [2024-07-25 10:16:56.542907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:11.484 [2024-07-25 10:16:56.553784] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x143bb20) with pdu=0x2000190fef90 00:28:11.484 [2024-07-25 10:16:56.554142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.484 [2024-07-25 10:16:56.554174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:11.484 [2024-07-25 10:16:56.564691] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x143bb20) with pdu=0x2000190fef90 00:28:11.484 [2024-07-25 10:16:56.565070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.484 [2024-07-25 10:16:56.565102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:11.484 [2024-07-25 10:16:56.575562] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x143bb20) with pdu=0x2000190fef90 00:28:11.484 [2024-07-25 10:16:56.575940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.484 [2024-07-25 10:16:56.575984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:11.484 [2024-07-25 10:16:56.586288] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x143bb20) with pdu=0x2000190fef90 00:28:11.484 [2024-07-25 10:16:56.586673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.484 [2024-07-25 10:16:56.586706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:11.484 [2024-07-25 10:16:56.597063] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x143bb20) with pdu=0x2000190fef90 00:28:11.484 [2024-07-25 10:16:56.597479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.484 [2024-07-25 10:16:56.597511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:11.484 [2024-07-25 10:16:56.607814] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x143bb20) with pdu=0x2000190fef90 00:28:11.484 [2024-07-25 10:16:56.608234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.484 [2024-07-25 10:16:56.608266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:11.484 [2024-07-25 10:16:56.618420] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x143bb20) with pdu=0x2000190fef90 00:28:11.484 [2024-07-25 10:16:56.618913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.484 [2024-07-25 10:16:56.618946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:11.484 [2024-07-25 10:16:56.629160] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x143bb20) with pdu=0x2000190fef90 00:28:11.484 [2024-07-25 10:16:56.629584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.484 [2024-07-25 10:16:56.629617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:11.484 [2024-07-25 10:16:56.639154] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x143bb20) with pdu=0x2000190fef90 00:28:11.484 [2024-07-25 10:16:56.639278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.484 [2024-07-25 10:16:56.639310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:11.484 [2024-07-25 10:16:56.649512] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x143bb20) with pdu=0x2000190fef90 00:28:11.743 [2024-07-25 10:16:56.649929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.743 [2024-07-25 10:16:56.649961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:11.743 [2024-07-25 10:16:56.659991] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x143bb20) with pdu=0x2000190fef90 00:28:11.743 [2024-07-25 10:16:56.660452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.743 [2024-07-25 10:16:56.660488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:11.743 [2024-07-25 10:16:56.670752] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x143bb20) with pdu=0x2000190fef90 00:28:11.743 [2024-07-25 10:16:56.671395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.743 [2024-07-25 10:16:56.671434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:11.743 [2024-07-25 10:16:56.681266] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x143bb20) with pdu=0x2000190fef90 00:28:11.743 [2024-07-25 10:16:56.681633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.743 [2024-07-25 10:16:56.681665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:11.743 [2024-07-25 10:16:56.691621] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x143bb20) with pdu=0x2000190fef90 00:28:11.743 [2024-07-25 10:16:56.692038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.743 [2024-07-25 10:16:56.692070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
00:28:11.743 [2024-07-25 10:16:56.701603] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x143bb20) with pdu=0x2000190fef90 00:28:11.743 [2024-07-25 10:16:56.701965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.743 [2024-07-25 10:16:56.701996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:11.743 [2024-07-25 10:16:56.710642] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x143bb20) with pdu=0x2000190fef90 00:28:11.743 [2024-07-25 10:16:56.710979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.743 [2024-07-25 10:16:56.711011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:11.743 [2024-07-25 10:16:56.719599] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x143bb20) with pdu=0x2000190fef90 00:28:11.743 [2024-07-25 10:16:56.720045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.743 [2024-07-25 10:16:56.720076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:11.743 [2024-07-25 10:16:56.729907] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x143bb20) with pdu=0x2000190fef90 00:28:11.743 [2024-07-25 10:16:56.730287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.743 [2024-07-25 10:16:56.730318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:11.743 [2024-07-25 10:16:56.739892] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x143bb20) with pdu=0x2000190fef90 00:28:11.743 [2024-07-25 10:16:56.740224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.743 [2024-07-25 10:16:56.740255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:11.743 [2024-07-25 10:16:56.748241] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x143bb20) with pdu=0x2000190fef90 00:28:11.743 [2024-07-25 10:16:56.748718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.743 [2024-07-25 10:16:56.748750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:11.743 [2024-07-25 10:16:56.758272] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x143bb20) with pdu=0x2000190fef90 00:28:11.743 [2024-07-25 10:16:56.758649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.743 [2024-07-25 10:16:56.758680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:11.743 [2024-07-25 10:16:56.767190] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x143bb20) with pdu=0x2000190fef90 00:28:11.743 [2024-07-25 10:16:56.767776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.743 [2024-07-25 10:16:56.767809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:11.743 [2024-07-25 10:16:56.777054] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x143bb20) with pdu=0x2000190fef90 00:28:11.743 [2024-07-25 10:16:56.777398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.743 [2024-07-25 10:16:56.777437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:11.743 [2024-07-25 10:16:56.786644] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x143bb20) with pdu=0x2000190fef90 00:28:11.743 [2024-07-25 10:16:56.787021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.743 [2024-07-25 10:16:56.787052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:11.744 [2024-07-25 10:16:56.796866] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x143bb20) with pdu=0x2000190fef90 00:28:11.744 [2024-07-25 10:16:56.797211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.744 [2024-07-25 10:16:56.797242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:11.744 [2024-07-25 10:16:56.805981] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x143bb20) with pdu=0x2000190fef90 00:28:11.744 [2024-07-25 10:16:56.806321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.744 [2024-07-25 10:16:56.806352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:11.744 [2024-07-25 10:16:56.815135] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x143bb20) with pdu=0x2000190fef90 00:28:11.744 [2024-07-25 10:16:56.815528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.744 [2024-07-25 10:16:56.815561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:11.744 [2024-07-25 10:16:56.824846] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x143bb20) with pdu=0x2000190fef90 00:28:11.744 [2024-07-25 10:16:56.825213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.744 [2024-07-25 10:16:56.825245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:11.744 [2024-07-25 10:16:56.834337] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x143bb20) with pdu=0x2000190fef90 00:28:11.744 [2024-07-25 10:16:56.834852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.744 [2024-07-25 10:16:56.834893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:11.744 [2024-07-25 10:16:56.844298] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x143bb20) with pdu=0x2000190fef90 00:28:11.744 [2024-07-25 10:16:56.844646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.744 [2024-07-25 10:16:56.844676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:11.744 [2024-07-25 10:16:56.853610] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x143bb20) with pdu=0x2000190fef90 00:28:11.744 [2024-07-25 10:16:56.853950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.744 [2024-07-25 10:16:56.853981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:11.744 [2024-07-25 10:16:56.863284] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x143bb20) with pdu=0x2000190fef90 00:28:11.744 [2024-07-25 10:16:56.863630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.744 [2024-07-25 10:16:56.863662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:11.744 [2024-07-25 10:16:56.874009] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x143bb20) with pdu=0x2000190fef90 00:28:11.744 [2024-07-25 10:16:56.874419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.744 [2024-07-25 10:16:56.874459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:11.744 [2024-07-25 10:16:56.883992] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x143bb20) with pdu=0x2000190fef90 00:28:11.744 [2024-07-25 10:16:56.884336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.744 [2024-07-25 10:16:56.884367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:11.744 [2024-07-25 10:16:56.894032] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x143bb20) with pdu=0x2000190fef90 00:28:11.744 [2024-07-25 10:16:56.894460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.744 [2024-07-25 10:16:56.894491] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:11.744 [2024-07-25 10:16:56.903396] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x143bb20) with pdu=0x2000190fef90 00:28:11.744 [2024-07-25 10:16:56.903778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:11.744 [2024-07-25 10:16:56.903809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:12.003 [2024-07-25 10:16:56.912799] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x143bb20) with pdu=0x2000190fef90 00:28:12.003 [2024-07-25 10:16:56.913142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.003 [2024-07-25 10:16:56.913172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:12.003 [2024-07-25 10:16:56.922346] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x143bb20) with pdu=0x2000190fef90 00:28:12.003 [2024-07-25 10:16:56.922802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.003 [2024-07-25 10:16:56.922833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:12.003 [2024-07-25 10:16:56.931624] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x143bb20) with pdu=0x2000190fef90 00:28:12.003 [2024-07-25 10:16:56.932039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.003 [2024-07-25 10:16:56.932070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:12.003 [2024-07-25 10:16:56.941514] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x143bb20) with pdu=0x2000190fef90 00:28:12.003 [2024-07-25 10:16:56.941863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.003 [2024-07-25 10:16:56.941895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:12.003 [2024-07-25 10:16:56.950664] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x143bb20) with pdu=0x2000190fef90 00:28:12.003 [2024-07-25 10:16:56.951049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.003 [2024-07-25 10:16:56.951080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:12.003 [2024-07-25 10:16:56.960691] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x143bb20) with pdu=0x2000190fef90 00:28:12.003 [2024-07-25 10:16:56.961164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.003 
[2024-07-25 10:16:56.961195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:12.003 [2024-07-25 10:16:56.969938] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x143bb20) with pdu=0x2000190fef90 00:28:12.003 [2024-07-25 10:16:56.970313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.003 [2024-07-25 10:16:56.970344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:12.003 [2024-07-25 10:16:56.978947] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x143bb20) with pdu=0x2000190fef90 00:28:12.003 [2024-07-25 10:16:56.979289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.003 [2024-07-25 10:16:56.979321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:12.003 [2024-07-25 10:16:56.987729] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x143bb20) with pdu=0x2000190fef90 00:28:12.003 [2024-07-25 10:16:56.988106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.003 [2024-07-25 10:16:56.988137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:12.003 [2024-07-25 10:16:56.997645] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x143bb20) with pdu=0x2000190fef90 00:28:12.003 [2024-07-25 10:16:56.998003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.003 [2024-07-25 10:16:56.998036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:12.003 [2024-07-25 10:16:57.007380] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x143bb20) with pdu=0x2000190fef90 00:28:12.003 [2024-07-25 10:16:57.007770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.003 [2024-07-25 10:16:57.007802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:12.003 [2024-07-25 10:16:57.016811] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x143bb20) with pdu=0x2000190fef90 00:28:12.003 [2024-07-25 10:16:57.017244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.003 [2024-07-25 10:16:57.017276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:12.003 [2024-07-25 10:16:57.026736] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x143bb20) with pdu=0x2000190fef90 00:28:12.003 [2024-07-25 10:16:57.027133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2080 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.003 [2024-07-25 10:16:57.027171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:12.003 [2024-07-25 10:16:57.036823] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x143bb20) with pdu=0x2000190fef90 00:28:12.003 [2024-07-25 10:16:57.037220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.003 [2024-07-25 10:16:57.037251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:12.003 [2024-07-25 10:16:57.045956] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x143bb20) with pdu=0x2000190fef90 00:28:12.003 [2024-07-25 10:16:57.046370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.003 [2024-07-25 10:16:57.046400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:12.003 [2024-07-25 10:16:57.055522] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x143bb20) with pdu=0x2000190fef90 00:28:12.003 [2024-07-25 10:16:57.055912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.003 [2024-07-25 10:16:57.055943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:12.003 [2024-07-25 10:16:57.064536] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x143bb20) with pdu=0x2000190fef90 00:28:12.003 [2024-07-25 10:16:57.064959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.003 [2024-07-25 10:16:57.064990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:12.003 [2024-07-25 10:16:57.074475] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x143bb20) with pdu=0x2000190fef90 00:28:12.003 [2024-07-25 10:16:57.074973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.003 [2024-07-25 10:16:57.075004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:12.003 [2024-07-25 10:16:57.085241] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x143bb20) with pdu=0x2000190fef90 00:28:12.003 [2024-07-25 10:16:57.085719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.003 [2024-07-25 10:16:57.085762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:12.003 [2024-07-25 10:16:57.095338] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x143bb20) with pdu=0x2000190fef90 00:28:12.003 [2024-07-25 10:16:57.095861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:15 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.003 [2024-07-25 10:16:57.095893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:12.003 [2024-07-25 10:16:57.105664] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x143bb20) with pdu=0x2000190fef90 00:28:12.003 [2024-07-25 10:16:57.106143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.003 [2024-07-25 10:16:57.106174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:12.003 [2024-07-25 10:16:57.115211] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x143bb20) with pdu=0x2000190fef90 00:28:12.003 [2024-07-25 10:16:57.115650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.003 [2024-07-25 10:16:57.115681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:12.003 [2024-07-25 10:16:57.125589] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x143bb20) with pdu=0x2000190fef90 00:28:12.003 [2024-07-25 10:16:57.126043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.003 [2024-07-25 10:16:57.126075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:12.003 [2024-07-25 10:16:57.135144] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x143bb20) with pdu=0x2000190fef90 00:28:12.003 [2024-07-25 10:16:57.135599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.003 [2024-07-25 10:16:57.135631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:12.003 [2024-07-25 10:16:57.145232] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x143bb20) with pdu=0x2000190fef90 00:28:12.003 [2024-07-25 10:16:57.145717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.003 [2024-07-25 10:16:57.145749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:12.003 [2024-07-25 10:16:57.155178] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x143bb20) with pdu=0x2000190fef90 00:28:12.003 [2024-07-25 10:16:57.155600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:12.004 [2024-07-25 10:16:57.155631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:12.004 [2024-07-25 10:16:57.164772] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x143bb20) with pdu=0x2000190fef90 00:28:12.004 [2024-07-25 10:16:57.165185] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:12.004 [2024-07-25 10:16:57.165216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:28:12.262 [the same three-line pattern repeats for every queued WRITE from 10:16:57.174109 through 10:16:58.334830, with only the lba and sqhd fields changing from entry to entry:
00:28:12.262   tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x143bb20) with pdu=0x2000190fef90
00:28:12.262   nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:<varies> len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:12.262   nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:<varies> p:0 m:0 dnr:0
00:28:13.299 the iostat query below counts 200 such completions]
00:28:13.299
00:28:13.299 Latency(us)
00:28:13.299 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:13.299 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:28:13.299 nvme0n1 : 2.00 3096.02 387.00 0.00 0.00 5156.49 3616.62 13786.83
00:28:13.299 ===================================================================================================================
00:28:13.299 Total : 3096.02 387.00 0.00 0.00 5156.49 3616.62 13786.83
00:28:13.299 0
00:28:13.299 10:16:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:28:13.299 10:16:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:28:13.299 10:16:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:28:13.299 10:16:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:28:13.299 | .driver_specific
00:28:13.299 | .nvme_error
00:28:13.299 | .status_code
00:28:13.299 | .command_transient_transport_error'
00:28:13.556 10:16:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 200 > 0 ))
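
The get_transient_errcount step above is the test's pass criterion: bdev_get_iostat is queried over the bperf RPC socket and jq extracts the count of TRANSIENT TRANSPORT ERROR completions, one per injected digest failure. A minimal standalone sketch of that query, assuming an SPDK bperf/bdevperf instance is still listening on /var/tmp/bperf.sock and exposing bdev nvme0n1:

  #!/usr/bin/env bash
  # Read per-bdev I/O statistics from a running SPDK app and pull out the
  # NVMe status-code counter for TRANSIENT TRANSPORT ERROR completions.
  SPDK_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # path taken from this log

  errcount=$("$SPDK_ROOT/scripts/rpc.py" -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 |
    jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')

  # digest.sh@71 asserts the counter is positive; this run recorded 200 errors.
  if (( errcount > 0 )); then
    echo "observed $errcount transient transport errors on nvme0n1"
  fi
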
00:28:13.556 10:16:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 546080
00:28:13.556 10:16:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 546080 ']'
00:28:13.556 10:16:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 546080
00:28:13.556 10:16:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname
00:28:13.556 10:16:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:28:13.556 10:16:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 546080
00:28:13.556 10:16:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:28:13.556 10:16:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:28:13.556 10:16:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 546080'
00:28:13.556 killing process with pid 546080
00:28:13.556 10:16:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 546080
00:28:13.556 Received shutdown signal, test time was about 2.000000 seconds
00:28:13.556
00:28:13.556 Latency(us)
00:28:13.556 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:13.556 ===================================================================================================================
00:28:13.556 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:28:13.556 10:16:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 546080
00:28:13.814 10:16:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 544577
00:28:13.814 10:16:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 544577 ']'
00:28:13.814 10:16:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 544577
00:28:13.814 10:16:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname
00:28:13.814 10:16:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:28:13.814 10:16:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 544577
00:28:14.071 10:16:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:28:14.071 10:16:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:28:14.071 10:16:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 544577'
00:28:14.071 killing process with pid 544577
00:28:14.071 10:16:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 544577
00:28:14.071 10:16:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 544577
00:28:14.332
00:28:14.332 real 0m17.592s
00:28:14.332 user 0m35.953s
00:28:14.332 sys 0m4.977s
00:28:14.332 10:16:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1126 -- # xtrace_disable
00:28:14.332 10:16:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:28:14.332 ************************************
00:28:14.332 END TEST nvmf_digest_error
00:28:14.332 ************************************
00:28:14.332 10:16:59 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT
00:28:14.332 10:16:59 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini
00:28:14.332 10:16:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup
00:28:14.332 10:16:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@117 -- # sync
00:28:14.332 10:16:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:28:14.332 10:16:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@120 -- # set +e
00:28:14.332 10:16:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20}
00:28:14.332 10:16:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:28:14.332 rmmod nvme_tcp
00:28:14.332 rmmod nvme_fabrics
00:28:14.332 rmmod nvme_keyring
00:28:14.332 10:16:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:28:14.332 10:16:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set -e
00:28:14.332 10:16:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # return 0
00:28:14.332 10:16:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 544577 ']'
00:28:14.332 10:16:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 544577
00:28:14.332 10:16:59 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@950 -- # '[' -z 544577 ']'
00:28:14.332 10:16:59 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # kill -0 544577
00:28:14.332 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (544577) - No such process
00:28:14.332 10:16:59 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@977 -- # echo 'Process with pid 544577 is not found'
00:28:14.332 Process with pid 544577 is not found
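
killprocess runs three times in the trace above: pid 546080 (the bperf app, reactor_1) and pid 544577 (the nvmf target, reactor_0) are signalled and reaped, and the final call on 544577 takes the "not found" path. A simplified reconstruction of the helper's control flow, inferred from the traced autotest_common.sh line numbers rather than copied from the SPDK source:

  killprocess() {
    local pid=$1
    [ -z "$pid" ] && return 1                 # @950: require a pid argument
    if ! kill -0 "$pid" 2>/dev/null; then     # @954: probe whether the process exists
      echo "Process with pid $pid is not found"   # @977
      return 0
    fi
    local process_name=""
    if [ "$(uname)" = Linux ]; then           # @955
      process_name=$(ps --no-headers -o comm= "$pid")   # @956
    fi
    if [ "$process_name" = sudo ]; then       # @960: never signal a sudo wrapper directly
      return 1                                # (branch not exercised in this run)
    fi
    echo "killing process with pid $pid"      # @968
    kill "$pid"                               # @969: SIGTERM starts the app's shutdown path
    wait "$pid"                               # @974: reap the child, propagate exit status
  }
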
10:16:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:14.332 10:16:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:14.332 10:16:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:14.332 10:16:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:14.332 10:16:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:14.332 10:16:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:14.332 10:16:59 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:14.332 10:16:59 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:16.866 10:17:01 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:16.866 00:28:16.866 real 0m40.889s 00:28:16.866 user 1m14.620s 00:28:16.866 sys 0m12.101s 00:28:16.866 10:17:01 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:16.866 10:17:01 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:28:16.866 ************************************ 00:28:16.866 END TEST nvmf_digest 00:28:16.866 ************************************ 00:28:16.866 10:17:01 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:28:16.866 10:17:01 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:28:16.866 10:17:01 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:28:16.866 10:17:01 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:28:16.866 10:17:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:28:16.866 10:17:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:16.866 10:17:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:16.866 ************************************ 00:28:16.866 START TEST nvmf_bdevperf 00:28:16.866 ************************************ 00:28:16.866 10:17:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:28:16.866 * Looking for test storage... 
00:28:16.866 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:16.866 10:17:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:16.866 10:17:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:28:16.866 10:17:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:16.866 10:17:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:16.866 10:17:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:16.866 10:17:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:16.866 10:17:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:16.866 10:17:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:16.866 10:17:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:16.866 10:17:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:16.866 10:17:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:16.866 10:17:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:16.866 10:17:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:28:16.866 10:17:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:28:16.866 10:17:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:16.866 10:17:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:16.866 10:17:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:16.867 10:17:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:16.867 10:17:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:16.867 10:17:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:16.867 10:17:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:16.867 10:17:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:16.867 10:17:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:16.867 10:17:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:16.867 10:17:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:16.867 10:17:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:28:16.867 10:17:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:16.867 10:17:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@47 -- # : 0 00:28:16.867 10:17:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:16.867 10:17:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:16.867 10:17:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:16.867 10:17:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:16.867 10:17:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:16.867 10:17:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:16.867 10:17:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:16.867 10:17:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:16.867 10:17:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:16.867 10:17:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:16.867 10:17:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:28:16.867 10:17:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:16.867 10:17:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:16.867 10:17:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@448 -- # prepare_net_devs 00:28:16.867 10:17:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:16.867 10:17:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:16.867 10:17:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:16.867 10:17:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:16.867 10:17:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:16.867 10:17:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:16.867 10:17:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:16.867 10:17:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@285 -- # xtrace_disable 00:28:16.867 10:17:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:18.766 10:17:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:18.766 10:17:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # pci_devs=() 00:28:18.766 10:17:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:18.766 10:17:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:18.766 10:17:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:18.766 10:17:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:18.766 10:17:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:18.766 10:17:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@295 -- # net_devs=() 00:28:18.766 10:17:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:18.766 10:17:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@296 -- # e810=() 00:28:18.766 10:17:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@296 -- # local -ga e810 00:28:18.766 10:17:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # x722=() 00:28:18.766 10:17:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # local -ga x722 00:28:18.766 10:17:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # mlx=() 00:28:18.766 10:17:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # local -ga mlx 00:28:18.766 10:17:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:18.766 10:17:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:18.766 10:17:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:18.766 10:17:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:18.766 10:17:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:18.766 10:17:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:18.766 10:17:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:18.766 10:17:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:18.766 10:17:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:18.766 10:17:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:18.766 10:17:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:18.766 10:17:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:18.766 10:17:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:18.766 10:17:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:18.766 10:17:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:18.766 10:17:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:18.766 10:17:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:18.766 10:17:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:18.766 10:17:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:28:18.766 Found 0000:84:00.0 (0x8086 - 0x159b) 00:28:18.766 10:17:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:18.766 10:17:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:18.766 10:17:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:18.766 10:17:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:18.766 10:17:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:18.766 10:17:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:18.766 10:17:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:28:18.766 Found 0000:84:00.1 (0x8086 - 0x159b) 00:28:18.766 10:17:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:18.766 10:17:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:18.766 10:17:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:18.766 10:17:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:18.766 10:17:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:18.766 10:17:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:18.766 10:17:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:18.766 10:17:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:18.766 10:17:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:18.766 10:17:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:18.766 10:17:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:18.766 10:17:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:18.766 10:17:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:18.766 10:17:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:18.766 10:17:03 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:18.766 10:17:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:28:18.766 Found net devices under 0000:84:00.0: cvl_0_0 00:28:18.766 10:17:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:18.766 10:17:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:18.766 10:17:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:18.766 10:17:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:18.766 10:17:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:18.766 10:17:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:18.766 10:17:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:18.766 10:17:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:18.766 10:17:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:28:18.766 Found net devices under 0000:84:00.1: cvl_0_1 00:28:18.766 10:17:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:18.766 10:17:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:18.766 10:17:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # is_hw=yes 00:28:18.766 10:17:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:18.766 10:17:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:18.766 10:17:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:18.766 10:17:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:18.766 10:17:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:18.766 10:17:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:18.766 10:17:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:18.766 10:17:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:18.766 10:17:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:18.766 10:17:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:18.766 10:17:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:18.766 10:17:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:18.766 10:17:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:18.766 10:17:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:18.766 10:17:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:18.766 10:17:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:18.766 10:17:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@254 -- # ip addr add 
10.0.0.1/24 dev cvl_0_1 00:28:18.766 10:17:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:18.766 10:17:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:18.766 10:17:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:19.024 10:17:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:19.024 10:17:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:19.024 10:17:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:19.024 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:19.024 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.135 ms 00:28:19.024 00:28:19.024 --- 10.0.0.2 ping statistics --- 00:28:19.024 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:19.024 rtt min/avg/max/mdev = 0.135/0.135/0.135/0.000 ms 00:28:19.024 10:17:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:19.024 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:19.024 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.129 ms 00:28:19.024 00:28:19.024 --- 10.0.0.1 ping statistics --- 00:28:19.024 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:19.024 rtt min/avg/max/mdev = 0.129/0.129/0.129/0.000 ms 00:28:19.024 10:17:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:19.024 10:17:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # return 0 00:28:19.024 10:17:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:19.024 10:17:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:19.024 10:17:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:19.024 10:17:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:19.024 10:17:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:19.024 10:17:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:19.024 10:17:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:19.024 10:17:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:28:19.024 10:17:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:28:19.024 10:17:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:19.024 10:17:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:19.024 10:17:03 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:19.024 10:17:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=548573 00:28:19.024 10:17:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:28:19.024 10:17:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 548573 00:28:19.024 10:17:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@831 -- # '[' -z 548573 ']' 
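For reference, the nvmf_tcp_init sequence traced above boils down to a small two-port loopback topology. A minimal sketch of the same setup, assuming the two ice ports enumerated earlier keep the cvl_0_0/cvl_0_1 names they have in this run (cvl_0_0 becomes the target interface inside the namespace, cvl_0_1 stays in the root namespace as the initiator):

    # move the target port into its own network namespace so host and
    # target traverse a real TCP path instead of the loopback device
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    # address the pair: 10.0.0.1 = initiator, 10.0.0.2 = target
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # open the NVMe/TCP port on the initiator side and verify reachability
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1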
00:28:19.024 10:17:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:19.024 10:17:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:19.024 10:17:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:19.024 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:19.024 10:17:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:19.024 10:17:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:19.024 [2024-07-25 10:17:04.055910] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:28:19.024 [2024-07-25 10:17:04.056002] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:19.024 EAL: No free 2048 kB hugepages reported on node 1 00:28:19.024 [2024-07-25 10:17:04.130871] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:19.282 [2024-07-25 10:17:04.253675] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:19.282 [2024-07-25 10:17:04.253740] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:19.282 [2024-07-25 10:17:04.253756] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:19.282 [2024-07-25 10:17:04.253770] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:19.282 [2024-07-25 10:17:04.253781] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
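nvmfappstart, traced above, launches the target inside that namespace and then blocks in waitforlisten until the RPC server answers. A rough sketch of what the two helpers amount to; the polling loop approximates the autotest_common.sh helper rather than copying it, and rpc_get_methods is used here only because it is a core RPC that answers as soon as the app is listening:

    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
    nvmfpid=$!
    # poll the UNIX-domain RPC socket until the target is ready
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$nvmfpid" || { echo "nvmf_tgt died during startup" >&2; exit 1; }
        sleep 0.1
    done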
00:28:19.282 [2024-07-25 10:17:04.253866] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:28:19.282 [2024-07-25 10:17:04.253923] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:28:19.282 [2024-07-25 10:17:04.253927] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:19.282 10:17:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:19.282 10:17:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # return 0 00:28:19.282 10:17:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:19.282 10:17:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:19.282 10:17:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:19.282 10:17:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:19.282 10:17:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:19.282 10:17:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:19.282 10:17:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:19.282 [2024-07-25 10:17:04.392122] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:19.282 10:17:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:19.282 10:17:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:19.282 10:17:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:19.282 10:17:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:19.282 Malloc0 00:28:19.282 10:17:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:19.282 10:17:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:19.282 10:17:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:19.282 10:17:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:19.282 10:17:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:19.282 10:17:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:19.282 10:17:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:19.282 10:17:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:19.539 10:17:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:19.539 10:17:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:19.539 10:17:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:19.539 10:17:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:19.539 [2024-07-25 10:17:04.454592] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:19.539 10:17:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:19.539 10:17:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:28:19.539 10:17:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:28:19.539 10:17:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:28:19.539 10:17:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:28:19.539 10:17:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:19.539 10:17:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:19.539 { 00:28:19.539 "params": { 00:28:19.539 "name": "Nvme$subsystem", 00:28:19.539 "trtype": "$TEST_TRANSPORT", 00:28:19.539 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:19.539 "adrfam": "ipv4", 00:28:19.539 "trsvcid": "$NVMF_PORT", 00:28:19.539 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:19.539 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:19.539 "hdgst": ${hdgst:-false}, 00:28:19.539 "ddgst": ${ddgst:-false} 00:28:19.539 }, 00:28:19.539 "method": "bdev_nvme_attach_controller" 00:28:19.539 } 00:28:19.539 EOF 00:28:19.539 )") 00:28:19.539 10:17:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:28:19.539 10:17:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:28:19.539 10:17:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:28:19.539 10:17:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:28:19.539 "params": { 00:28:19.539 "name": "Nvme1", 00:28:19.539 "trtype": "tcp", 00:28:19.539 "traddr": "10.0.0.2", 00:28:19.539 "adrfam": "ipv4", 00:28:19.539 "trsvcid": "4420", 00:28:19.539 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:19.539 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:19.539 "hdgst": false, 00:28:19.539 "ddgst": false 00:28:19.539 }, 00:28:19.539 "method": "bdev_nvme_attach_controller" 00:28:19.539 }' 00:28:19.539 [2024-07-25 10:17:04.506628] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:28:19.539 [2024-07-25 10:17:04.506728] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid548710 ] 00:28:19.539 EAL: No free 2048 kB hugepages reported on node 1 00:28:19.539 [2024-07-25 10:17:04.578178] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:19.539 [2024-07-25 10:17:04.686824] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:19.796 Running I/O for 1 seconds... 
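The tgt_init/rpc_cmd sequence traced just before this run started (transport, Malloc0, subsystem, namespace, listener) can be replayed by hand. Roughly, as standalone rpc.py calls against the socket used in this run, with the flags copied from the trace:

    rpc="./scripts/rpc.py -s /var/tmp/spdk.sock"
    $rpc nvmf_create_transport -t tcp -o -u 8192          # TCP transport, flags as traced
    $rpc bdev_malloc_create 64 512 -b Malloc0             # 64 MiB ramdisk, 512-byte blocks
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420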
00:28:20.729 00:28:20.729 Latency(us) 00:28:20.729 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:20.729 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:28:20.729 Verification LBA range: start 0x0 length 0x4000 00:28:20.729 Nvme1n1 : 1.01 8822.33 34.46 0.00 0.00 14454.72 2318.03 15437.37 00:28:20.729 =================================================================================================================== 00:28:20.729 Total : 8822.33 34.46 0.00 0.00 14454.72 2318.03 15437.37 00:28:20.987 10:17:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=548857 00:28:20.987 10:17:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:28:20.987 10:17:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:28:20.987 10:17:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:28:20.987 10:17:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:28:20.987 10:17:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:28:20.987 10:17:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:20.987 10:17:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:20.987 { 00:28:20.987 "params": { 00:28:20.987 "name": "Nvme$subsystem", 00:28:20.987 "trtype": "$TEST_TRANSPORT", 00:28:20.987 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:20.987 "adrfam": "ipv4", 00:28:20.987 "trsvcid": "$NVMF_PORT", 00:28:20.987 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:20.987 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:20.987 "hdgst": ${hdgst:-false}, 00:28:20.987 "ddgst": ${ddgst:-false} 00:28:20.987 }, 00:28:20.987 "method": "bdev_nvme_attach_controller" 00:28:20.987 } 00:28:20.987 EOF 00:28:20.987 )") 00:28:20.987 10:17:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:28:20.987 10:17:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:28:20.987 10:17:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:28:20.987 10:17:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:28:20.987 "params": { 00:28:20.987 "name": "Nvme1", 00:28:20.987 "trtype": "tcp", 00:28:20.987 "traddr": "10.0.0.2", 00:28:20.987 "adrfam": "ipv4", 00:28:20.987 "trsvcid": "4420", 00:28:20.987 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:20.987 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:20.987 "hdgst": false, 00:28:20.987 "ddgst": false 00:28:20.987 }, 00:28:20.987 "method": "bdev_nvme_attach_controller" 00:28:20.987 }' 00:28:21.245 [2024-07-25 10:17:06.183187] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:28:21.245 [2024-07-25 10:17:06.183299] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid548857 ] 00:28:21.245 EAL: No free 2048 kB hugepages reported on node 1 00:28:21.245 [2024-07-25 10:17:06.255542] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:21.245 [2024-07-25 10:17:06.361198] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:21.811 Running I/O for 15 seconds... 
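What follows is the failure-injection half of the test: a second bdevperf run is started for 15 seconds and host/bdevperf.sh then hard-kills the target out from under it. A sketch reconstructed from the commands in this trace (pids are this run's; gen_nvmf_target_json is the nvmf/common.sh helper whose output was printed above, fed here via process substitution instead of the script's /dev/fd/63):

    ./build/examples/bdevperf --json <(gen_nvmf_target_json) -q 128 -o 4096 -w verify -t 15 -f &
    bdevperfpid=$!     # 548857 in this run
    sleep 3            # let the verify workload ramp up
    kill -9 548573     # hard-kill nvmf_tgt mid-run
    sleep 3            # in-flight I/O now completes as ABORTED - SQ DELETION (below)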
00:28:24.344 10:17:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 548573
00:28:24.344 10:17:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3
00:28:24.344 [2024-07-25 10:17:09.146380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:42728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:28:24.344 [2024-07-25 10:17:09.146440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same nvme_qpair.c print_command/print_completion *NOTICE* pair repeats for every command still queued on qid:1 when the target was killed -- WRITEs for lba 42736 through 43104 and READs for lba 42088 through 42424 -- each completed as ABORTED - SQ DELETION (00/08); several hundred repeated notices truncated here ...]
00:28:24.347 [2024-07-25 10:17:09.149549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:24.347 [2024-07-25 10:17:09.149567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:42432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.347 [2024-07-25 10:17:09.149581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:24.347 [2024-07-25 10:17:09.149597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:42440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.347 [2024-07-25 10:17:09.149611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:24.347 [2024-07-25 10:17:09.149626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:42448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.347 [2024-07-25 10:17:09.149640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:24.347 [2024-07-25 10:17:09.149656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:42456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.347 [2024-07-25 10:17:09.149669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:24.347 [2024-07-25 10:17:09.149685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:42464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.347 [2024-07-25 10:17:09.149699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:24.347 [2024-07-25 10:17:09.149730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:42472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.347 [2024-07-25 10:17:09.149744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:24.347 [2024-07-25 10:17:09.149759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:42480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.347 [2024-07-25 10:17:09.149786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:24.347 [2024-07-25 10:17:09.149805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:42488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.347 [2024-07-25 10:17:09.149821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:24.347 [2024-07-25 10:17:09.149838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:42496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.347 [2024-07-25 10:17:09.149853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:24.347 [2024-07-25 10:17:09.149871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:42504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.347 [2024-07-25 10:17:09.149886] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:24.347 [2024-07-25 10:17:09.149904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:42512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.347 [2024-07-25 10:17:09.149920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:24.347 [2024-07-25 10:17:09.149936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:42520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.347 [2024-07-25 10:17:09.149956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:24.347 [2024-07-25 10:17:09.149973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:42528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.347 [2024-07-25 10:17:09.149989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:24.347 [2024-07-25 10:17:09.150006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:42536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.347 [2024-07-25 10:17:09.150021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:24.347 [2024-07-25 10:17:09.150039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:42544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.347 [2024-07-25 10:17:09.150054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:24.347 [2024-07-25 10:17:09.150071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:42552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.347 [2024-07-25 10:17:09.150086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:24.347 [2024-07-25 10:17:09.150103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:42560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.347 [2024-07-25 10:17:09.150119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:24.347 [2024-07-25 10:17:09.150136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:42568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.347 [2024-07-25 10:17:09.150151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:24.347 [2024-07-25 10:17:09.150169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:42576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.347 [2024-07-25 10:17:09.150185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:24.347 [2024-07-25 10:17:09.150202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:42584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.347 [2024-07-25 10:17:09.150217] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:24.347 [2024-07-25 10:17:09.150234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:42592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.347 [2024-07-25 10:17:09.150249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:24.347 [2024-07-25 10:17:09.150266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:42600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.347 [2024-07-25 10:17:09.150282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:24.347 [2024-07-25 10:17:09.150299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:42608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.347 [2024-07-25 10:17:09.150314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:24.347 [2024-07-25 10:17:09.150333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:42616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.347 [2024-07-25 10:17:09.150349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:24.347 [2024-07-25 10:17:09.150370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:42624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.347 [2024-07-25 10:17:09.150387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:24.347 [2024-07-25 10:17:09.150404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:42632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.347 [2024-07-25 10:17:09.150420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:24.347 [2024-07-25 10:17:09.150446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:42640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.347 [2024-07-25 10:17:09.150477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:24.347 [2024-07-25 10:17:09.150494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:42648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.347 [2024-07-25 10:17:09.150508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:24.347 [2024-07-25 10:17:09.150524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:42656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.347 [2024-07-25 10:17:09.150538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:24.347 [2024-07-25 10:17:09.150554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:42664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.347 [2024-07-25 10:17:09.150567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:24.347 [2024-07-25 10:17:09.150583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:42672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.347 [2024-07-25 10:17:09.150597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:24.347 [2024-07-25 10:17:09.150613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:42680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.347 [2024-07-25 10:17:09.150627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:24.347 [2024-07-25 10:17:09.150642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:42688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.347 [2024-07-25 10:17:09.150656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:24.347 [2024-07-25 10:17:09.150671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:42696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.347 [2024-07-25 10:17:09.150685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:24.348 [2024-07-25 10:17:09.150700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:42704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.348 [2024-07-25 10:17:09.150730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:24.348 [2024-07-25 10:17:09.150747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:42712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:24.348 [2024-07-25 10:17:09.150773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:24.348 [2024-07-25 10:17:09.150791] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xb748a0 is same with the state(5) to be set 00:28:24.348 [2024-07-25 10:17:09.150813] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:28:24.348 [2024-07-25 10:17:09.150827] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:28:24.348 [2024-07-25 10:17:09.150840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:42720 len:8 PRP1 0x0 PRP2 0x0 00:28:24.348 [2024-07-25 10:17:09.150855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:24.348 [2024-07-25 10:17:09.150920] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xb748a0 was disconnected and freed. reset controller. 
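
The "(00/08)" that spdk_nvme_print_completion attaches to every aborted command above is the (SCT/SC) pair from the completion's 16-bit status field: Status Code Type 0x0 (generic command status) and Status Code 0x08, which the NVMe base specification defines as "Command Aborted due to SQ Deletion", the expected fate of I/O still queued when a submission queue is torn down during a controller reset. Below is a minimal stand-alone sketch of that decode (illustrative only, not SPDK source; the bit layout follows the NVMe completion queue entry definition):

    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        /* Upper 16 bits of completion dword 3: bit 0 = phase tag (p),
         * bits 8:1 = status code (sc), bits 11:9 = status code type (sct),
         * bit 14 = more (m), bit 15 = do not retry (dnr); bits 13:12
         * (command retry delay) are left zero here. */
        uint16_t status = (0x0u << 9) | (0x08u << 1); /* SCT 0x0, SC 0x08 */

        unsigned p   = status & 0x1;
        unsigned sc  = (status >> 1) & 0xff;
        unsigned sct = (status >> 9) & 0x7;
        unsigned m   = (status >> 14) & 0x1;
        unsigned dnr = (status >> 15) & 0x1;

        /* Prints "(00/08) p:0 m:0 dnr:0", matching the log's rendering. */
        printf("(%02x/%02x) p:%u m:%u dnr:%u\n", sct, sc, p, m, dnr);
        if (sct == 0x0 && sc == 0x08)
            printf("ABORTED - SQ DELETION\n");
        return 0;
    }
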
00:28:24.348 [2024-07-25 10:17:09.154700] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:24.348 [2024-07-25 10:17:09.154791] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x944540 (9): Bad file descriptor
00:28:24.348 [2024-07-25 10:17:09.155493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.348 [2024-07-25 10:17:09.155522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x944540 with addr=10.0.0.2, port=4420
00:28:24.348 [2024-07-25 10:17:09.155538] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x944540 is same with the state(5) to be set
00:28:24.348 [2024-07-25 10:17:09.155770] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x944540 (9): Bad file descriptor
00:28:24.348 [2024-07-25 10:17:09.156013] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:24.348 [2024-07-25 10:17:09.156035] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:24.348 [2024-07-25 10:17:09.156053] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:24.348 [2024-07-25 10:17:09.159621] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
[... the same resetting controller -> connect() failed, errno = 111 -> Resetting controller failed. cycle repeats 30 more times against tqpair=0x944540, addr=10.0.0.2, port=4420, reset attempts starting 10:17:09.168882 through 10:17:09.572514 at roughly 14 ms intervals ...]
00:28:24.610 [2024-07-25 10:17:09.586446] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:24.610 [2024-07-25 10:17:09.586973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:24.610 [2024-07-25 10:17:09.587007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x944540 with addr=10.0.0.2, port=4420
00:28:24.610 [2024-07-25 10:17:09.587026] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x944540 is same with the state(5) to be set
00:28:24.610 [2024-07-25 10:17:09.587266] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x944540 (9): Bad file descriptor
00:28:24.610 [2024-07-25 10:17:09.587522] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:24.610 [2024-07-25 10:17:09.587548] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:24.610 [2024-07-25 10:17:09.587565] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:24.610 [2024-07-25 10:17:09.591132] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:24.610 [2024-07-25 10:17:09.600400] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:24.610 [2024-07-25 10:17:09.600916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.610 [2024-07-25 10:17:09.600949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x944540 with addr=10.0.0.2, port=4420 00:28:24.610 [2024-07-25 10:17:09.600967] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x944540 is same with the state(5) to be set 00:28:24.610 [2024-07-25 10:17:09.601206] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x944540 (9): Bad file descriptor 00:28:24.610 [2024-07-25 10:17:09.601462] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:24.610 [2024-07-25 10:17:09.601490] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:24.610 [2024-07-25 10:17:09.601506] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:24.610 [2024-07-25 10:17:09.605073] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:24.610 [2024-07-25 10:17:09.614375] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:24.610 [2024-07-25 10:17:09.614912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.610 [2024-07-25 10:17:09.614946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x944540 with addr=10.0.0.2, port=4420 00:28:24.610 [2024-07-25 10:17:09.614965] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x944540 is same with the state(5) to be set 00:28:24.610 [2024-07-25 10:17:09.615203] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x944540 (9): Bad file descriptor 00:28:24.610 [2024-07-25 10:17:09.615457] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:24.610 [2024-07-25 10:17:09.615492] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:24.610 [2024-07-25 10:17:09.615509] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:24.610 [2024-07-25 10:17:09.619083] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:24.610 [2024-07-25 10:17:09.628364] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:24.610 [2024-07-25 10:17:09.628904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.610 [2024-07-25 10:17:09.628954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x944540 with addr=10.0.0.2, port=4420 00:28:24.610 [2024-07-25 10:17:09.628973] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x944540 is same with the state(5) to be set 00:28:24.610 [2024-07-25 10:17:09.629212] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x944540 (9): Bad file descriptor 00:28:24.610 [2024-07-25 10:17:09.629466] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:24.610 [2024-07-25 10:17:09.629494] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:24.610 [2024-07-25 10:17:09.629511] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:24.610 [2024-07-25 10:17:09.633081] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:24.610 [2024-07-25 10:17:09.642350] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:24.610 [2024-07-25 10:17:09.642827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.610 [2024-07-25 10:17:09.642860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x944540 with addr=10.0.0.2, port=4420 00:28:24.610 [2024-07-25 10:17:09.642878] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x944540 is same with the state(5) to be set 00:28:24.610 [2024-07-25 10:17:09.643118] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x944540 (9): Bad file descriptor 00:28:24.610 [2024-07-25 10:17:09.643361] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:24.611 [2024-07-25 10:17:09.643386] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:24.611 [2024-07-25 10:17:09.643402] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:24.611 [2024-07-25 10:17:09.646978] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:24.611 [2024-07-25 10:17:09.656248] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:24.611 [2024-07-25 10:17:09.656703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.611 [2024-07-25 10:17:09.656735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x944540 with addr=10.0.0.2, port=4420 00:28:24.611 [2024-07-25 10:17:09.656754] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x944540 is same with the state(5) to be set 00:28:24.611 [2024-07-25 10:17:09.656999] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x944540 (9): Bad file descriptor 00:28:24.611 [2024-07-25 10:17:09.657244] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:24.611 [2024-07-25 10:17:09.657268] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:24.611 [2024-07-25 10:17:09.657285] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:24.611 [2024-07-25 10:17:09.660865] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:24.611 [2024-07-25 10:17:09.670146] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:24.611 [2024-07-25 10:17:09.670587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.611 [2024-07-25 10:17:09.670619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x944540 with addr=10.0.0.2, port=4420 00:28:24.611 [2024-07-25 10:17:09.670638] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x944540 is same with the state(5) to be set 00:28:24.611 [2024-07-25 10:17:09.670876] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x944540 (9): Bad file descriptor 00:28:24.611 [2024-07-25 10:17:09.671119] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:24.611 [2024-07-25 10:17:09.671144] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:24.611 [2024-07-25 10:17:09.671161] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:24.611 [2024-07-25 10:17:09.674743] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:24.611 [2024-07-25 10:17:09.684028] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:24.611 [2024-07-25 10:17:09.684461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.611 [2024-07-25 10:17:09.684493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x944540 with addr=10.0.0.2, port=4420 00:28:24.611 [2024-07-25 10:17:09.684512] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x944540 is same with the state(5) to be set 00:28:24.611 [2024-07-25 10:17:09.684751] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x944540 (9): Bad file descriptor 00:28:24.611 [2024-07-25 10:17:09.684994] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:24.611 [2024-07-25 10:17:09.685019] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:24.611 [2024-07-25 10:17:09.685036] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:24.611 [2024-07-25 10:17:09.688614] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:24.611 [2024-07-25 10:17:09.697891] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:24.611 [2024-07-25 10:17:09.698447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.611 [2024-07-25 10:17:09.698510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x944540 with addr=10.0.0.2, port=4420 00:28:24.611 [2024-07-25 10:17:09.698528] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x944540 is same with the state(5) to be set 00:28:24.611 [2024-07-25 10:17:09.698766] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x944540 (9): Bad file descriptor 00:28:24.611 [2024-07-25 10:17:09.699008] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:24.611 [2024-07-25 10:17:09.699033] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:24.611 [2024-07-25 10:17:09.699055] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:24.611 [2024-07-25 10:17:09.702630] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:24.611 [2024-07-25 10:17:09.711912] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:24.611 [2024-07-25 10:17:09.712424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.611 [2024-07-25 10:17:09.712501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x944540 with addr=10.0.0.2, port=4420 00:28:24.611 [2024-07-25 10:17:09.712521] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x944540 is same with the state(5) to be set 00:28:24.611 [2024-07-25 10:17:09.712760] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x944540 (9): Bad file descriptor 00:28:24.611 [2024-07-25 10:17:09.713022] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:24.611 [2024-07-25 10:17:09.713047] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:24.611 [2024-07-25 10:17:09.713063] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:24.611 [2024-07-25 10:17:09.716637] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:24.611 [2024-07-25 10:17:09.725905] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:24.611 [2024-07-25 10:17:09.726478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.611 [2024-07-25 10:17:09.726510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x944540 with addr=10.0.0.2, port=4420 00:28:24.611 [2024-07-25 10:17:09.726528] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x944540 is same with the state(5) to be set 00:28:24.611 [2024-07-25 10:17:09.726779] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x944540 (9): Bad file descriptor 00:28:24.611 [2024-07-25 10:17:09.727021] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:24.611 [2024-07-25 10:17:09.727046] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:24.611 [2024-07-25 10:17:09.727063] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:24.611 [2024-07-25 10:17:09.730644] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:24.611 [2024-07-25 10:17:09.739913] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:24.611 [2024-07-25 10:17:09.740532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.611 [2024-07-25 10:17:09.740584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x944540 with addr=10.0.0.2, port=4420 00:28:24.611 [2024-07-25 10:17:09.740605] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x944540 is same with the state(5) to be set 00:28:24.611 [2024-07-25 10:17:09.740850] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x944540 (9): Bad file descriptor 00:28:24.611 [2024-07-25 10:17:09.741094] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:24.611 [2024-07-25 10:17:09.741118] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:24.611 [2024-07-25 10:17:09.741134] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:24.611 [2024-07-25 10:17:09.744716] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:24.611 [2024-07-25 10:17:09.753755] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:24.611 [2024-07-25 10:17:09.754235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.611 [2024-07-25 10:17:09.754294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x944540 with addr=10.0.0.2, port=4420 00:28:24.611 [2024-07-25 10:17:09.754314] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x944540 is same with the state(5) to be set 00:28:24.611 [2024-07-25 10:17:09.754564] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x944540 (9): Bad file descriptor 00:28:24.611 [2024-07-25 10:17:09.754809] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:24.611 [2024-07-25 10:17:09.754834] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:24.611 [2024-07-25 10:17:09.754850] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:24.611 [2024-07-25 10:17:09.758410] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:24.611 [2024-07-25 10:17:09.767678] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:24.611 [2024-07-25 10:17:09.768306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.611 [2024-07-25 10:17:09.768353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x944540 with addr=10.0.0.2, port=4420 00:28:24.611 [2024-07-25 10:17:09.768373] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x944540 is same with the state(5) to be set 00:28:24.611 [2024-07-25 10:17:09.768635] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x944540 (9): Bad file descriptor 00:28:24.611 [2024-07-25 10:17:09.768880] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:24.611 [2024-07-25 10:17:09.768906] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:24.611 [2024-07-25 10:17:09.768923] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:24.611 [2024-07-25 10:17:09.772500] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:24.870 [2024-07-25 10:17:09.781569] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:24.870 [2024-07-25 10:17:09.782200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.870 [2024-07-25 10:17:09.782246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x944540 with addr=10.0.0.2, port=4420 00:28:24.871 [2024-07-25 10:17:09.782267] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x944540 is same with the state(5) to be set 00:28:24.871 [2024-07-25 10:17:09.782528] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x944540 (9): Bad file descriptor 00:28:24.871 [2024-07-25 10:17:09.782772] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:24.871 [2024-07-25 10:17:09.782798] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:24.871 [2024-07-25 10:17:09.782815] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:24.871 [2024-07-25 10:17:09.786381] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:24.871 [2024-07-25 10:17:09.795449] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:24.871 [2024-07-25 10:17:09.796058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.871 [2024-07-25 10:17:09.796110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x944540 with addr=10.0.0.2, port=4420 00:28:24.871 [2024-07-25 10:17:09.796130] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x944540 is same with the state(5) to be set 00:28:24.871 [2024-07-25 10:17:09.796375] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x944540 (9): Bad file descriptor 00:28:24.871 [2024-07-25 10:17:09.796644] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:24.871 [2024-07-25 10:17:09.796671] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:24.871 [2024-07-25 10:17:09.796688] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:24.871 [2024-07-25 10:17:09.800259] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:24.871 [2024-07-25 10:17:09.809331] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:24.871 [2024-07-25 10:17:09.809836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.871 [2024-07-25 10:17:09.809890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x944540 with addr=10.0.0.2, port=4420 00:28:24.871 [2024-07-25 10:17:09.809909] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x944540 is same with the state(5) to be set 00:28:24.871 [2024-07-25 10:17:09.810149] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x944540 (9): Bad file descriptor 00:28:24.871 [2024-07-25 10:17:09.810392] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:24.871 [2024-07-25 10:17:09.810416] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:24.871 [2024-07-25 10:17:09.810444] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:24.871 [2024-07-25 10:17:09.814012] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:24.871 [2024-07-25 10:17:09.823278] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:24.871 [2024-07-25 10:17:09.823689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.871 [2024-07-25 10:17:09.823721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x944540 with addr=10.0.0.2, port=4420 00:28:24.871 [2024-07-25 10:17:09.823740] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x944540 is same with the state(5) to be set 00:28:24.871 [2024-07-25 10:17:09.823979] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x944540 (9): Bad file descriptor 00:28:24.871 [2024-07-25 10:17:09.824224] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:24.871 [2024-07-25 10:17:09.824249] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:24.871 [2024-07-25 10:17:09.824265] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:24.871 [2024-07-25 10:17:09.827848] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:24.871 [2024-07-25 10:17:09.837123] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:24.871 [2024-07-25 10:17:09.837563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.871 [2024-07-25 10:17:09.837596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x944540 with addr=10.0.0.2, port=4420 00:28:24.871 [2024-07-25 10:17:09.837614] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x944540 is same with the state(5) to be set 00:28:24.871 [2024-07-25 10:17:09.837853] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x944540 (9): Bad file descriptor 00:28:24.871 [2024-07-25 10:17:09.838096] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:24.871 [2024-07-25 10:17:09.838121] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:24.871 [2024-07-25 10:17:09.838138] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:24.871 [2024-07-25 10:17:09.841720] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:24.871 [2024-07-25 10:17:09.850995] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:24.871 [2024-07-25 10:17:09.851445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.871 [2024-07-25 10:17:09.851489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x944540 with addr=10.0.0.2, port=4420 00:28:24.871 [2024-07-25 10:17:09.851507] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x944540 is same with the state(5) to be set 00:28:24.871 [2024-07-25 10:17:09.851746] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x944540 (9): Bad file descriptor 00:28:24.871 [2024-07-25 10:17:09.851989] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:24.871 [2024-07-25 10:17:09.852013] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:24.871 [2024-07-25 10:17:09.852029] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:24.871 [2024-07-25 10:17:09.855607] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:24.871 [2024-07-25 10:17:09.864876] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:24.871 [2024-07-25 10:17:09.865368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.871 [2024-07-25 10:17:09.865418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x944540 with addr=10.0.0.2, port=4420 00:28:24.871 [2024-07-25 10:17:09.865449] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x944540 is same with the state(5) to be set 00:28:24.871 [2024-07-25 10:17:09.865691] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x944540 (9): Bad file descriptor 00:28:24.871 [2024-07-25 10:17:09.865935] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:24.871 [2024-07-25 10:17:09.865959] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:24.871 [2024-07-25 10:17:09.865976] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:24.871 [2024-07-25 10:17:09.869552] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:24.871 [2024-07-25 10:17:09.878827] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:24.871 [2024-07-25 10:17:09.879312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.871 [2024-07-25 10:17:09.879344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x944540 with addr=10.0.0.2, port=4420 00:28:24.871 [2024-07-25 10:17:09.879363] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x944540 is same with the state(5) to be set 00:28:24.871 [2024-07-25 10:17:09.879614] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x944540 (9): Bad file descriptor 00:28:24.871 [2024-07-25 10:17:09.879858] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:24.871 [2024-07-25 10:17:09.879884] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:24.871 [2024-07-25 10:17:09.879899] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:24.871 [2024-07-25 10:17:09.883483] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:24.871 [2024-07-25 10:17:09.892753] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:24.871 [2024-07-25 10:17:09.893251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.871 [2024-07-25 10:17:09.893306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x944540 with addr=10.0.0.2, port=4420 00:28:24.871 [2024-07-25 10:17:09.893331] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x944540 is same with the state(5) to be set 00:28:24.871 [2024-07-25 10:17:09.893582] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x944540 (9): Bad file descriptor 00:28:24.871 [2024-07-25 10:17:09.893826] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:24.871 [2024-07-25 10:17:09.893851] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:24.871 [2024-07-25 10:17:09.893867] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:24.871 [2024-07-25 10:17:09.897456] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:24.871 [2024-07-25 10:17:09.906759] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:24.871 [2024-07-25 10:17:09.907298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.871 [2024-07-25 10:17:09.907350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x944540 with addr=10.0.0.2, port=4420 00:28:24.871 [2024-07-25 10:17:09.907369] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x944540 is same with the state(5) to be set 00:28:24.871 [2024-07-25 10:17:09.907617] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x944540 (9): Bad file descriptor 00:28:24.871 [2024-07-25 10:17:09.907861] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:24.871 [2024-07-25 10:17:09.907888] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:24.871 [2024-07-25 10:17:09.907904] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:24.871 [2024-07-25 10:17:09.911497] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:24.871 [2024-07-25 10:17:09.920770] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:24.871 [2024-07-25 10:17:09.921265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.872 [2024-07-25 10:17:09.921317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x944540 with addr=10.0.0.2, port=4420 00:28:24.872 [2024-07-25 10:17:09.921335] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x944540 is same with the state(5) to be set 00:28:24.872 [2024-07-25 10:17:09.921585] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x944540 (9): Bad file descriptor 00:28:24.872 [2024-07-25 10:17:09.921829] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:24.872 [2024-07-25 10:17:09.921854] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:24.872 [2024-07-25 10:17:09.921871] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:24.872 [2024-07-25 10:17:09.925451] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:24.872 [2024-07-25 10:17:09.934735] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:24.872 [2024-07-25 10:17:09.935207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.872 [2024-07-25 10:17:09.935258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x944540 with addr=10.0.0.2, port=4420 00:28:24.872 [2024-07-25 10:17:09.935277] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x944540 is same with the state(5) to be set 00:28:24.872 [2024-07-25 10:17:09.935527] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x944540 (9): Bad file descriptor 00:28:24.872 [2024-07-25 10:17:09.935776] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:24.872 [2024-07-25 10:17:09.935808] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:24.872 [2024-07-25 10:17:09.935825] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:24.872 [2024-07-25 10:17:09.939394] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:24.872 [2024-07-25 10:17:09.948679] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:24.872 [2024-07-25 10:17:09.949146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.872 [2024-07-25 10:17:09.949196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x944540 with addr=10.0.0.2, port=4420 00:28:24.872 [2024-07-25 10:17:09.949214] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x944540 is same with the state(5) to be set 00:28:24.872 [2024-07-25 10:17:09.949464] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x944540 (9): Bad file descriptor 00:28:24.872 [2024-07-25 10:17:09.949717] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:24.872 [2024-07-25 10:17:09.949752] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:24.872 [2024-07-25 10:17:09.949768] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:24.872 [2024-07-25 10:17:09.953334] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:24.872 [2024-07-25 10:17:09.962615] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:24.872 [2024-07-25 10:17:09.963141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.872 [2024-07-25 10:17:09.963191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x944540 with addr=10.0.0.2, port=4420 00:28:24.872 [2024-07-25 10:17:09.963210] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x944540 is same with the state(5) to be set 00:28:24.872 [2024-07-25 10:17:09.963461] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x944540 (9): Bad file descriptor 00:28:24.872 [2024-07-25 10:17:09.963714] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:24.872 [2024-07-25 10:17:09.963739] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:24.872 [2024-07-25 10:17:09.963754] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:24.872 [2024-07-25 10:17:09.967328] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:24.872 [2024-07-25 10:17:09.976614] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:24.872 [2024-07-25 10:17:09.977070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.872 [2024-07-25 10:17:09.977133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x944540 with addr=10.0.0.2, port=4420 00:28:24.872 [2024-07-25 10:17:09.977151] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x944540 is same with the state(5) to be set 00:28:24.872 [2024-07-25 10:17:09.977389] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x944540 (9): Bad file descriptor 00:28:24.872 [2024-07-25 10:17:09.977641] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:24.872 [2024-07-25 10:17:09.977667] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:24.872 [2024-07-25 10:17:09.977690] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:24.872 [2024-07-25 10:17:09.981270] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:24.872 [2024-07-25 10:17:09.990596] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:24.872 [2024-07-25 10:17:09.991092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.872 [2024-07-25 10:17:09.991142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x944540 with addr=10.0.0.2, port=4420 00:28:24.872 [2024-07-25 10:17:09.991160] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x944540 is same with the state(5) to be set 00:28:24.872 [2024-07-25 10:17:09.991398] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x944540 (9): Bad file descriptor 00:28:24.872 [2024-07-25 10:17:09.991653] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:24.872 [2024-07-25 10:17:09.991679] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:24.872 [2024-07-25 10:17:09.991695] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:24.872 [2024-07-25 10:17:09.995268] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:24.872 [2024-07-25 10:17:10.004546] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:24.872 [2024-07-25 10:17:10.004956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.872 [2024-07-25 10:17:10.005015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x944540 with addr=10.0.0.2, port=4420 00:28:24.872 [2024-07-25 10:17:10.005040] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x944540 is same with the state(5) to be set 00:28:24.872 [2024-07-25 10:17:10.005334] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x944540 (9): Bad file descriptor 00:28:24.872 [2024-07-25 10:17:10.005645] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:24.872 [2024-07-25 10:17:10.005677] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:24.872 [2024-07-25 10:17:10.005698] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:24.872 [2024-07-25 10:17:10.009903] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:24.872 [2024-07-25 10:17:10.018560] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:24.872 [2024-07-25 10:17:10.018993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.872 [2024-07-25 10:17:10.019026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x944540 with addr=10.0.0.2, port=4420 00:28:24.872 [2024-07-25 10:17:10.019045] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x944540 is same with the state(5) to be set 00:28:24.872 [2024-07-25 10:17:10.019284] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x944540 (9): Bad file descriptor 00:28:24.872 [2024-07-25 10:17:10.019541] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:24.872 [2024-07-25 10:17:10.019566] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:24.872 [2024-07-25 10:17:10.019583] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:24.872 [2024-07-25 10:17:10.023145] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:24.872 [2024-07-25 10:17:10.032394] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:24.872 [2024-07-25 10:17:10.032833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:24.872 [2024-07-25 10:17:10.032866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x944540 with addr=10.0.0.2, port=4420 00:28:24.872 [2024-07-25 10:17:10.032884] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x944540 is same with the state(5) to be set 00:28:24.872 [2024-07-25 10:17:10.033128] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x944540 (9): Bad file descriptor 00:28:24.872 [2024-07-25 10:17:10.033371] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:24.872 [2024-07-25 10:17:10.033395] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:24.872 [2024-07-25 10:17:10.033411] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:25.131 [2024-07-25 10:17:10.036979] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:25.131 [2024-07-25 10:17:10.046224] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:25.131 [2024-07-25 10:17:10.046681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.131 [2024-07-25 10:17:10.046714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x944540 with addr=10.0.0.2, port=4420 00:28:25.131 [2024-07-25 10:17:10.046732] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x944540 is same with the state(5) to be set 00:28:25.131 [2024-07-25 10:17:10.046970] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x944540 (9): Bad file descriptor 00:28:25.131 [2024-07-25 10:17:10.047213] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:25.131 [2024-07-25 10:17:10.047237] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:25.131 [2024-07-25 10:17:10.047253] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:25.131 [2024-07-25 10:17:10.050836] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:25.131 [2024-07-25 10:17:10.060089] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:25.131 [2024-07-25 10:17:10.060548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.131 [2024-07-25 10:17:10.060580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x944540 with addr=10.0.0.2, port=4420 00:28:25.131 [2024-07-25 10:17:10.060598] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x944540 is same with the state(5) to be set 00:28:25.131 [2024-07-25 10:17:10.060837] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x944540 (9): Bad file descriptor 00:28:25.131 [2024-07-25 10:17:10.061080] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:25.131 [2024-07-25 10:17:10.061104] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:25.131 [2024-07-25 10:17:10.061120] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:25.131 [2024-07-25 10:17:10.064700] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:25.131 [2024-07-25 10:17:10.073953] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:25.131 [2024-07-25 10:17:10.074526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.131 [2024-07-25 10:17:10.074572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x944540 with addr=10.0.0.2, port=4420 00:28:25.131 [2024-07-25 10:17:10.074593] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x944540 is same with the state(5) to be set 00:28:25.131 [2024-07-25 10:17:10.074847] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x944540 (9): Bad file descriptor 00:28:25.131 [2024-07-25 10:17:10.075100] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:25.131 [2024-07-25 10:17:10.075128] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:25.131 [2024-07-25 10:17:10.075151] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:25.131 [2024-07-25 10:17:10.078730] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:25.131 [2024-07-25 10:17:10.087787] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:25.131 [2024-07-25 10:17:10.088327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.131 [2024-07-25 10:17:10.088361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x944540 with addr=10.0.0.2, port=4420 00:28:25.131 [2024-07-25 10:17:10.088380] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x944540 is same with the state(5) to be set 00:28:25.131 [2024-07-25 10:17:10.088634] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x944540 (9): Bad file descriptor 00:28:25.131 [2024-07-25 10:17:10.088886] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:25.131 [2024-07-25 10:17:10.088911] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:25.131 [2024-07-25 10:17:10.088927] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:25.131 [2024-07-25 10:17:10.092501] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:25.131 [2024-07-25 10:17:10.101759] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:25.131 [2024-07-25 10:17:10.102265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.131 [2024-07-25 10:17:10.102306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x944540 with addr=10.0.0.2, port=4420 00:28:25.131 [2024-07-25 10:17:10.102325] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x944540 is same with the state(5) to be set 00:28:25.131 [2024-07-25 10:17:10.102576] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x944540 (9): Bad file descriptor 00:28:25.131 [2024-07-25 10:17:10.102829] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:25.132 [2024-07-25 10:17:10.102861] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:25.132 [2024-07-25 10:17:10.102877] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:25.132 [2024-07-25 10:17:10.106450] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:25.132 [2024-07-25 10:17:10.115722] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:25.132 [2024-07-25 10:17:10.116182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.132 [2024-07-25 10:17:10.116214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x944540 with addr=10.0.0.2, port=4420 00:28:25.132 [2024-07-25 10:17:10.116233] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x944540 is same with the state(5) to be set 00:28:25.132 [2024-07-25 10:17:10.116485] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x944540 (9): Bad file descriptor 00:28:25.132 [2024-07-25 10:17:10.116730] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:25.132 [2024-07-25 10:17:10.116754] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:25.132 [2024-07-25 10:17:10.116770] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:25.132 [2024-07-25 10:17:10.120335] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:25.132 [2024-07-25 10:17:10.129609] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:25.132 [2024-07-25 10:17:10.130114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.132 [2024-07-25 10:17:10.130162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x944540 with addr=10.0.0.2, port=4420 00:28:25.132 [2024-07-25 10:17:10.130182] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x944540 is same with the state(5) to be set 00:28:25.132 [2024-07-25 10:17:10.130420] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x944540 (9): Bad file descriptor 00:28:25.132 [2024-07-25 10:17:10.130677] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:25.132 [2024-07-25 10:17:10.130702] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:25.132 [2024-07-25 10:17:10.130717] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:25.132 [2024-07-25 10:17:10.134281] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:25.132 [2024-07-25 10:17:10.143541] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:25.132 [2024-07-25 10:17:10.144165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.132 [2024-07-25 10:17:10.144211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x944540 with addr=10.0.0.2, port=4420 00:28:25.132 [2024-07-25 10:17:10.144231] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x944540 is same with the state(5) to be set 00:28:25.132 [2024-07-25 10:17:10.144493] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x944540 (9): Bad file descriptor 00:28:25.132 [2024-07-25 10:17:10.144738] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:25.132 [2024-07-25 10:17:10.144763] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:25.132 [2024-07-25 10:17:10.144780] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:25.132 [2024-07-25 10:17:10.148349] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:25.132 [2024-07-25 10:17:10.157401] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:25.132 [2024-07-25 10:17:10.158052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.132 [2024-07-25 10:17:10.158098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x944540 with addr=10.0.0.2, port=4420 00:28:25.132 [2024-07-25 10:17:10.158118] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x944540 is same with the state(5) to be set 00:28:25.132 [2024-07-25 10:17:10.158363] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x944540 (9): Bad file descriptor 00:28:25.132 [2024-07-25 10:17:10.158622] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:25.132 [2024-07-25 10:17:10.158647] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:25.132 [2024-07-25 10:17:10.158664] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:25.132 [2024-07-25 10:17:10.162234] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:25.132 [2024-07-25 10:17:10.171286] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:25.132 [2024-07-25 10:17:10.171787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.132 [2024-07-25 10:17:10.171821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x944540 with addr=10.0.0.2, port=4420 00:28:25.132 [2024-07-25 10:17:10.171840] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x944540 is same with the state(5) to be set 00:28:25.132 [2024-07-25 10:17:10.172079] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x944540 (9): Bad file descriptor 00:28:25.132 [2024-07-25 10:17:10.172332] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:25.132 [2024-07-25 10:17:10.172359] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:25.132 [2024-07-25 10:17:10.172374] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:25.132 [2024-07-25 10:17:10.175953] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:25.132 [2024-07-25 10:17:10.185184] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:25.132 [2024-07-25 10:17:10.185725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.132 [2024-07-25 10:17:10.185761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x944540 with addr=10.0.0.2, port=4420 00:28:25.132 [2024-07-25 10:17:10.185781] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x944540 is same with the state(5) to be set 00:28:25.132 [2024-07-25 10:17:10.186021] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x944540 (9): Bad file descriptor 00:28:25.132 [2024-07-25 10:17:10.186265] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:25.132 [2024-07-25 10:17:10.186290] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:25.132 [2024-07-25 10:17:10.186306] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:25.132 [2024-07-25 10:17:10.189881] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:25.132 [2024-07-25 10:17:10.199141] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:25.132 [2024-07-25 10:17:10.199567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.132 [2024-07-25 10:17:10.199600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x944540 with addr=10.0.0.2, port=4420 00:28:25.132 [2024-07-25 10:17:10.199619] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x944540 is same with the state(5) to be set 00:28:25.132 [2024-07-25 10:17:10.199858] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x944540 (9): Bad file descriptor 00:28:25.132 [2024-07-25 10:17:10.200102] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:25.132 [2024-07-25 10:17:10.200126] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:25.132 [2024-07-25 10:17:10.200142] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:25.132 [2024-07-25 10:17:10.203721] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:25.132 [2024-07-25 10:17:10.213003] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:25.132 [2024-07-25 10:17:10.213550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.132 [2024-07-25 10:17:10.213582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x944540 with addr=10.0.0.2, port=4420 00:28:25.132 [2024-07-25 10:17:10.213601] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x944540 is same with the state(5) to be set 00:28:25.132 [2024-07-25 10:17:10.213840] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x944540 (9): Bad file descriptor 00:28:25.132 [2024-07-25 10:17:10.214084] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:25.132 [2024-07-25 10:17:10.214108] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:25.132 [2024-07-25 10:17:10.214124] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:25.132 [2024-07-25 10:17:10.217713] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:25.132 [2024-07-25 10:17:10.226961] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:25.132 [2024-07-25 10:17:10.227474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.132 [2024-07-25 10:17:10.227516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x944540 with addr=10.0.0.2, port=4420 00:28:25.132 [2024-07-25 10:17:10.227536] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x944540 is same with the state(5) to be set 00:28:25.132 [2024-07-25 10:17:10.227775] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x944540 (9): Bad file descriptor 00:28:25.132 [2024-07-25 10:17:10.228019] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:25.132 [2024-07-25 10:17:10.228043] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:25.132 [2024-07-25 10:17:10.228058] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:25.132 [2024-07-25 10:17:10.231636] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:25.132 [2024-07-25 10:17:10.240887] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:25.132 [2024-07-25 10:17:10.241363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.132 [2024-07-25 10:17:10.241393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x944540 with addr=10.0.0.2, port=4420 00:28:25.132 [2024-07-25 10:17:10.241411] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x944540 is same with the state(5) to be set 00:28:25.132 [2024-07-25 10:17:10.241658] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x944540 (9): Bad file descriptor 00:28:25.132 [2024-07-25 10:17:10.241902] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:25.132 [2024-07-25 10:17:10.241926] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:25.133 [2024-07-25 10:17:10.241942] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:25.133 [2024-07-25 10:17:10.245551] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:25.133 [2024-07-25 10:17:10.254814] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:25.133 [2024-07-25 10:17:10.255333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.133 [2024-07-25 10:17:10.255365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x944540 with addr=10.0.0.2, port=4420 00:28:25.133 [2024-07-25 10:17:10.255383] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x944540 is same with the state(5) to be set 00:28:25.133 [2024-07-25 10:17:10.255632] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x944540 (9): Bad file descriptor 00:28:25.133 [2024-07-25 10:17:10.255876] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:25.133 [2024-07-25 10:17:10.255901] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:25.133 [2024-07-25 10:17:10.255917] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:25.133 [2024-07-25 10:17:10.259491] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:25.133 [2024-07-25 10:17:10.268746] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:25.133 [2024-07-25 10:17:10.269256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.133 [2024-07-25 10:17:10.269289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x944540 with addr=10.0.0.2, port=4420 00:28:25.133 [2024-07-25 10:17:10.269313] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x944540 is same with the state(5) to be set 00:28:25.133 [2024-07-25 10:17:10.269564] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x944540 (9): Bad file descriptor 00:28:25.133 [2024-07-25 10:17:10.269808] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:25.133 [2024-07-25 10:17:10.269834] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:25.133 [2024-07-25 10:17:10.269849] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:25.133 [2024-07-25 10:17:10.273412] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:25.133 [2024-07-25 10:17:10.282673] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:25.133 [2024-07-25 10:17:10.283155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.133 [2024-07-25 10:17:10.283197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x944540 with addr=10.0.0.2, port=4420 00:28:25.133 [2024-07-25 10:17:10.283215] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x944540 is same with the state(5) to be set 00:28:25.133 [2024-07-25 10:17:10.283465] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x944540 (9): Bad file descriptor 00:28:25.133 [2024-07-25 10:17:10.283709] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:25.133 [2024-07-25 10:17:10.283734] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:25.133 [2024-07-25 10:17:10.283751] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:25.133 [2024-07-25 10:17:10.287313] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:25.133 [2024-07-25 10:17:10.296572] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:25.391 [2024-07-25 10:17:10.297144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.391 [2024-07-25 10:17:10.297190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x944540 with addr=10.0.0.2, port=4420 00:28:25.391 [2024-07-25 10:17:10.297211] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x944540 is same with the state(5) to be set 00:28:25.391 [2024-07-25 10:17:10.297471] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x944540 (9): Bad file descriptor 00:28:25.391 [2024-07-25 10:17:10.297717] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:25.391 [2024-07-25 10:17:10.297741] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:25.391 [2024-07-25 10:17:10.297757] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:25.391 [2024-07-25 10:17:10.301323] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:25.391 [2024-07-25 10:17:10.310604] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:25.391 [2024-07-25 10:17:10.311192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.391 [2024-07-25 10:17:10.311238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x944540 with addr=10.0.0.2, port=4420 00:28:25.391 [2024-07-25 10:17:10.311258] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x944540 is same with the state(5) to be set 00:28:25.391 [2024-07-25 10:17:10.311518] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x944540 (9): Bad file descriptor 00:28:25.391 [2024-07-25 10:17:10.311764] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:25.391 [2024-07-25 10:17:10.311795] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:25.391 [2024-07-25 10:17:10.311812] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:25.391 [2024-07-25 10:17:10.315379] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:25.391 [2024-07-25 10:17:10.324640] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:25.391 [2024-07-25 10:17:10.325130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.391 [2024-07-25 10:17:10.325173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x944540 with addr=10.0.0.2, port=4420 00:28:25.391 [2024-07-25 10:17:10.325191] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x944540 is same with the state(5) to be set 00:28:25.391 [2024-07-25 10:17:10.325440] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x944540 (9): Bad file descriptor 00:28:25.391 [2024-07-25 10:17:10.325685] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:25.391 [2024-07-25 10:17:10.325709] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:25.391 [2024-07-25 10:17:10.325725] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:25.391 [2024-07-25 10:17:10.329289] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:25.391 [2024-07-25 10:17:10.338542] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:25.391 [2024-07-25 10:17:10.339140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.391 [2024-07-25 10:17:10.339185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x944540 with addr=10.0.0.2, port=4420 00:28:25.391 [2024-07-25 10:17:10.339206] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x944540 is same with the state(5) to be set 00:28:25.391 [2024-07-25 10:17:10.339465] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x944540 (9): Bad file descriptor 00:28:25.391 [2024-07-25 10:17:10.339710] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:25.391 [2024-07-25 10:17:10.339735] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:25.391 [2024-07-25 10:17:10.339752] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:25.391 [2024-07-25 10:17:10.343318] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:25.391 [2024-07-25 10:17:10.352577] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:25.391 [2024-07-25 10:17:10.353073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.391 [2024-07-25 10:17:10.353106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x944540 with addr=10.0.0.2, port=4420 00:28:25.391 [2024-07-25 10:17:10.353125] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x944540 is same with the state(5) to be set 00:28:25.391 [2024-07-25 10:17:10.353364] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x944540 (9): Bad file descriptor 00:28:25.391 [2024-07-25 10:17:10.353630] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:25.391 [2024-07-25 10:17:10.353655] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:25.391 [2024-07-25 10:17:10.353671] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:25.391 [2024-07-25 10:17:10.357235] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:25.391 [2024-07-25 10:17:10.366500] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:25.391 [2024-07-25 10:17:10.367016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.391 [2024-07-25 10:17:10.367049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x944540 with addr=10.0.0.2, port=4420 00:28:25.391 [2024-07-25 10:17:10.367067] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x944540 is same with the state(5) to be set 00:28:25.391 [2024-07-25 10:17:10.367307] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x944540 (9): Bad file descriptor 00:28:25.391 [2024-07-25 10:17:10.367568] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:25.391 [2024-07-25 10:17:10.367593] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:25.392 [2024-07-25 10:17:10.367609] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:25.392 [2024-07-25 10:17:10.371172] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:25.392 [2024-07-25 10:17:10.380464] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:25.392 [2024-07-25 10:17:10.381058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.392 [2024-07-25 10:17:10.381104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x944540 with addr=10.0.0.2, port=4420 00:28:25.392 [2024-07-25 10:17:10.381125] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x944540 is same with the state(5) to be set 00:28:25.392 [2024-07-25 10:17:10.381371] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x944540 (9): Bad file descriptor 00:28:25.392 [2024-07-25 10:17:10.381635] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:25.392 [2024-07-25 10:17:10.381661] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:25.392 [2024-07-25 10:17:10.381676] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:25.392 [2024-07-25 10:17:10.385242] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:25.392 [2024-07-25 10:17:10.394503] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:25.392 [2024-07-25 10:17:10.395006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.392 [2024-07-25 10:17:10.395039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x944540 with addr=10.0.0.2, port=4420 00:28:25.392 [2024-07-25 10:17:10.395058] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x944540 is same with the state(5) to be set 00:28:25.392 [2024-07-25 10:17:10.395297] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x944540 (9): Bad file descriptor 00:28:25.392 [2024-07-25 10:17:10.395558] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:25.392 [2024-07-25 10:17:10.395584] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:25.392 [2024-07-25 10:17:10.395600] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:25.392 [2024-07-25 10:17:10.399160] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:25.392 [2024-07-25 10:17:10.408402] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:25.392 [2024-07-25 10:17:10.408827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.392 [2024-07-25 10:17:10.408859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x944540 with addr=10.0.0.2, port=4420 00:28:25.392 [2024-07-25 10:17:10.408877] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x944540 is same with the state(5) to be set 00:28:25.392 [2024-07-25 10:17:10.409122] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x944540 (9): Bad file descriptor 00:28:25.392 [2024-07-25 10:17:10.409365] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:25.392 [2024-07-25 10:17:10.409389] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:25.392 [2024-07-25 10:17:10.409407] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:25.392 [2024-07-25 10:17:10.412993] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:25.392 [2024-07-25 10:17:10.422243] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:25.392 [2024-07-25 10:17:10.422736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.392 [2024-07-25 10:17:10.422778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x944540 with addr=10.0.0.2, port=4420 00:28:25.392 [2024-07-25 10:17:10.422797] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x944540 is same with the state(5) to be set 00:28:25.392 [2024-07-25 10:17:10.423035] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x944540 (9): Bad file descriptor 00:28:25.392 [2024-07-25 10:17:10.423279] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:25.392 [2024-07-25 10:17:10.423303] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:25.392 [2024-07-25 10:17:10.423319] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:25.392 [2024-07-25 10:17:10.426910] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:25.392 [2024-07-25 10:17:10.436160] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:25.392 [2024-07-25 10:17:10.436753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.392 [2024-07-25 10:17:10.436786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x944540 with addr=10.0.0.2, port=4420 00:28:25.392 [2024-07-25 10:17:10.436804] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x944540 is same with the state(5) to be set 00:28:25.392 [2024-07-25 10:17:10.437044] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x944540 (9): Bad file descriptor 00:28:25.392 [2024-07-25 10:17:10.437287] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:25.392 [2024-07-25 10:17:10.437311] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:25.392 [2024-07-25 10:17:10.437328] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:25.392 [2024-07-25 10:17:10.440900] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:25.392 [2024-07-25 10:17:10.450149] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:25.392 [2024-07-25 10:17:10.450657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.392 [2024-07-25 10:17:10.450690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x944540 with addr=10.0.0.2, port=4420 00:28:25.392 [2024-07-25 10:17:10.450708] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x944540 is same with the state(5) to be set 00:28:25.392 [2024-07-25 10:17:10.450946] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x944540 (9): Bad file descriptor 00:28:25.392 [2024-07-25 10:17:10.451190] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:25.392 [2024-07-25 10:17:10.451215] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:25.392 [2024-07-25 10:17:10.451237] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:25.392 [2024-07-25 10:17:10.454812] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:25.392 [2024-07-25 10:17:10.464061] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:25.392 [2024-07-25 10:17:10.464565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.392 [2024-07-25 10:17:10.464598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x944540 with addr=10.0.0.2, port=4420 00:28:25.392 [2024-07-25 10:17:10.464616] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x944540 is same with the state(5) to be set 00:28:25.392 [2024-07-25 10:17:10.464855] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x944540 (9): Bad file descriptor 00:28:25.392 [2024-07-25 10:17:10.465099] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:25.392 [2024-07-25 10:17:10.465123] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:25.392 [2024-07-25 10:17:10.465139] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:25.392 [2024-07-25 10:17:10.468712] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:25.392 [2024-07-25 10:17:10.477961] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:25.392 [2024-07-25 10:17:10.478437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.392 [2024-07-25 10:17:10.478470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x944540 with addr=10.0.0.2, port=4420 00:28:25.392 [2024-07-25 10:17:10.478489] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x944540 is same with the state(5) to be set 00:28:25.392 [2024-07-25 10:17:10.478728] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x944540 (9): Bad file descriptor 00:28:25.392 [2024-07-25 10:17:10.478971] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:25.392 [2024-07-25 10:17:10.478995] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:25.392 [2024-07-25 10:17:10.479011] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:25.392 [2024-07-25 10:17:10.482590] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:25.392 [2024-07-25 10:17:10.491847] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:25.392 [2024-07-25 10:17:10.492366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.392 [2024-07-25 10:17:10.492398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x944540 with addr=10.0.0.2, port=4420 00:28:25.392 [2024-07-25 10:17:10.492416] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x944540 is same with the state(5) to be set 00:28:25.392 [2024-07-25 10:17:10.492663] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x944540 (9): Bad file descriptor 00:28:25.392 [2024-07-25 10:17:10.492915] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:25.392 [2024-07-25 10:17:10.492939] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:25.392 [2024-07-25 10:17:10.492955] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:25.392 [2024-07-25 10:17:10.496530] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:25.392 [2024-07-25 10:17:10.505781] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:25.392 [2024-07-25 10:17:10.506200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.392 [2024-07-25 10:17:10.506233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x944540 with addr=10.0.0.2, port=4420 00:28:25.392 [2024-07-25 10:17:10.506252] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x944540 is same with the state(5) to be set 00:28:25.392 [2024-07-25 10:17:10.506501] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x944540 (9): Bad file descriptor 00:28:25.392 [2024-07-25 10:17:10.506745] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:25.392 [2024-07-25 10:17:10.506770] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:25.392 [2024-07-25 10:17:10.506786] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:25.393 [2024-07-25 10:17:10.510361] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:25.393 [2024-07-25 10:17:10.519816] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:25.393 [2024-07-25 10:17:10.520315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.393 [2024-07-25 10:17:10.520346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x944540 with addr=10.0.0.2, port=4420 00:28:25.393 [2024-07-25 10:17:10.520364] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x944540 is same with the state(5) to be set 00:28:25.393 [2024-07-25 10:17:10.520613] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x944540 (9): Bad file descriptor 00:28:25.393 [2024-07-25 10:17:10.520858] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:25.393 [2024-07-25 10:17:10.520883] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:25.393 [2024-07-25 10:17:10.520898] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:25.393 [2024-07-25 10:17:10.524463] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:25.393 [2024-07-25 10:17:10.533712] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:25.393 [2024-07-25 10:17:10.534132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.393 [2024-07-25 10:17:10.534164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x944540 with addr=10.0.0.2, port=4420 00:28:25.393 [2024-07-25 10:17:10.534182] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x944540 is same with the state(5) to be set 00:28:25.393 [2024-07-25 10:17:10.534420] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x944540 (9): Bad file descriptor 00:28:25.393 [2024-07-25 10:17:10.534674] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:25.393 [2024-07-25 10:17:10.534698] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:25.393 [2024-07-25 10:17:10.534715] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:25.393 [2024-07-25 10:17:10.538277] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:25.393 [2024-07-25 10:17:10.547738] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:25.393 [2024-07-25 10:17:10.548313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.393 [2024-07-25 10:17:10.548358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x944540 with addr=10.0.0.2, port=4420 00:28:25.393 [2024-07-25 10:17:10.548380] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x944540 is same with the state(5) to be set 00:28:25.393 [2024-07-25 10:17:10.548638] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x944540 (9): Bad file descriptor 00:28:25.393 [2024-07-25 10:17:10.548890] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:25.393 [2024-07-25 10:17:10.548915] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:25.393 [2024-07-25 10:17:10.548931] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:25.393 [2024-07-25 10:17:10.552503] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:25.652 [2024-07-25 10:17:10.561755] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:25.652 [2024-07-25 10:17:10.562256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.652 [2024-07-25 10:17:10.562290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x944540 with addr=10.0.0.2, port=4420 00:28:25.652 [2024-07-25 10:17:10.562309] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x944540 is same with the state(5) to be set 00:28:25.652 [2024-07-25 10:17:10.562561] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x944540 (9): Bad file descriptor 00:28:25.652 [2024-07-25 10:17:10.562806] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:25.652 [2024-07-25 10:17:10.562830] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:25.652 [2024-07-25 10:17:10.562846] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:25.652 [2024-07-25 10:17:10.566406] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:25.652 [2024-07-25 10:17:10.575656] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:25.652 [2024-07-25 10:17:10.576174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.652 [2024-07-25 10:17:10.576206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x944540 with addr=10.0.0.2, port=4420 00:28:25.652 [2024-07-25 10:17:10.576224] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x944540 is same with the state(5) to be set 00:28:25.652 [2024-07-25 10:17:10.576473] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x944540 (9): Bad file descriptor 00:28:25.652 [2024-07-25 10:17:10.576717] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:25.652 [2024-07-25 10:17:10.576742] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:25.652 [2024-07-25 10:17:10.576757] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:25.652 [2024-07-25 10:17:10.580318] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:25.652 [2024-07-25 10:17:10.589583] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:25.652 [2024-07-25 10:17:10.590116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.652 [2024-07-25 10:17:10.590149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x944540 with addr=10.0.0.2, port=4420 00:28:25.652 [2024-07-25 10:17:10.590166] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x944540 is same with the state(5) to be set 00:28:25.652 [2024-07-25 10:17:10.590406] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x944540 (9): Bad file descriptor 00:28:25.652 [2024-07-25 10:17:10.590662] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:25.652 [2024-07-25 10:17:10.590687] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:25.652 [2024-07-25 10:17:10.590703] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:25.652 [2024-07-25 10:17:10.594271] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:25.652 [2024-07-25 10:17:10.603529] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:25.652 [2024-07-25 10:17:10.604026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.652 [2024-07-25 10:17:10.604058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x944540 with addr=10.0.0.2, port=4420 00:28:25.652 [2024-07-25 10:17:10.604076] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x944540 is same with the state(5) to be set 00:28:25.652 [2024-07-25 10:17:10.604315] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x944540 (9): Bad file descriptor 00:28:25.652 [2024-07-25 10:17:10.604569] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:25.652 [2024-07-25 10:17:10.604594] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:25.652 [2024-07-25 10:17:10.604610] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:25.652 [2024-07-25 10:17:10.608169] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:25.652 [2024-07-25 10:17:10.617438] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:25.652 [2024-07-25 10:17:10.618009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.652 [2024-07-25 10:17:10.618055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x944540 with addr=10.0.0.2, port=4420 00:28:25.652 [2024-07-25 10:17:10.618076] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x944540 is same with the state(5) to be set 00:28:25.652 [2024-07-25 10:17:10.618322] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x944540 (9): Bad file descriptor 00:28:25.652 [2024-07-25 10:17:10.618596] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:25.652 [2024-07-25 10:17:10.618622] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:25.652 [2024-07-25 10:17:10.618639] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:25.652 [2024-07-25 10:17:10.622205] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:25.652 [2024-07-25 10:17:10.631461] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:25.652 [2024-07-25 10:17:10.632061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.652 [2024-07-25 10:17:10.632107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x944540 with addr=10.0.0.2, port=4420 00:28:25.652 [2024-07-25 10:17:10.632128] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x944540 is same with the state(5) to be set 00:28:25.652 [2024-07-25 10:17:10.632373] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x944540 (9): Bad file descriptor 00:28:25.652 [2024-07-25 10:17:10.632631] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:25.652 [2024-07-25 10:17:10.632656] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:25.652 [2024-07-25 10:17:10.632673] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:25.652 [2024-07-25 10:17:10.636239] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:25.652 [2024-07-25 10:17:10.645496] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:25.652 [2024-07-25 10:17:10.646046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.652 [2024-07-25 10:17:10.646082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x944540 with addr=10.0.0.2, port=4420 00:28:25.652 [2024-07-25 10:17:10.646107] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x944540 is same with the state(5) to be set 00:28:25.653 [2024-07-25 10:17:10.646348] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x944540 (9): Bad file descriptor 00:28:25.653 [2024-07-25 10:17:10.646603] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:25.653 [2024-07-25 10:17:10.646628] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:25.653 [2024-07-25 10:17:10.646645] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:25.653 [2024-07-25 10:17:10.650207] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:25.653 [2024-07-25 10:17:10.659460] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:25.653 [2024-07-25 10:17:10.660001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.653 [2024-07-25 10:17:10.660033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x944540 with addr=10.0.0.2, port=4420 00:28:25.653 [2024-07-25 10:17:10.660051] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x944540 is same with the state(5) to be set 00:28:25.653 [2024-07-25 10:17:10.660290] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x944540 (9): Bad file descriptor 00:28:25.653 [2024-07-25 10:17:10.660543] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:25.653 [2024-07-25 10:17:10.660568] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:25.653 [2024-07-25 10:17:10.660584] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:25.653 [2024-07-25 10:17:10.664144] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:25.653 [2024-07-25 10:17:10.673389] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:25.653 [2024-07-25 10:17:10.673968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.653 [2024-07-25 10:17:10.674001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x944540 with addr=10.0.0.2, port=4420 00:28:25.653 [2024-07-25 10:17:10.674019] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x944540 is same with the state(5) to be set 00:28:25.653 [2024-07-25 10:17:10.674258] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x944540 (9): Bad file descriptor 00:28:25.653 [2024-07-25 10:17:10.674512] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:25.653 [2024-07-25 10:17:10.674537] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:25.653 [2024-07-25 10:17:10.674552] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:25.653 [2024-07-25 10:17:10.678113] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:25.653 [2024-07-25 10:17:10.687369] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:25.653 [2024-07-25 10:17:10.687866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.653 [2024-07-25 10:17:10.687899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x944540 with addr=10.0.0.2, port=4420 00:28:25.653 [2024-07-25 10:17:10.687917] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x944540 is same with the state(5) to be set 00:28:25.653 [2024-07-25 10:17:10.688155] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x944540 (9): Bad file descriptor 00:28:25.653 [2024-07-25 10:17:10.688405] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:25.653 [2024-07-25 10:17:10.688445] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:25.653 [2024-07-25 10:17:10.688463] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:25.653 [2024-07-25 10:17:10.692027] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:25.653 [2024-07-25 10:17:10.701315] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:25.653 [2024-07-25 10:17:10.701722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.653 [2024-07-25 10:17:10.701755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x944540 with addr=10.0.0.2, port=4420 00:28:25.653 [2024-07-25 10:17:10.701774] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x944540 is same with the state(5) to be set 00:28:25.653 [2024-07-25 10:17:10.702013] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x944540 (9): Bad file descriptor 00:28:25.653 [2024-07-25 10:17:10.702256] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:25.653 [2024-07-25 10:17:10.702282] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:25.653 [2024-07-25 10:17:10.702301] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:25.653 [2024-07-25 10:17:10.705880] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:25.653 [2024-07-25 10:17:10.715352] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:25.653 [2024-07-25 10:17:10.715858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.653 [2024-07-25 10:17:10.715897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x944540 with addr=10.0.0.2, port=4420 00:28:25.653 [2024-07-25 10:17:10.715916] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x944540 is same with the state(5) to be set 00:28:25.653 [2024-07-25 10:17:10.716154] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x944540 (9): Bad file descriptor 00:28:25.653 [2024-07-25 10:17:10.716396] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:25.653 [2024-07-25 10:17:10.716421] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:25.653 [2024-07-25 10:17:10.716447] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:25.653 [2024-07-25 10:17:10.720012] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:25.653 [2024-07-25 10:17:10.729258] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:25.653 [2024-07-25 10:17:10.729682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.653 [2024-07-25 10:17:10.729713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x944540 with addr=10.0.0.2, port=4420 00:28:25.653 [2024-07-25 10:17:10.729732] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x944540 is same with the state(5) to be set 00:28:25.653 [2024-07-25 10:17:10.729970] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x944540 (9): Bad file descriptor 00:28:25.653 [2024-07-25 10:17:10.730214] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:25.653 [2024-07-25 10:17:10.730239] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:25.653 [2024-07-25 10:17:10.730255] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:25.653 [2024-07-25 10:17:10.733826] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:25.653 [2024-07-25 10:17:10.743085] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:25.653 [2024-07-25 10:17:10.743508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.653 [2024-07-25 10:17:10.743540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x944540 with addr=10.0.0.2, port=4420 00:28:25.653 [2024-07-25 10:17:10.743559] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x944540 is same with the state(5) to be set 00:28:25.653 [2024-07-25 10:17:10.743798] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x944540 (9): Bad file descriptor 00:28:25.653 [2024-07-25 10:17:10.744042] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:25.653 [2024-07-25 10:17:10.744066] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:25.653 [2024-07-25 10:17:10.744082] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:25.653 [2024-07-25 10:17:10.747653] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:25.653 [2024-07-25 10:17:10.757114] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:25.653 [2024-07-25 10:17:10.757528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:25.653 [2024-07-25 10:17:10.757560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x944540 with addr=10.0.0.2, port=4420 00:28:25.653 [2024-07-25 10:17:10.757578] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x944540 is same with the state(5) to be set 00:28:25.653 [2024-07-25 10:17:10.757817] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x944540 (9): Bad file descriptor 00:28:25.653 [2024-07-25 10:17:10.758060] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:25.653 [2024-07-25 10:17:10.758085] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:25.653 [2024-07-25 10:17:10.758101] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:25.653 [2024-07-25 10:17:10.761672] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:25.653 [2024-07-25 10:17:10.771127] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:25.653 [2024-07-25 10:17:10.771540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.653 [2024-07-25 10:17:10.771572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x944540 with addr=10.0.0.2, port=4420
00:28:25.653 [2024-07-25 10:17:10.771590] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x944540 is same with the state(5) to be set
00:28:25.653 [2024-07-25 10:17:10.771829] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x944540 (9): Bad file descriptor
00:28:25.653 [2024-07-25 10:17:10.772073] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:25.653 [2024-07-25 10:17:10.772097] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:25.653 [2024-07-25 10:17:10.772113] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:25.653 [2024-07-25 10:17:10.775682] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:25.653 [2024-07-25 10:17:10.785157] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:25.653 [2024-07-25 10:17:10.785591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.653 [2024-07-25 10:17:10.785622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x944540 with addr=10.0.0.2, port=4420
00:28:25.653 [2024-07-25 10:17:10.785641] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x944540 is same with the state(5) to be set
00:28:25.653 [2024-07-25 10:17:10.785886] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x944540 (9): Bad file descriptor
00:28:25.654 [2024-07-25 10:17:10.786130] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:25.654 [2024-07-25 10:17:10.786154] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:25.654 [2024-07-25 10:17:10.786171] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:25.654 [2024-07-25 10:17:10.789742] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:25.654 [2024-07-25 10:17:10.798998] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:25.654 [2024-07-25 10:17:10.799538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.654 [2024-07-25 10:17:10.799571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x944540 with addr=10.0.0.2, port=4420
00:28:25.654 [2024-07-25 10:17:10.799589] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x944540 is same with the state(5) to be set
00:28:25.654 [2024-07-25 10:17:10.799828] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x944540 (9): Bad file descriptor
00:28:25.654 [2024-07-25 10:17:10.800071] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:25.654 [2024-07-25 10:17:10.800095] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:25.654 [2024-07-25 10:17:10.800112] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:25.654 [2024-07-25 10:17:10.803687] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:25.654 [2024-07-25 10:17:10.812978] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:25.654 [2024-07-25 10:17:10.813445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.654 [2024-07-25 10:17:10.813489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x944540 with addr=10.0.0.2, port=4420
00:28:25.654 [2024-07-25 10:17:10.813507] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x944540 is same with the state(5) to be set
00:28:25.654 [2024-07-25 10:17:10.813746] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x944540 (9): Bad file descriptor
00:28:25.654 [2024-07-25 10:17:10.813989] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:25.654 [2024-07-25 10:17:10.814013] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:25.654 [2024-07-25 10:17:10.814030] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:25.913 [2024-07-25 10:17:10.817600] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:25.913 [2024-07-25 10:17:10.826851] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:25.913 [2024-07-25 10:17:10.827289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.913 [2024-07-25 10:17:10.827321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x944540 with addr=10.0.0.2, port=4420
00:28:25.913 [2024-07-25 10:17:10.827339] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x944540 is same with the state(5) to be set
00:28:25.913 [2024-07-25 10:17:10.827589] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x944540 (9): Bad file descriptor
00:28:25.913 [2024-07-25 10:17:10.827833] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:25.913 [2024-07-25 10:17:10.827859] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:25.913 [2024-07-25 10:17:10.827881] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:25.913 [2024-07-25 10:17:10.831452] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:25.913 [2024-07-25 10:17:10.840706] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:25.913 [2024-07-25 10:17:10.841237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.913 [2024-07-25 10:17:10.841269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x944540 with addr=10.0.0.2, port=4420
00:28:25.913 [2024-07-25 10:17:10.841287] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x944540 is same with the state(5) to be set
00:28:25.913 [2024-07-25 10:17:10.841534] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x944540 (9): Bad file descriptor
00:28:25.913 [2024-07-25 10:17:10.841781] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:25.913 [2024-07-25 10:17:10.841806] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:25.913 [2024-07-25 10:17:10.841822] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:25.913 [2024-07-25 10:17:10.845386] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:25.913 [2024-07-25 10:17:10.854645] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:25.913 [2024-07-25 10:17:10.855139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.913 [2024-07-25 10:17:10.855170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x944540 with addr=10.0.0.2, port=4420
00:28:25.914 [2024-07-25 10:17:10.855188] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x944540 is same with the state(5) to be set
00:28:25.914 [2024-07-25 10:17:10.855437] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x944540 (9): Bad file descriptor
00:28:25.914 [2024-07-25 10:17:10.855690] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:25.914 [2024-07-25 10:17:10.855727] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:25.914 [2024-07-25 10:17:10.855743] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:25.914 [2024-07-25 10:17:10.859304] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:25.914 [2024-07-25 10:17:10.868561] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:25.914 [2024-07-25 10:17:10.869034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.914 [2024-07-25 10:17:10.869066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x944540 with addr=10.0.0.2, port=4420
00:28:25.914 [2024-07-25 10:17:10.869084] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x944540 is same with the state(5) to be set
00:28:25.914 [2024-07-25 10:17:10.869323] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x944540 (9): Bad file descriptor
00:28:25.914 [2024-07-25 10:17:10.869575] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:25.914 [2024-07-25 10:17:10.869600] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:25.914 [2024-07-25 10:17:10.869617] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:25.914 [2024-07-25 10:17:10.873182] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:25.914 [2024-07-25 10:17:10.882448] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:25.914 [2024-07-25 10:17:10.882915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.914 [2024-07-25 10:17:10.882947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x944540 with addr=10.0.0.2, port=4420
00:28:25.914 [2024-07-25 10:17:10.882965] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x944540 is same with the state(5) to be set
00:28:25.914 [2024-07-25 10:17:10.883203] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x944540 (9): Bad file descriptor
00:28:25.914 [2024-07-25 10:17:10.883456] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:25.914 [2024-07-25 10:17:10.883481] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:25.914 [2024-07-25 10:17:10.883498] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:25.914 [2024-07-25 10:17:10.887060] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:25.914 [2024-07-25 10:17:10.896312] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:25.914 [2024-07-25 10:17:10.896801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.914 [2024-07-25 10:17:10.896833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x944540 with addr=10.0.0.2, port=4420
00:28:25.914 [2024-07-25 10:17:10.896851] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x944540 is same with the state(5) to be set
00:28:25.914 [2024-07-25 10:17:10.897089] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x944540 (9): Bad file descriptor
00:28:25.914 [2024-07-25 10:17:10.897333] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:25.914 [2024-07-25 10:17:10.897358] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:25.914 [2024-07-25 10:17:10.897375] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:25.914 [2024-07-25 10:17:10.900945] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:25.914 [2024-07-25 10:17:10.910191] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:25.914 [2024-07-25 10:17:10.910709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.914 [2024-07-25 10:17:10.910740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x944540 with addr=10.0.0.2, port=4420
00:28:25.914 [2024-07-25 10:17:10.910758] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x944540 is same with the state(5) to be set
00:28:25.914 [2024-07-25 10:17:10.910997] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x944540 (9): Bad file descriptor
00:28:25.914 [2024-07-25 10:17:10.911239] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:25.914 [2024-07-25 10:17:10.911263] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:25.914 [2024-07-25 10:17:10.911280] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:25.914 [2024-07-25 10:17:10.914864] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:25.914 [2024-07-25 10:17:10.924115] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:25.914 [2024-07-25 10:17:10.924590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.914 [2024-07-25 10:17:10.924623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x944540 with addr=10.0.0.2, port=4420
00:28:25.914 [2024-07-25 10:17:10.924641] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x944540 is same with the state(5) to be set
00:28:25.914 [2024-07-25 10:17:10.924880] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x944540 (9): Bad file descriptor
00:28:25.914 [2024-07-25 10:17:10.925131] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:25.914 [2024-07-25 10:17:10.925156] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:25.914 [2024-07-25 10:17:10.925172] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:25.914 [2024-07-25 10:17:10.928747] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:25.914 [2024-07-25 10:17:10.937998] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:25.914 [2024-07-25 10:17:10.938418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.914 [2024-07-25 10:17:10.938458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x944540 with addr=10.0.0.2, port=4420
00:28:25.914 [2024-07-25 10:17:10.938476] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x944540 is same with the state(5) to be set
00:28:25.914 [2024-07-25 10:17:10.938715] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x944540 (9): Bad file descriptor
00:28:25.914 [2024-07-25 10:17:10.938960] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:25.914 [2024-07-25 10:17:10.938985] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:25.914 [2024-07-25 10:17:10.939001] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:25.914 [2024-07-25 10:17:10.942568] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:25.914 [2024-07-25 10:17:10.952023] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:25.914 [2024-07-25 10:17:10.952514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.914 [2024-07-25 10:17:10.952546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x944540 with addr=10.0.0.2, port=4420
00:28:25.914 [2024-07-25 10:17:10.952564] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x944540 is same with the state(5) to be set
00:28:25.914 [2024-07-25 10:17:10.952802] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x944540 (9): Bad file descriptor
00:28:25.914 [2024-07-25 10:17:10.953045] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:25.914 [2024-07-25 10:17:10.953069] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:25.914 [2024-07-25 10:17:10.953084] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:25.914 [2024-07-25 10:17:10.956657] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:25.914 [2024-07-25 10:17:10.965933] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:25.914 [2024-07-25 10:17:10.966421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.914 [2024-07-25 10:17:10.966459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x944540 with addr=10.0.0.2, port=4420
00:28:25.914 [2024-07-25 10:17:10.966478] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x944540 is same with the state(5) to be set
00:28:25.914 [2024-07-25 10:17:10.966716] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x944540 (9): Bad file descriptor
00:28:25.914 [2024-07-25 10:17:10.966958] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:25.914 [2024-07-25 10:17:10.966982] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:25.914 [2024-07-25 10:17:10.966998] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:25.914 [2024-07-25 10:17:10.970581] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:25.914 [2024-07-25 10:17:10.979834] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:25.914 [2024-07-25 10:17:10.980322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.914 [2024-07-25 10:17:10.980364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x944540 with addr=10.0.0.2, port=4420
00:28:25.914 [2024-07-25 10:17:10.980382] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x944540 is same with the state(5) to be set
00:28:25.914 [2024-07-25 10:17:10.980630] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x944540 (9): Bad file descriptor
00:28:25.914 [2024-07-25 10:17:10.980882] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:25.914 [2024-07-25 10:17:10.980909] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:25.914 [2024-07-25 10:17:10.980925] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:25.914 [2024-07-25 10:17:10.984496] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:25.914 [2024-07-25 10:17:10.993754] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:25.914 [2024-07-25 10:17:10.994229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.914 [2024-07-25 10:17:10.994260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x944540 with addr=10.0.0.2, port=4420
00:28:25.914 [2024-07-25 10:17:10.994278] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x944540 is same with the state(5) to be set
00:28:25.915 [2024-07-25 10:17:10.994535] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x944540 (9): Bad file descriptor
00:28:25.915 [2024-07-25 10:17:10.994781] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:25.915 [2024-07-25 10:17:10.994806] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:25.915 [2024-07-25 10:17:10.994822] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:25.915 [2024-07-25 10:17:10.998386] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:25.915 [2024-07-25 10:17:11.007642] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:25.915 [2024-07-25 10:17:11.008095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.915 [2024-07-25 10:17:11.008127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x944540 with addr=10.0.0.2, port=4420
00:28:25.915 [2024-07-25 10:17:11.008144] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x944540 is same with the state(5) to be set
00:28:25.915 [2024-07-25 10:17:11.008383] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x944540 (9): Bad file descriptor
00:28:25.915 [2024-07-25 10:17:11.008636] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:25.915 [2024-07-25 10:17:11.008663] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:25.915 [2024-07-25 10:17:11.008679] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:25.915 [2024-07-25 10:17:11.012258] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:25.915 [2024-07-25 10:17:11.021653] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:25.915 [2024-07-25 10:17:11.022102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.915 [2024-07-25 10:17:11.022134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x944540 with addr=10.0.0.2, port=4420
00:28:25.915 [2024-07-25 10:17:11.022159] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x944540 is same with the state(5) to be set
00:28:25.915 [2024-07-25 10:17:11.022398] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x944540 (9): Bad file descriptor
00:28:25.915 [2024-07-25 10:17:11.022651] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:25.915 [2024-07-25 10:17:11.022676] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:25.915 [2024-07-25 10:17:11.022693] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:25.915 [2024-07-25 10:17:11.026254] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:25.915 [2024-07-25 10:17:11.035533] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:25.915 [2024-07-25 10:17:11.036071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.915 [2024-07-25 10:17:11.036102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x944540 with addr=10.0.0.2, port=4420
00:28:25.915 [2024-07-25 10:17:11.036120] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x944540 is same with the state(5) to be set
00:28:25.915 [2024-07-25 10:17:11.036358] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x944540 (9): Bad file descriptor
00:28:25.915 [2024-07-25 10:17:11.036610] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:25.915 [2024-07-25 10:17:11.036635] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:25.915 [2024-07-25 10:17:11.036652] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:25.915 [2024-07-25 10:17:11.040214] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:25.915 [2024-07-25 10:17:11.049494] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:25.915 [2024-07-25 10:17:11.049905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.915 [2024-07-25 10:17:11.049937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x944540 with addr=10.0.0.2, port=4420
00:28:25.915 [2024-07-25 10:17:11.049955] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x944540 is same with the state(5) to be set
00:28:25.915 [2024-07-25 10:17:11.050193] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x944540 (9): Bad file descriptor
00:28:25.915 [2024-07-25 10:17:11.050449] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:25.915 [2024-07-25 10:17:11.050474] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:25.915 [2024-07-25 10:17:11.050490] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:25.915 [2024-07-25 10:17:11.054049] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:25.915 [2024-07-25 10:17:11.063516] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:25.915 [2024-07-25 10:17:11.064035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.915 [2024-07-25 10:17:11.064067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x944540 with addr=10.0.0.2, port=4420
00:28:25.915 [2024-07-25 10:17:11.064085] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x944540 is same with the state(5) to be set
00:28:25.915 [2024-07-25 10:17:11.064324] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x944540 (9): Bad file descriptor
00:28:25.915 [2024-07-25 10:17:11.064579] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:25.915 [2024-07-25 10:17:11.064610] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:25.915 [2024-07-25 10:17:11.064627] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:25.915 [2024-07-25 10:17:11.068190] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:25.915 [2024-07-25 10:17:11.077455] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:25.915 [2024-07-25 10:17:11.077906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:25.915 [2024-07-25 10:17:11.077938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x944540 with addr=10.0.0.2, port=4420
00:28:25.915 [2024-07-25 10:17:11.077956] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x944540 is same with the state(5) to be set
00:28:25.915 [2024-07-25 10:17:11.078194] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x944540 (9): Bad file descriptor
00:28:25.915 [2024-07-25 10:17:11.078450] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:25.915 [2024-07-25 10:17:11.078474] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:25.915 [2024-07-25 10:17:11.078491] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:26.174 [2024-07-25 10:17:11.082062] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:26.174 [2024-07-25 10:17:11.091329] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:26.174 [2024-07-25 10:17:11.091742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:26.174 [2024-07-25 10:17:11.091773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x944540 with addr=10.0.0.2, port=4420
00:28:26.174 [2024-07-25 10:17:11.091791] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x944540 is same with the state(5) to be set
00:28:26.174 [2024-07-25 10:17:11.092030] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x944540 (9): Bad file descriptor
00:28:26.174 [2024-07-25 10:17:11.092273] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:26.174 [2024-07-25 10:17:11.092297] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:26.174 [2024-07-25 10:17:11.092314] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:26.174 [2024-07-25 10:17:11.095897] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:26.174 [2024-07-25 10:17:11.105159] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:26.174 [2024-07-25 10:17:11.105563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:26.174 [2024-07-25 10:17:11.105594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x944540 with addr=10.0.0.2, port=4420
00:28:26.174 [2024-07-25 10:17:11.105613] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x944540 is same with the state(5) to be set
00:28:26.174 [2024-07-25 10:17:11.105851] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x944540 (9): Bad file descriptor
00:28:26.174 [2024-07-25 10:17:11.106094] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:26.174 [2024-07-25 10:17:11.106119] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:26.174 [2024-07-25 10:17:11.106134] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:26.174 [2024-07-25 10:17:11.109710] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:26.174 [2024-07-25 10:17:11.119195] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:26.174 [2024-07-25 10:17:11.119591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:26.174 [2024-07-25 10:17:11.119622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x944540 with addr=10.0.0.2, port=4420
00:28:26.174 [2024-07-25 10:17:11.119641] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x944540 is same with the state(5) to be set
00:28:26.174 [2024-07-25 10:17:11.119879] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x944540 (9): Bad file descriptor
00:28:26.174 [2024-07-25 10:17:11.120122] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:26.174 [2024-07-25 10:17:11.120146] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:26.174 [2024-07-25 10:17:11.120162] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:26.174 [2024-07-25 10:17:11.123738] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:26.174 [2024-07-25 10:17:11.133207] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:26.174 [2024-07-25 10:17:11.133584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:26.174 [2024-07-25 10:17:11.133616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x944540 with addr=10.0.0.2, port=4420
00:28:26.174 [2024-07-25 10:17:11.133634] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x944540 is same with the state(5) to be set
00:28:26.174 [2024-07-25 10:17:11.133872] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x944540 (9): Bad file descriptor
00:28:26.174 [2024-07-25 10:17:11.134116] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:26.174 [2024-07-25 10:17:11.134140] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:26.174 [2024-07-25 10:17:11.134156] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:26.174 [2024-07-25 10:17:11.137726] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:26.174 [2024-07-25 10:17:11.147205] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:26.174 [2024-07-25 10:17:11.147619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:26.174 [2024-07-25 10:17:11.147651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x944540 with addr=10.0.0.2, port=4420
00:28:26.174 [2024-07-25 10:17:11.147669] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x944540 is same with the state(5) to be set
00:28:26.175 [2024-07-25 10:17:11.147907] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x944540 (9): Bad file descriptor
00:28:26.175 [2024-07-25 10:17:11.148151] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:26.175 [2024-07-25 10:17:11.148176] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:26.175 [2024-07-25 10:17:11.148192] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:26.175 [2024-07-25 10:17:11.151770] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:26.175 [2024-07-25 10:17:11.161242] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:26.175 [2024-07-25 10:17:11.161625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:26.175 [2024-07-25 10:17:11.161656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x944540 with addr=10.0.0.2, port=4420
00:28:26.175 [2024-07-25 10:17:11.161674] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x944540 is same with the state(5) to be set
00:28:26.175 [2024-07-25 10:17:11.161923] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x944540 (9): Bad file descriptor
00:28:26.175 [2024-07-25 10:17:11.162166] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:26.175 [2024-07-25 10:17:11.162190] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:26.175 [2024-07-25 10:17:11.162207] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:26.175 [2024-07-25 10:17:11.165780] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:26.175 [2024-07-25 10:17:11.175256] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:26.175 [2024-07-25 10:17:11.175757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:26.175 [2024-07-25 10:17:11.175788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x944540 with addr=10.0.0.2, port=4420
00:28:26.175 [2024-07-25 10:17:11.175806] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x944540 is same with the state(5) to be set
00:28:26.175 [2024-07-25 10:17:11.176045] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x944540 (9): Bad file descriptor
00:28:26.175 [2024-07-25 10:17:11.176289] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:26.175 [2024-07-25 10:17:11.176313] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:26.175 [2024-07-25 10:17:11.176330] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:26.175 [2024-07-25 10:17:11.179896] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:26.175 [2024-07-25 10:17:11.189159] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:26.175 [2024-07-25 10:17:11.189603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:26.175 [2024-07-25 10:17:11.189636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x944540 with addr=10.0.0.2, port=4420
00:28:26.175 [2024-07-25 10:17:11.189654] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x944540 is same with the state(5) to be set
00:28:26.175 [2024-07-25 10:17:11.189892] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x944540 (9): Bad file descriptor
00:28:26.175 [2024-07-25 10:17:11.190136] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:26.175 [2024-07-25 10:17:11.190160] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:26.175 [2024-07-25 10:17:11.190176] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:26.175 [2024-07-25 10:17:11.193746] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:26.175 [2024-07-25 10:17:11.203039] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:26.175 [2024-07-25 10:17:11.203585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:26.175 [2024-07-25 10:17:11.203618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x944540 with addr=10.0.0.2, port=4420
00:28:26.175 [2024-07-25 10:17:11.203636] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x944540 is same with the state(5) to be set
00:28:26.175 [2024-07-25 10:17:11.203875] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x944540 (9): Bad file descriptor
00:28:26.175 [2024-07-25 10:17:11.204117] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:26.175 [2024-07-25 10:17:11.204142] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:26.175 [2024-07-25 10:17:11.204167] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:26.175 [2024-07-25 10:17:11.207744] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:26.175 [2024-07-25 10:17:11.217008] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:26.175 [2024-07-25 10:17:11.217485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:26.175 [2024-07-25 10:17:11.217516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x944540 with addr=10.0.0.2, port=4420
00:28:26.175 [2024-07-25 10:17:11.217535] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x944540 is same with the state(5) to be set
00:28:26.175 [2024-07-25 10:17:11.217773] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x944540 (9): Bad file descriptor
00:28:26.175 [2024-07-25 10:17:11.218017] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:26.175 [2024-07-25 10:17:11.218042] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:26.175 [2024-07-25 10:17:11.218058] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:26.175 [2024-07-25 10:17:11.221630] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:26.175 [2024-07-25 10:17:11.230894] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:26.175 [2024-07-25 10:17:11.231351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:26.175 [2024-07-25 10:17:11.231382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x944540 with addr=10.0.0.2, port=4420
00:28:26.175 [2024-07-25 10:17:11.231400] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x944540 is same with the state(5) to be set
00:28:26.175 [2024-07-25 10:17:11.231649] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x944540 (9): Bad file descriptor
00:28:26.175 [2024-07-25 10:17:11.231894] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:26.175 [2024-07-25 10:17:11.231919] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:26.175 [2024-07-25 10:17:11.231935] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:26.175 [2024-07-25 10:17:11.235506] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:26.175 [2024-07-25 10:17:11.244761] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:26.175 [2024-07-25 10:17:11.245198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:26.175 [2024-07-25 10:17:11.245230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x944540 with addr=10.0.0.2, port=4420
00:28:26.175 [2024-07-25 10:17:11.245248] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x944540 is same with the state(5) to be set
00:28:26.175 [2024-07-25 10:17:11.245498] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x944540 (9): Bad file descriptor
00:28:26.175 [2024-07-25 10:17:11.245742] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:26.175 [2024-07-25 10:17:11.245767] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:26.175 [2024-07-25 10:17:11.245782] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:26.175 [2024-07-25 10:17:11.249341] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:26.175 [2024-07-25 10:17:11.258594] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:26.175 [2024-07-25 10:17:11.259043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:26.175 [2024-07-25 10:17:11.259074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x944540 with addr=10.0.0.2, port=4420
00:28:26.175 [2024-07-25 10:17:11.259092] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x944540 is same with the state(5) to be set
00:28:26.175 [2024-07-25 10:17:11.259330] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x944540 (9): Bad file descriptor
00:28:26.175 [2024-07-25 10:17:11.259584] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:26.175 [2024-07-25 10:17:11.259610] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:26.175 [2024-07-25 10:17:11.259626] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:26.175 [2024-07-25 10:17:11.263188] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:26.175 [2024-07-25 10:17:11.272439] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:26.175 [2024-07-25 10:17:11.272904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:26.175 [2024-07-25 10:17:11.272936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x944540 with addr=10.0.0.2, port=4420
00:28:26.175 [2024-07-25 10:17:11.272954] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x944540 is same with the state(5) to be set
00:28:26.175 [2024-07-25 10:17:11.273193] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x944540 (9): Bad file descriptor
00:28:26.175 [2024-07-25 10:17:11.273448] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:26.175 [2024-07-25 10:17:11.273473] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:26.175 [2024-07-25 10:17:11.273489] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:26.175 [2024-07-25 10:17:11.277048] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:26.175 [2024-07-25 10:17:11.286301] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:26.175 [2024-07-25 10:17:11.286745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:26.175 [2024-07-25 10:17:11.286777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x944540 with addr=10.0.0.2, port=4420
00:28:26.175 [2024-07-25 10:17:11.286795] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x944540 is same with the state(5) to be set
00:28:26.175 [2024-07-25 10:17:11.287034] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x944540 (9): Bad file descriptor
00:28:26.176 [2024-07-25 10:17:11.287277] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:26.176 [2024-07-25 10:17:11.287301] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:26.176 [2024-07-25 10:17:11.287317] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:26.176 [2024-07-25 10:17:11.290888] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:26.176 [2024-07-25 10:17:11.300139] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:26.176 [2024-07-25 10:17:11.300548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:26.176 [2024-07-25 10:17:11.300580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x944540 with addr=10.0.0.2, port=4420
00:28:26.176 [2024-07-25 10:17:11.300598] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x944540 is same with the state(5) to be set
00:28:26.176 [2024-07-25 10:17:11.300837] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x944540 (9): Bad file descriptor
00:28:26.176 [2024-07-25 10:17:11.301087] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:26.176 [2024-07-25 10:17:11.301111] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:26.176 [2024-07-25 10:17:11.301127] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:26.176 [2024-07-25 10:17:11.304717] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:26.176 [2024-07-25 10:17:11.313989] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:26.176 [2024-07-25 10:17:11.314525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:26.176 [2024-07-25 10:17:11.314582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x944540 with addr=10.0.0.2, port=4420
00:28:26.176 [2024-07-25 10:17:11.314600] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x944540 is same with the state(5) to be set
00:28:26.176 [2024-07-25 10:17:11.314838] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x944540 (9): Bad file descriptor
00:28:26.176 [2024-07-25 10:17:11.315081] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:26.176 [2024-07-25 10:17:11.315106] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:26.176 [2024-07-25 10:17:11.315121] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:26.176 [2024-07-25 10:17:11.318702] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:26.176 [2024-07-25 10:17:11.327963] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:26.176 [2024-07-25 10:17:11.328527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:26.176 [2024-07-25 10:17:11.328559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x944540 with addr=10.0.0.2, port=4420
00:28:26.176 [2024-07-25 10:17:11.328578] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x944540 is same with the state(5) to be set
00:28:26.176 [2024-07-25 10:17:11.328817] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x944540 (9): Bad file descriptor
00:28:26.176 [2024-07-25 10:17:11.329061] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:26.176 [2024-07-25 10:17:11.329086] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:26.176 [2024-07-25 10:17:11.329102] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:26.176 [2024-07-25 10:17:11.332680] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:26.435 [2024-07-25 10:17:11.341934] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:26.435 [2024-07-25 10:17:11.342475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:26.435 [2024-07-25 10:17:11.342507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x944540 with addr=10.0.0.2, port=4420
00:28:26.435 [2024-07-25 10:17:11.342525] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x944540 is same with the state(5) to be set
00:28:26.435 [2024-07-25 10:17:11.342765] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x944540 (9): Bad file descriptor
00:28:26.435 [2024-07-25 10:17:11.343007] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:26.435 [2024-07-25 10:17:11.343032] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:26.435 [2024-07-25 10:17:11.343048] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:26.435 [2024-07-25 10:17:11.346630] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:26.435 [2024-07-25 10:17:11.355887] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:26.435 [2024-07-25 10:17:11.356321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:26.435 [2024-07-25 10:17:11.356353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x944540 with addr=10.0.0.2, port=4420
00:28:26.435 [2024-07-25 10:17:11.356371] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x944540 is same with the state(5) to be set
00:28:26.435 [2024-07-25 10:17:11.356629] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x944540 (9): Bad file descriptor
00:28:26.435 [2024-07-25 10:17:11.356872] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:26.435 [2024-07-25 10:17:11.356896] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:26.435 [2024-07-25 10:17:11.356912] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:26.435 [2024-07-25 10:17:11.360479] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:26.435 [2024-07-25 10:17:11.369734] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:26.435 [2024-07-25 10:17:11.370209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:26.435 [2024-07-25 10:17:11.370240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x944540 with addr=10.0.0.2, port=4420
00:28:26.435 [2024-07-25 10:17:11.370259] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x944540 is same with the state(5) to be set
00:28:26.435 [2024-07-25 10:17:11.370510] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x944540 (9): Bad file descriptor
00:28:26.435 [2024-07-25 10:17:11.370756] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:26.435 [2024-07-25 10:17:11.370780] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:26.435 [2024-07-25 10:17:11.370796] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:26.435 [2024-07-25 10:17:11.374366] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:28:26.435 [2024-07-25 10:17:11.383649] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:26.435 [2024-07-25 10:17:11.384198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.435 [2024-07-25 10:17:11.384257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x944540 with addr=10.0.0.2, port=4420 00:28:26.435 [2024-07-25 10:17:11.384276] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x944540 is same with the state(5) to be set 00:28:26.435 [2024-07-25 10:17:11.384531] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x944540 (9): Bad file descriptor 00:28:26.435 [2024-07-25 10:17:11.384774] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:26.435 [2024-07-25 10:17:11.384800] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:26.435 [2024-07-25 10:17:11.384817] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:26.435 [2024-07-25 10:17:11.388379] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:26.435 [2024-07-25 10:17:11.397640] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:26.435 [2024-07-25 10:17:11.398124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.435 [2024-07-25 10:17:11.398155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x944540 with addr=10.0.0.2, port=4420 00:28:26.435 [2024-07-25 10:17:11.398179] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x944540 is same with the state(5) to be set 00:28:26.435 [2024-07-25 10:17:11.398418] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x944540 (9): Bad file descriptor 00:28:26.435 [2024-07-25 10:17:11.398676] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:26.435 [2024-07-25 10:17:11.398701] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:26.435 [2024-07-25 10:17:11.398717] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:26.435 [2024-07-25 10:17:11.402280] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:26.435 [2024-07-25 10:17:11.411538] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:26.435 [2024-07-25 10:17:11.412055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.435 [2024-07-25 10:17:11.412087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x944540 with addr=10.0.0.2, port=4420 00:28:26.435 [2024-07-25 10:17:11.412105] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x944540 is same with the state(5) to be set 00:28:26.435 [2024-07-25 10:17:11.412343] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x944540 (9): Bad file descriptor 00:28:26.435 [2024-07-25 10:17:11.412596] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:26.435 [2024-07-25 10:17:11.412622] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:26.435 [2024-07-25 10:17:11.412639] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:26.435 [2024-07-25 10:17:11.416216] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:26.435 [2024-07-25 10:17:11.425481] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:26.435 [2024-07-25 10:17:11.425932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.435 [2024-07-25 10:17:11.425974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x944540 with addr=10.0.0.2, port=4420 00:28:26.435 [2024-07-25 10:17:11.425992] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x944540 is same with the state(5) to be set 00:28:26.435 [2024-07-25 10:17:11.426230] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x944540 (9): Bad file descriptor 00:28:26.435 [2024-07-25 10:17:11.426485] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:26.435 [2024-07-25 10:17:11.426510] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:26.435 [2024-07-25 10:17:11.426527] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:26.435 [2024-07-25 10:17:11.430091] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:26.435 [2024-07-25 10:17:11.439356] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:26.435 [2024-07-25 10:17:11.439939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.435 [2024-07-25 10:17:11.439989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x944540 with addr=10.0.0.2, port=4420 00:28:26.435 [2024-07-25 10:17:11.440008] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x944540 is same with the state(5) to be set 00:28:26.435 [2024-07-25 10:17:11.440257] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x944540 (9): Bad file descriptor 00:28:26.435 [2024-07-25 10:17:11.440511] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:26.435 [2024-07-25 10:17:11.440542] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:26.435 [2024-07-25 10:17:11.440559] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:26.435 [2024-07-25 10:17:11.444126] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:26.435 [2024-07-25 10:17:11.453377] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:26.435 [2024-07-25 10:17:11.453907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.435 [2024-07-25 10:17:11.453940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x944540 with addr=10.0.0.2, port=4420 00:28:26.435 [2024-07-25 10:17:11.453959] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x944540 is same with the state(5) to be set 00:28:26.435 [2024-07-25 10:17:11.454198] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x944540 (9): Bad file descriptor 00:28:26.435 [2024-07-25 10:17:11.454453] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:26.435 [2024-07-25 10:17:11.454479] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:26.435 [2024-07-25 10:17:11.454496] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:26.435 [2024-07-25 10:17:11.458054] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:26.436 [2024-07-25 10:17:11.467325] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:26.436 [2024-07-25 10:17:11.467834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.436 [2024-07-25 10:17:11.467866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x944540 with addr=10.0.0.2, port=4420 00:28:26.436 [2024-07-25 10:17:11.467884] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x944540 is same with the state(5) to be set 00:28:26.436 [2024-07-25 10:17:11.468122] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x944540 (9): Bad file descriptor 00:28:26.436 [2024-07-25 10:17:11.468365] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:26.436 [2024-07-25 10:17:11.468389] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:26.436 [2024-07-25 10:17:11.468405] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:26.436 [2024-07-25 10:17:11.471982] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:26.436 [2024-07-25 10:17:11.481242] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:26.436 [2024-07-25 10:17:11.481782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.436 [2024-07-25 10:17:11.481835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x944540 with addr=10.0.0.2, port=4420 00:28:26.436 [2024-07-25 10:17:11.481853] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x944540 is same with the state(5) to be set 00:28:26.436 [2024-07-25 10:17:11.482092] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x944540 (9): Bad file descriptor 00:28:26.436 [2024-07-25 10:17:11.482334] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:26.436 [2024-07-25 10:17:11.482359] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:26.436 [2024-07-25 10:17:11.482375] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:26.436 [2024-07-25 10:17:11.485951] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:26.436 [2024-07-25 10:17:11.495216] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:26.436 [2024-07-25 10:17:11.495750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.436 [2024-07-25 10:17:11.495783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x944540 with addr=10.0.0.2, port=4420 00:28:26.436 [2024-07-25 10:17:11.495802] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x944540 is same with the state(5) to be set 00:28:26.436 [2024-07-25 10:17:11.496041] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x944540 (9): Bad file descriptor 00:28:26.436 [2024-07-25 10:17:11.496283] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:26.436 [2024-07-25 10:17:11.496312] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:26.436 [2024-07-25 10:17:11.496328] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:26.436 [2024-07-25 10:17:11.499902] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:26.436 [2024-07-25 10:17:11.509162] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:26.436 [2024-07-25 10:17:11.509633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.436 [2024-07-25 10:17:11.509664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x944540 with addr=10.0.0.2, port=4420 00:28:26.436 [2024-07-25 10:17:11.509682] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x944540 is same with the state(5) to be set 00:28:26.436 [2024-07-25 10:17:11.509921] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x944540 (9): Bad file descriptor 00:28:26.436 [2024-07-25 10:17:11.510165] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:26.436 [2024-07-25 10:17:11.510190] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:26.436 [2024-07-25 10:17:11.510206] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:26.436 [2024-07-25 10:17:11.513783] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:26.436 [2024-07-25 10:17:11.523056] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:26.436 [2024-07-25 10:17:11.523668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.436 [2024-07-25 10:17:11.523715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x944540 with addr=10.0.0.2, port=4420 00:28:26.436 [2024-07-25 10:17:11.523736] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x944540 is same with the state(5) to be set 00:28:26.436 [2024-07-25 10:17:11.523992] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x944540 (9): Bad file descriptor 00:28:26.436 [2024-07-25 10:17:11.524235] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:26.436 [2024-07-25 10:17:11.524260] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:26.436 [2024-07-25 10:17:11.524277] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:26.436 [2024-07-25 10:17:11.527858] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:26.436 [2024-07-25 10:17:11.536910] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:26.436 [2024-07-25 10:17:11.537447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.436 [2024-07-25 10:17:11.537482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x944540 with addr=10.0.0.2, port=4420 00:28:26.436 [2024-07-25 10:17:11.537501] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x944540 is same with the state(5) to be set 00:28:26.436 [2024-07-25 10:17:11.537749] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x944540 (9): Bad file descriptor 00:28:26.436 [2024-07-25 10:17:11.537992] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:26.436 [2024-07-25 10:17:11.538016] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:26.436 [2024-07-25 10:17:11.538032] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:26.436 [2024-07-25 10:17:11.541601] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:26.436 [2024-07-25 10:17:11.550856] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:26.436 [2024-07-25 10:17:11.551344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.436 [2024-07-25 10:17:11.551378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x944540 with addr=10.0.0.2, port=4420 00:28:26.436 [2024-07-25 10:17:11.551397] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x944540 is same with the state(5) to be set 00:28:26.436 [2024-07-25 10:17:11.551650] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x944540 (9): Bad file descriptor 00:28:26.436 [2024-07-25 10:17:11.551894] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:26.436 [2024-07-25 10:17:11.551919] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:26.436 [2024-07-25 10:17:11.551936] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:26.436 [2024-07-25 10:17:11.555505] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:26.436 [2024-07-25 10:17:11.564757] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:26.436 [2024-07-25 10:17:11.565371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.436 [2024-07-25 10:17:11.565423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x944540 with addr=10.0.0.2, port=4420 00:28:26.436 [2024-07-25 10:17:11.565459] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x944540 is same with the state(5) to be set 00:28:26.436 [2024-07-25 10:17:11.565711] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x944540 (9): Bad file descriptor 00:28:26.436 [2024-07-25 10:17:11.565957] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:26.436 [2024-07-25 10:17:11.565982] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:26.436 [2024-07-25 10:17:11.565998] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:26.436 [2024-07-25 10:17:11.569579] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:26.436 [2024-07-25 10:17:11.578630] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:26.436 [2024-07-25 10:17:11.579310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.436 [2024-07-25 10:17:11.579357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x944540 with addr=10.0.0.2, port=4420 00:28:26.436 [2024-07-25 10:17:11.579377] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x944540 is same with the state(5) to be set 00:28:26.436 [2024-07-25 10:17:11.579638] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x944540 (9): Bad file descriptor 00:28:26.436 [2024-07-25 10:17:11.579884] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:26.436 [2024-07-25 10:17:11.579909] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:26.436 [2024-07-25 10:17:11.579934] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:26.436 [2024-07-25 10:17:11.583529] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:26.436 [2024-07-25 10:17:11.592598] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:26.436 [2024-07-25 10:17:11.593115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.436 [2024-07-25 10:17:11.593150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x944540 with addr=10.0.0.2, port=4420 00:28:26.436 [2024-07-25 10:17:11.593168] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x944540 is same with the state(5) to be set 00:28:26.436 [2024-07-25 10:17:11.593407] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x944540 (9): Bad file descriptor 00:28:26.436 [2024-07-25 10:17:11.593673] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:26.436 [2024-07-25 10:17:11.593699] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:26.436 [2024-07-25 10:17:11.593715] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:26.436 [2024-07-25 10:17:11.597290] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:26.695 [2024-07-25 10:17:11.606555] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:26.695 [2024-07-25 10:17:11.607050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.695 [2024-07-25 10:17:11.607083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x944540 with addr=10.0.0.2, port=4420 00:28:26.695 [2024-07-25 10:17:11.607101] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x944540 is same with the state(5) to be set 00:28:26.695 [2024-07-25 10:17:11.607340] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x944540 (9): Bad file descriptor 00:28:26.695 [2024-07-25 10:17:11.607596] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:26.695 [2024-07-25 10:17:11.607621] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:26.695 [2024-07-25 10:17:11.607637] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:26.695 [2024-07-25 10:17:11.611202] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:26.695 [2024-07-25 10:17:11.620476] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:26.695 [2024-07-25 10:17:11.620956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.695 [2024-07-25 10:17:11.620988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x944540 with addr=10.0.0.2, port=4420 00:28:26.695 [2024-07-25 10:17:11.621006] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x944540 is same with the state(5) to be set 00:28:26.695 [2024-07-25 10:17:11.621244] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x944540 (9): Bad file descriptor 00:28:26.695 [2024-07-25 10:17:11.621502] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:26.695 [2024-07-25 10:17:11.621528] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:26.695 [2024-07-25 10:17:11.621544] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:26.695 [2024-07-25 10:17:11.625109] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:26.695 [2024-07-25 10:17:11.634361] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:26.695 [2024-07-25 10:17:11.634832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.695 [2024-07-25 10:17:11.634863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x944540 with addr=10.0.0.2, port=4420 00:28:26.695 [2024-07-25 10:17:11.634882] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x944540 is same with the state(5) to be set 00:28:26.695 [2024-07-25 10:17:11.635120] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x944540 (9): Bad file descriptor 00:28:26.695 [2024-07-25 10:17:11.635364] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:26.695 [2024-07-25 10:17:11.635389] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:26.695 [2024-07-25 10:17:11.635405] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:26.695 [2024-07-25 10:17:11.638980] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:26.695 [2024-07-25 10:17:11.648240] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:26.695 [2024-07-25 10:17:11.648737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.695 [2024-07-25 10:17:11.648787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x944540 with addr=10.0.0.2, port=4420 00:28:26.695 [2024-07-25 10:17:11.648805] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x944540 is same with the state(5) to be set 00:28:26.695 [2024-07-25 10:17:11.649045] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x944540 (9): Bad file descriptor 00:28:26.695 [2024-07-25 10:17:11.649289] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:26.695 [2024-07-25 10:17:11.649314] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:26.695 [2024-07-25 10:17:11.649330] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:26.695 [2024-07-25 10:17:11.652904] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:26.696 [2024-07-25 10:17:11.662158] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:26.696 [2024-07-25 10:17:11.662656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.696 [2024-07-25 10:17:11.662688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x944540 with addr=10.0.0.2, port=4420 00:28:26.696 [2024-07-25 10:17:11.662706] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x944540 is same with the state(5) to be set 00:28:26.696 [2024-07-25 10:17:11.662944] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x944540 (9): Bad file descriptor 00:28:26.696 [2024-07-25 10:17:11.663188] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:26.696 [2024-07-25 10:17:11.663213] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:26.696 [2024-07-25 10:17:11.663229] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:26.696 [2024-07-25 10:17:11.666800] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:26.696 [2024-07-25 10:17:11.676054] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:26.696 [2024-07-25 10:17:11.676645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.696 [2024-07-25 10:17:11.676692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x944540 with addr=10.0.0.2, port=4420 00:28:26.696 [2024-07-25 10:17:11.676714] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x944540 is same with the state(5) to be set 00:28:26.696 [2024-07-25 10:17:11.676959] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x944540 (9): Bad file descriptor 00:28:26.696 [2024-07-25 10:17:11.677209] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:26.696 [2024-07-25 10:17:11.677235] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:26.696 [2024-07-25 10:17:11.677252] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:26.696 [2024-07-25 10:17:11.680827] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:26.696 [2024-07-25 10:17:11.690084] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:26.696 [2024-07-25 10:17:11.690591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.696 [2024-07-25 10:17:11.690626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x944540 with addr=10.0.0.2, port=4420 00:28:26.696 [2024-07-25 10:17:11.690645] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x944540 is same with the state(5) to be set 00:28:26.696 [2024-07-25 10:17:11.690895] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x944540 (9): Bad file descriptor 00:28:26.696 [2024-07-25 10:17:11.691137] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:26.696 [2024-07-25 10:17:11.691162] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:26.696 [2024-07-25 10:17:11.691178] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:26.696 [2024-07-25 10:17:11.694761] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:26.696 [2024-07-25 10:17:11.704024] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:26.696 [2024-07-25 10:17:11.704535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.696 [2024-07-25 10:17:11.704578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x944540 with addr=10.0.0.2, port=4420 00:28:26.696 [2024-07-25 10:17:11.704597] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x944540 is same with the state(5) to be set 00:28:26.696 [2024-07-25 10:17:11.704837] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x944540 (9): Bad file descriptor 00:28:26.696 [2024-07-25 10:17:11.705081] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:26.696 [2024-07-25 10:17:11.705106] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:26.696 [2024-07-25 10:17:11.705123] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:26.696 [2024-07-25 10:17:11.708701] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:26.696 [2024-07-25 10:17:11.717998] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:26.696 [2024-07-25 10:17:11.718511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.696 [2024-07-25 10:17:11.718543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x944540 with addr=10.0.0.2, port=4420 00:28:26.696 [2024-07-25 10:17:11.718562] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x944540 is same with the state(5) to be set 00:28:26.696 [2024-07-25 10:17:11.718810] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x944540 (9): Bad file descriptor 00:28:26.696 [2024-07-25 10:17:11.719053] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:26.696 [2024-07-25 10:17:11.719078] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:26.696 [2024-07-25 10:17:11.719095] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:26.696 [2024-07-25 10:17:11.722672] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:26.696 [2024-07-25 10:17:11.731918] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:26.696 [2024-07-25 10:17:11.732453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.696 [2024-07-25 10:17:11.732497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x944540 with addr=10.0.0.2, port=4420 00:28:26.696 [2024-07-25 10:17:11.732515] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x944540 is same with the state(5) to be set 00:28:26.696 [2024-07-25 10:17:11.732765] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x944540 (9): Bad file descriptor 00:28:26.696 [2024-07-25 10:17:11.733008] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:26.696 [2024-07-25 10:17:11.733033] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:26.696 [2024-07-25 10:17:11.733049] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:26.696 [2024-07-25 10:17:11.736619] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:26.696 [2024-07-25 10:17:11.745864] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:26.696 [2024-07-25 10:17:11.746324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.696 [2024-07-25 10:17:11.746376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x944540 with addr=10.0.0.2, port=4420 00:28:26.696 [2024-07-25 10:17:11.746394] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x944540 is same with the state(5) to be set 00:28:26.696 [2024-07-25 10:17:11.746648] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x944540 (9): Bad file descriptor 00:28:26.696 [2024-07-25 10:17:11.746891] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:26.696 [2024-07-25 10:17:11.746915] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:26.696 [2024-07-25 10:17:11.746931] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:26.696 [2024-07-25 10:17:11.750500] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:26.696 [2024-07-25 10:17:11.759750] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:26.696 [2024-07-25 10:17:11.760255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.696 [2024-07-25 10:17:11.760307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x944540 with addr=10.0.0.2, port=4420 00:28:26.696 [2024-07-25 10:17:11.760325] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x944540 is same with the state(5) to be set 00:28:26.696 [2024-07-25 10:17:11.760572] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x944540 (9): Bad file descriptor 00:28:26.696 [2024-07-25 10:17:11.760816] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:26.696 [2024-07-25 10:17:11.760841] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:26.696 [2024-07-25 10:17:11.760857] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:26.696 [2024-07-25 10:17:11.764418] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:26.696 [2024-07-25 10:17:11.773684] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:26.696 [2024-07-25 10:17:11.774304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.696 [2024-07-25 10:17:11.774349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x944540 with addr=10.0.0.2, port=4420 00:28:26.696 [2024-07-25 10:17:11.774376] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x944540 is same with the state(5) to be set 00:28:26.696 [2024-07-25 10:17:11.774646] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x944540 (9): Bad file descriptor 00:28:26.696 [2024-07-25 10:17:11.774891] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:26.696 [2024-07-25 10:17:11.774916] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:26.696 [2024-07-25 10:17:11.774933] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:26.696 [2024-07-25 10:17:11.778508] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:26.696 [2024-07-25 10:17:11.787563] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:26.696 [2024-07-25 10:17:11.788102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.696 [2024-07-25 10:17:11.788137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x944540 with addr=10.0.0.2, port=4420 00:28:26.696 [2024-07-25 10:17:11.788155] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x944540 is same with the state(5) to be set 00:28:26.696 [2024-07-25 10:17:11.788395] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x944540 (9): Bad file descriptor 00:28:26.696 [2024-07-25 10:17:11.788653] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:26.696 [2024-07-25 10:17:11.788679] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:26.696 [2024-07-25 10:17:11.788696] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:26.696 [2024-07-25 10:17:11.792261] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:26.696 [2024-07-25 10:17:11.801534] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:26.696 [2024-07-25 10:17:11.802135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.697 [2024-07-25 10:17:11.802181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x944540 with addr=10.0.0.2, port=4420 00:28:26.697 [2024-07-25 10:17:11.802201] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x944540 is same with the state(5) to be set 00:28:26.697 [2024-07-25 10:17:11.802462] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x944540 (9): Bad file descriptor 00:28:26.697 [2024-07-25 10:17:11.802709] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:26.697 [2024-07-25 10:17:11.802734] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:26.697 [2024-07-25 10:17:11.802750] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:26.697 [2024-07-25 10:17:11.806317] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:26.697 [2024-07-25 10:17:11.815379] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:26.697 [2024-07-25 10:17:11.815893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.697 [2024-07-25 10:17:11.815927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x944540 with addr=10.0.0.2, port=4420 00:28:26.697 [2024-07-25 10:17:11.815946] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x944540 is same with the state(5) to be set 00:28:26.697 [2024-07-25 10:17:11.816185] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x944540 (9): Bad file descriptor 00:28:26.697 [2024-07-25 10:17:11.816439] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:26.697 [2024-07-25 10:17:11.816471] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:26.697 [2024-07-25 10:17:11.816489] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:26.697 [2024-07-25 10:17:11.820054] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:26.697 [2024-07-25 10:17:11.829309] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:26.697 [2024-07-25 10:17:11.829918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.697 [2024-07-25 10:17:11.829970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x944540 with addr=10.0.0.2, port=4420 00:28:26.697 [2024-07-25 10:17:11.829989] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x944540 is same with the state(5) to be set 00:28:26.697 [2024-07-25 10:17:11.830227] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x944540 (9): Bad file descriptor 00:28:26.697 [2024-07-25 10:17:11.830483] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:26.697 [2024-07-25 10:17:11.830509] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:26.697 [2024-07-25 10:17:11.830525] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:26.697 [2024-07-25 10:17:11.834092] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:26.697 [2024-07-25 10:17:11.843136] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:26.697 [2024-07-25 10:17:11.843834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.697 [2024-07-25 10:17:11.843881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x944540 with addr=10.0.0.2, port=4420 00:28:26.697 [2024-07-25 10:17:11.843902] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x944540 is same with the state(5) to be set 00:28:26.697 [2024-07-25 10:17:11.844148] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x944540 (9): Bad file descriptor 00:28:26.697 [2024-07-25 10:17:11.844392] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:26.697 [2024-07-25 10:17:11.844417] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:26.697 [2024-07-25 10:17:11.844447] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:26.697 [2024-07-25 10:17:11.848022] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:26.697 [2024-07-25 10:17:11.857071] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:26.697 [2024-07-25 10:17:11.857580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.697 [2024-07-25 10:17:11.857614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x944540 with addr=10.0.0.2, port=4420 00:28:26.697 [2024-07-25 10:17:11.857633] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x944540 is same with the state(5) to be set 00:28:26.697 [2024-07-25 10:17:11.857873] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x944540 (9): Bad file descriptor 00:28:26.697 [2024-07-25 10:17:11.858116] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:26.697 [2024-07-25 10:17:11.858141] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:26.697 [2024-07-25 10:17:11.858157] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:26.956 [2024-07-25 10:17:11.861738] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:26.956 [2024-07-25 10:17:11.871008] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:26.956 [2024-07-25 10:17:11.871530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.956 [2024-07-25 10:17:11.871565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x944540 with addr=10.0.0.2, port=4420 00:28:26.956 [2024-07-25 10:17:11.871584] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x944540 is same with the state(5) to be set 00:28:26.956 [2024-07-25 10:17:11.871823] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x944540 (9): Bad file descriptor 00:28:26.956 [2024-07-25 10:17:11.872068] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:26.956 [2024-07-25 10:17:11.872093] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:26.956 [2024-07-25 10:17:11.872111] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:26.956 [2024-07-25 10:17:11.875688] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:26.956 [2024-07-25 10:17:11.884953] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:26.956 [2024-07-25 10:17:11.885455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.956 [2024-07-25 10:17:11.885487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x944540 with addr=10.0.0.2, port=4420 00:28:26.956 [2024-07-25 10:17:11.885505] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x944540 is same with the state(5) to be set 00:28:26.956 [2024-07-25 10:17:11.885744] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x944540 (9): Bad file descriptor 00:28:26.956 [2024-07-25 10:17:11.885987] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:26.956 [2024-07-25 10:17:11.886012] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:26.956 [2024-07-25 10:17:11.886029] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:26.956 [2024-07-25 10:17:11.889607] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:26.956 [2024-07-25 10:17:11.898868] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:26.956 [2024-07-25 10:17:11.899484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.956 [2024-07-25 10:17:11.899529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x944540 with addr=10.0.0.2, port=4420 00:28:26.956 [2024-07-25 10:17:11.899551] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x944540 is same with the state(5) to be set 00:28:26.956 [2024-07-25 10:17:11.899796] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x944540 (9): Bad file descriptor 00:28:26.956 [2024-07-25 10:17:11.900041] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:26.956 [2024-07-25 10:17:11.900066] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:26.956 [2024-07-25 10:17:11.900082] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:26.956 [2024-07-25 10:17:11.903663] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:26.956 [2024-07-25 10:17:11.912713] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:26.956 [2024-07-25 10:17:11.913248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.956 [2024-07-25 10:17:11.913283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x944540 with addr=10.0.0.2, port=4420 00:28:26.956 [2024-07-25 10:17:11.913302] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x944540 is same with the state(5) to be set 00:28:26.956 [2024-07-25 10:17:11.913565] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x944540 (9): Bad file descriptor 00:28:26.956 [2024-07-25 10:17:11.913809] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:26.956 [2024-07-25 10:17:11.913834] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:26.956 [2024-07-25 10:17:11.913851] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:26.956 [2024-07-25 10:17:11.917435] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:26.956 [2024-07-25 10:17:11.926699] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:26.956 [2024-07-25 10:17:11.927230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.956 [2024-07-25 10:17:11.927264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x944540 with addr=10.0.0.2, port=4420 00:28:26.956 [2024-07-25 10:17:11.927283] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x944540 is same with the state(5) to be set 00:28:26.956 [2024-07-25 10:17:11.927535] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x944540 (9): Bad file descriptor 00:28:26.956 [2024-07-25 10:17:11.927778] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:26.956 [2024-07-25 10:17:11.927804] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:26.956 [2024-07-25 10:17:11.927819] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:26.956 [2024-07-25 10:17:11.931385] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:26.956 [2024-07-25 10:17:11.940653] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:26.956 [2024-07-25 10:17:11.941103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.956 [2024-07-25 10:17:11.941135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x944540 with addr=10.0.0.2, port=4420 00:28:26.956 [2024-07-25 10:17:11.941153] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x944540 is same with the state(5) to be set 00:28:26.956 [2024-07-25 10:17:11.941392] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x944540 (9): Bad file descriptor 00:28:26.956 [2024-07-25 10:17:11.941647] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:26.956 [2024-07-25 10:17:11.941673] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:26.956 [2024-07-25 10:17:11.941689] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:26.956 [2024-07-25 10:17:11.945257] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:26.956 [2024-07-25 10:17:11.954541] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:26.956 [2024-07-25 10:17:11.954959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.956 [2024-07-25 10:17:11.954990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x944540 with addr=10.0.0.2, port=4420 00:28:26.956 [2024-07-25 10:17:11.955008] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x944540 is same with the state(5) to be set 00:28:26.956 [2024-07-25 10:17:11.955247] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x944540 (9): Bad file descriptor 00:28:26.956 [2024-07-25 10:17:11.955502] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:26.956 [2024-07-25 10:17:11.955528] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:26.956 [2024-07-25 10:17:11.955551] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:26.956 [2024-07-25 10:17:11.959117] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:26.956 [2024-07-25 10:17:11.968372] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:26.956 [2024-07-25 10:17:11.968796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.956 [2024-07-25 10:17:11.968828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x944540 with addr=10.0.0.2, port=4420 00:28:26.956 [2024-07-25 10:17:11.968847] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x944540 is same with the state(5) to be set 00:28:26.956 [2024-07-25 10:17:11.969086] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x944540 (9): Bad file descriptor 00:28:26.956 [2024-07-25 10:17:11.969330] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:26.956 [2024-07-25 10:17:11.969354] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:26.956 [2024-07-25 10:17:11.969370] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:26.956 [2024-07-25 10:17:11.972938] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:26.956 [2024-07-25 10:17:11.982402] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:26.956 [2024-07-25 10:17:11.982847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.956 [2024-07-25 10:17:11.982879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x944540 with addr=10.0.0.2, port=4420 00:28:26.956 [2024-07-25 10:17:11.982898] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x944540 is same with the state(5) to be set 00:28:26.957 [2024-07-25 10:17:11.983137] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x944540 (9): Bad file descriptor 00:28:26.957 [2024-07-25 10:17:11.983381] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:26.957 [2024-07-25 10:17:11.983405] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:26.957 [2024-07-25 10:17:11.983421] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:26.957 [2024-07-25 10:17:11.986994] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:26.957 [2024-07-25 10:17:11.996246] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:26.957 [2024-07-25 10:17:11.996728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.957 [2024-07-25 10:17:11.996762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x944540 with addr=10.0.0.2, port=4420 00:28:26.957 [2024-07-25 10:17:11.996780] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x944540 is same with the state(5) to be set 00:28:26.957 [2024-07-25 10:17:11.997020] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x944540 (9): Bad file descriptor 00:28:26.957 [2024-07-25 10:17:11.997264] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:26.957 [2024-07-25 10:17:11.997288] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:26.957 [2024-07-25 10:17:11.997304] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:26.957 [2024-07-25 10:17:12.000880] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:26.957 [2024-07-25 10:17:12.010125] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:26.957 [2024-07-25 10:17:12.010565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.957 [2024-07-25 10:17:12.010597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x944540 with addr=10.0.0.2, port=4420 00:28:26.957 [2024-07-25 10:17:12.010615] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x944540 is same with the state(5) to be set 00:28:26.957 [2024-07-25 10:17:12.010853] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x944540 (9): Bad file descriptor 00:28:26.957 [2024-07-25 10:17:12.011097] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:26.957 [2024-07-25 10:17:12.011121] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:26.957 [2024-07-25 10:17:12.011138] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:26.957 [2024-07-25 10:17:12.014717] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:26.957 [2024-07-25 10:17:12.023996] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:26.957 [2024-07-25 10:17:12.024492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.957 [2024-07-25 10:17:12.024525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x944540 with addr=10.0.0.2, port=4420 00:28:26.957 [2024-07-25 10:17:12.024543] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x944540 is same with the state(5) to be set 00:28:26.957 [2024-07-25 10:17:12.024782] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x944540 (9): Bad file descriptor 00:28:26.957 [2024-07-25 10:17:12.025025] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:26.957 [2024-07-25 10:17:12.025050] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:26.957 [2024-07-25 10:17:12.025066] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:26.957 [2024-07-25 10:17:12.028635] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:26.957 [2024-07-25 10:17:12.038018] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:26.957 [2024-07-25 10:17:12.038441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.957 [2024-07-25 10:17:12.038473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x944540 with addr=10.0.0.2, port=4420 00:28:26.957 [2024-07-25 10:17:12.038492] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x944540 is same with the state(5) to be set 00:28:26.957 [2024-07-25 10:17:12.038732] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x944540 (9): Bad file descriptor 00:28:26.957 [2024-07-25 10:17:12.038975] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:26.957 [2024-07-25 10:17:12.039000] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:26.957 [2024-07-25 10:17:12.039016] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:26.957 [2024-07-25 10:17:12.042589] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:26.957 [2024-07-25 10:17:12.052042] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:26.957 [2024-07-25 10:17:12.052457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.957 [2024-07-25 10:17:12.052490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x944540 with addr=10.0.0.2, port=4420 00:28:26.957 [2024-07-25 10:17:12.052509] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x944540 is same with the state(5) to be set 00:28:26.957 [2024-07-25 10:17:12.052747] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x944540 (9): Bad file descriptor 00:28:26.957 [2024-07-25 10:17:12.052997] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:26.957 [2024-07-25 10:17:12.053022] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:26.957 [2024-07-25 10:17:12.053038] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:26.957 [2024-07-25 10:17:12.056611] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:26.957 [2024-07-25 10:17:12.066061] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:26.957 [2024-07-25 10:17:12.066475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.957 [2024-07-25 10:17:12.066507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x944540 with addr=10.0.0.2, port=4420 00:28:26.957 [2024-07-25 10:17:12.066526] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x944540 is same with the state(5) to be set 00:28:26.957 [2024-07-25 10:17:12.066765] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x944540 (9): Bad file descriptor 00:28:26.957 [2024-07-25 10:17:12.067009] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:26.957 [2024-07-25 10:17:12.067034] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:26.957 [2024-07-25 10:17:12.067051] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:26.957 [2024-07-25 10:17:12.070622] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:26.957 [2024-07-25 10:17:12.080074] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:26.957 [2024-07-25 10:17:12.080492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.957 [2024-07-25 10:17:12.080524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x944540 with addr=10.0.0.2, port=4420 00:28:26.957 [2024-07-25 10:17:12.080542] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x944540 is same with the state(5) to be set 00:28:26.957 [2024-07-25 10:17:12.080781] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x944540 (9): Bad file descriptor 00:28:26.957 [2024-07-25 10:17:12.081024] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:26.957 [2024-07-25 10:17:12.081049] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:26.957 [2024-07-25 10:17:12.081065] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:26.957 [2024-07-25 10:17:12.084645] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:26.957 [2024-07-25 10:17:12.094103] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:26.957 [2024-07-25 10:17:12.094614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.957 [2024-07-25 10:17:12.094646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x944540 with addr=10.0.0.2, port=4420 00:28:26.957 [2024-07-25 10:17:12.094664] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x944540 is same with the state(5) to be set 00:28:26.957 [2024-07-25 10:17:12.094902] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x944540 (9): Bad file descriptor 00:28:26.957 [2024-07-25 10:17:12.095145] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:26.957 [2024-07-25 10:17:12.095170] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:26.957 [2024-07-25 10:17:12.095186] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:26.957 [2024-07-25 10:17:12.098769] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:26.957 [2024-07-25 10:17:12.108042] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:26.957 [2024-07-25 10:17:12.108579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:26.957 [2024-07-25 10:17:12.108612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x944540 with addr=10.0.0.2, port=4420 00:28:26.957 [2024-07-25 10:17:12.108631] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x944540 is same with the state(5) to be set 00:28:26.957 [2024-07-25 10:17:12.108881] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x944540 (9): Bad file descriptor 00:28:26.957 [2024-07-25 10:17:12.109123] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:26.957 [2024-07-25 10:17:12.109148] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:26.957 [2024-07-25 10:17:12.109164] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:26.957 [2024-07-25 10:17:12.112735] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:27.217 [2024-07-25 10:17:12.121999] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:27.217 [2024-07-25 10:17:12.122543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.217 [2024-07-25 10:17:12.122581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x944540 with addr=10.0.0.2, port=4420 00:28:27.217 [2024-07-25 10:17:12.122615] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x944540 is same with the state(5) to be set 00:28:27.217 [2024-07-25 10:17:12.122854] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x944540 (9): Bad file descriptor 00:28:27.217 [2024-07-25 10:17:12.123096] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:27.217 [2024-07-25 10:17:12.123121] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:27.217 [2024-07-25 10:17:12.123137] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:27.217 [2024-07-25 10:17:12.126717] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:27.217 [2024-07-25 10:17:12.135969] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:27.217 [2024-07-25 10:17:12.136382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.217 [2024-07-25 10:17:12.136421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x944540 with addr=10.0.0.2, port=4420 00:28:27.217 [2024-07-25 10:17:12.136452] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x944540 is same with the state(5) to be set 00:28:27.217 [2024-07-25 10:17:12.136696] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x944540 (9): Bad file descriptor 00:28:27.217 [2024-07-25 10:17:12.136939] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:27.217 [2024-07-25 10:17:12.136964] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:27.217 [2024-07-25 10:17:12.136980] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:27.217 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 548573 Killed "${NVMF_APP[@]}" "$@" 00:28:27.218 10:17:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:28:27.218 10:17:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:28:27.218 10:17:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:27.218 10:17:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:27.218 [2024-07-25 10:17:12.140554] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:27.218 10:17:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:27.218 10:17:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=549521 00:28:27.218 10:17:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:28:27.218 10:17:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 549521 00:28:27.218 10:17:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@831 -- # '[' -z 549521 ']' 00:28:27.218 10:17:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:27.218 10:17:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:27.218 10:17:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:27.218 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:28:27.218 10:17:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:27.218 10:17:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:27.218 [2024-07-25 10:17:12.149830] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:27.218 [2024-07-25 10:17:12.150255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.218 [2024-07-25 10:17:12.150306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x944540 with addr=10.0.0.2, port=4420 00:28:27.218 [2024-07-25 10:17:12.150324] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x944540 is same with the state(5) to be set 00:28:27.218 [2024-07-25 10:17:12.150573] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x944540 (9): Bad file descriptor 00:28:27.218 [2024-07-25 10:17:12.150818] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:27.218 [2024-07-25 10:17:12.150842] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:27.218 [2024-07-25 10:17:12.150859] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:27.218 [2024-07-25 10:17:12.154439] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:27.218 [2024-07-25 10:17:12.163713] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:27.218 [2024-07-25 10:17:12.164153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.218 [2024-07-25 10:17:12.164206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x944540 with addr=10.0.0.2, port=4420 00:28:27.218 [2024-07-25 10:17:12.164225] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x944540 is same with the state(5) to be set 00:28:27.218 [2024-07-25 10:17:12.164475] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x944540 (9): Bad file descriptor 00:28:27.218 [2024-07-25 10:17:12.164719] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:27.218 [2024-07-25 10:17:12.164743] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:27.218 [2024-07-25 10:17:12.164759] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:27.218 [2024-07-25 10:17:12.168328] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:27.218 [2024-07-25 10:17:12.177599] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:27.218 [2024-07-25 10:17:12.177984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.218 [2024-07-25 10:17:12.178021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x944540 with addr=10.0.0.2, port=4420 00:28:27.218 [2024-07-25 10:17:12.178040] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x944540 is same with the state(5) to be set 00:28:27.218 [2024-07-25 10:17:12.178279] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x944540 (9): Bad file descriptor 00:28:27.218 [2024-07-25 10:17:12.178533] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:27.218 [2024-07-25 10:17:12.178557] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:27.218 [2024-07-25 10:17:12.178573] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:27.218 [2024-07-25 10:17:12.182143] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:27.218 [2024-07-25 10:17:12.191618] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:27.218 [2024-07-25 10:17:12.192000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.218 [2024-07-25 10:17:12.192031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x944540 with addr=10.0.0.2, port=4420 00:28:27.218 [2024-07-25 10:17:12.192049] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x944540 is same with the state(5) to be set 00:28:27.218 [2024-07-25 10:17:12.192287] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x944540 (9): Bad file descriptor 00:28:27.218 [2024-07-25 10:17:12.192541] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:27.218 [2024-07-25 10:17:12.192566] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:27.218 [2024-07-25 10:17:12.192582] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:27.218 [2024-07-25 10:17:12.196142] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:27.218 [2024-07-25 10:17:12.205643] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:27.218 [2024-07-25 10:17:12.206091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.218 [2024-07-25 10:17:12.206123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x944540 with addr=10.0.0.2, port=4420 00:28:27.218 [2024-07-25 10:17:12.206141] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x944540 is same with the state(5) to be set 00:28:27.218 [2024-07-25 10:17:12.206379] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x944540 (9): Bad file descriptor 00:28:27.218 [2024-07-25 10:17:12.206558] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:28:27.218 [2024-07-25 10:17:12.206632] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:27.218 [2024-07-25 10:17:12.206653] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:27.218 [2024-07-25 10:17:12.206657] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:27.218 [2024-07-25 10:17:12.206681] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:27.218 [2024-07-25 10:17:12.210243] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:27.218 [2024-07-25 10:17:12.219512] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:27.218 [2024-07-25 10:17:12.219943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.218 [2024-07-25 10:17:12.219974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x944540 with addr=10.0.0.2, port=4420 00:28:27.218 [2024-07-25 10:17:12.219999] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x944540 is same with the state(5) to be set 00:28:27.218 [2024-07-25 10:17:12.220239] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x944540 (9): Bad file descriptor 00:28:27.218 [2024-07-25 10:17:12.220647] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:27.218 [2024-07-25 10:17:12.220673] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:27.218 [2024-07-25 10:17:12.220688] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:27.218 [2024-07-25 10:17:12.224249] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:27.218 [2024-07-25 10:17:12.233531] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:27.218 [2024-07-25 10:17:12.233922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.218 [2024-07-25 10:17:12.233953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x944540 with addr=10.0.0.2, port=4420 00:28:27.218 [2024-07-25 10:17:12.233972] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x944540 is same with the state(5) to be set 00:28:27.218 [2024-07-25 10:17:12.234211] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x944540 (9): Bad file descriptor 00:28:27.218 [2024-07-25 10:17:12.234466] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:27.218 [2024-07-25 10:17:12.234492] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:27.218 [2024-07-25 10:17:12.234508] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:27.218 [2024-07-25 10:17:12.238066] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
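The DPDK EAL parameter line above belongs to the fresh nvmf_tgt instance that tgt_init launches after the previous target process (PID 548573) was killed; the reset failures keep firing while the new target boots. A minimal sketch of that restart, assuming the build tree and cvl_0_0_ns_spdk namespace used by this job, with the harness's waitforlisten helper approximated by a plain rpc.py poll:

    # Relaunch the target inside the test namespace (mirrors the traced command)
    ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
    nvmfpid=$!

    # Poll until the target answers on its default UNIX-domain RPC socket
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done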
00:28:27.218 [2024-07-25 10:17:12.247534] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:27.218 [2024-07-25 10:17:12.247977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.218 [2024-07-25 10:17:12.248008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x944540 with addr=10.0.0.2, port=4420 00:28:27.218 [2024-07-25 10:17:12.248026] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x944540 is same with the state(5) to be set 00:28:27.218 [2024-07-25 10:17:12.248264] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x944540 (9): Bad file descriptor 00:28:27.218 [2024-07-25 10:17:12.248518] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:27.218 [2024-07-25 10:17:12.248544] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:27.218 [2024-07-25 10:17:12.248559] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:27.218 [2024-07-25 10:17:12.252120] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:27.218 EAL: No free 2048 kB hugepages reported on node 1 00:28:27.218 [2024-07-25 10:17:12.261375] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:27.218 [2024-07-25 10:17:12.261815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.219 [2024-07-25 10:17:12.261847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x944540 with addr=10.0.0.2, port=4420 00:28:27.219 [2024-07-25 10:17:12.261876] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x944540 is same with the state(5) to be set 00:28:27.219 [2024-07-25 10:17:12.262114] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x944540 (9): Bad file descriptor 00:28:27.219 [2024-07-25 10:17:12.262363] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:27.219 [2024-07-25 10:17:12.262388] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:27.219 [2024-07-25 10:17:12.262405] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:27.219 [2024-07-25 10:17:12.265979] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:27.219 [2024-07-25 10:17:12.275235] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:27.219 [2024-07-25 10:17:12.275664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.219 [2024-07-25 10:17:12.275695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x944540 with addr=10.0.0.2, port=4420 00:28:27.219 [2024-07-25 10:17:12.275712] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x944540 is same with the state(5) to be set 00:28:27.219 [2024-07-25 10:17:12.275951] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x944540 (9): Bad file descriptor 00:28:27.219 [2024-07-25 10:17:12.276194] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:27.219 [2024-07-25 10:17:12.276218] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:27.219 [2024-07-25 10:17:12.276235] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:27.219 [2024-07-25 10:17:12.279809] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:27.219 [2024-07-25 10:17:12.289076] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:27.219 [2024-07-25 10:17:12.289513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.219 [2024-07-25 10:17:12.289545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x944540 with addr=10.0.0.2, port=4420 00:28:27.219 [2024-07-25 10:17:12.289564] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x944540 is same with the state(5) to be set 00:28:27.219 [2024-07-25 10:17:12.289804] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x944540 (9): Bad file descriptor 00:28:27.219 [2024-07-25 10:17:12.290046] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:27.219 [2024-07-25 10:17:12.290071] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:27.219 [2024-07-25 10:17:12.290087] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:27.219 [2024-07-25 10:17:12.291494] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:27.219 [2024-07-25 10:17:12.293660] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:27.219 [2024-07-25 10:17:12.302967] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:27.219 [2024-07-25 10:17:12.303546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.219 [2024-07-25 10:17:12.303588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x944540 with addr=10.0.0.2, port=4420 00:28:27.219 [2024-07-25 10:17:12.303609] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x944540 is same with the state(5) to be set 00:28:27.219 [2024-07-25 10:17:12.303859] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x944540 (9): Bad file descriptor 00:28:27.219 [2024-07-25 10:17:12.304107] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:27.219 [2024-07-25 10:17:12.304131] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:27.219 [2024-07-25 10:17:12.304149] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:27.219 [2024-07-25 10:17:12.307736] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:27.219 [2024-07-25 10:17:12.317054] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:27.219 [2024-07-25 10:17:12.317526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.219 [2024-07-25 10:17:12.317560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x944540 with addr=10.0.0.2, port=4420 00:28:27.219 [2024-07-25 10:17:12.317578] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x944540 is same with the state(5) to be set 00:28:27.219 [2024-07-25 10:17:12.317819] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x944540 (9): Bad file descriptor 00:28:27.219 [2024-07-25 10:17:12.318063] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:27.219 [2024-07-25 10:17:12.318088] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:27.219 [2024-07-25 10:17:12.318104] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:27.219 [2024-07-25 10:17:12.321680] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:27.219 [2024-07-25 10:17:12.330931] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:27.219 [2024-07-25 10:17:12.331384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.219 [2024-07-25 10:17:12.331416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x944540 with addr=10.0.0.2, port=4420 00:28:27.219 [2024-07-25 10:17:12.331444] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x944540 is same with the state(5) to be set 00:28:27.219 [2024-07-25 10:17:12.331686] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x944540 (9): Bad file descriptor 00:28:27.219 [2024-07-25 10:17:12.331939] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:27.219 [2024-07-25 10:17:12.331964] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:27.219 [2024-07-25 10:17:12.331980] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:27.219 [2024-07-25 10:17:12.335549] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:27.219 [2024-07-25 10:17:12.344799] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:27.219 [2024-07-25 10:17:12.345194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.219 [2024-07-25 10:17:12.345227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x944540 with addr=10.0.0.2, port=4420 00:28:27.219 [2024-07-25 10:17:12.345246] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x944540 is same with the state(5) to be set 00:28:27.219 [2024-07-25 10:17:12.345495] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x944540 (9): Bad file descriptor 00:28:27.219 [2024-07-25 10:17:12.345740] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:27.219 [2024-07-25 10:17:12.345765] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:27.219 [2024-07-25 10:17:12.345781] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:27.219 [2024-07-25 10:17:12.349339] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:27.219 [2024-07-25 10:17:12.358860] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:27.219 [2024-07-25 10:17:12.359364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.219 [2024-07-25 10:17:12.359401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x944540 with addr=10.0.0.2, port=4420 00:28:27.219 [2024-07-25 10:17:12.359442] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x944540 is same with the state(5) to be set 00:28:27.219 [2024-07-25 10:17:12.359693] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x944540 (9): Bad file descriptor 00:28:27.219 [2024-07-25 10:17:12.359939] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:27.219 [2024-07-25 10:17:12.359963] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:27.219 [2024-07-25 10:17:12.359981] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:27.219 [2024-07-25 10:17:12.363555] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:27.219 [2024-07-25 10:17:12.372824] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:27.219 [2024-07-25 10:17:12.373295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.219 [2024-07-25 10:17:12.373332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x944540 with addr=10.0.0.2, port=4420 00:28:27.219 [2024-07-25 10:17:12.373352] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x944540 is same with the state(5) to be set 00:28:27.219 [2024-07-25 10:17:12.373606] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x944540 (9): Bad file descriptor 00:28:27.219 [2024-07-25 10:17:12.373852] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:27.219 [2024-07-25 10:17:12.373879] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:27.219 [2024-07-25 10:17:12.373896] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:27.219 [2024-07-25 10:17:12.377471] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:27.479 [2024-07-25 10:17:12.386732] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:27.479 [2024-07-25 10:17:12.387176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.479 [2024-07-25 10:17:12.387208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x944540 with addr=10.0.0.2, port=4420 00:28:27.479 [2024-07-25 10:17:12.387226] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x944540 is same with the state(5) to be set 00:28:27.479 [2024-07-25 10:17:12.387477] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x944540 (9): Bad file descriptor 00:28:27.479 [2024-07-25 10:17:12.387722] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:27.479 [2024-07-25 10:17:12.387747] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:27.479 [2024-07-25 10:17:12.387764] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:27.479 [2024-07-25 10:17:12.391321] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:27.479 [2024-07-25 10:17:12.400582] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:27.479 [2024-07-25 10:17:12.401027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.479 [2024-07-25 10:17:12.401058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x944540 with addr=10.0.0.2, port=4420 00:28:27.479 [2024-07-25 10:17:12.401077] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x944540 is same with the state(5) to be set 00:28:27.479 [2024-07-25 10:17:12.401316] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x944540 (9): Bad file descriptor 00:28:27.479 [2024-07-25 10:17:12.401572] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:27.479 [2024-07-25 10:17:12.401610] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:27.479 [2024-07-25 10:17:12.401627] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:27.479 [2024-07-25 10:17:12.405189] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:27.479 [2024-07-25 10:17:12.414447] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:27.479 [2024-07-25 10:17:12.414468] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:27.479 [2024-07-25 10:17:12.414507] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:27.479 [2024-07-25 10:17:12.414524] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:27.479 [2024-07-25 10:17:12.414537] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:27.479 [2024-07-25 10:17:12.414549] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
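The app_setup_trace notices above describe how to inspect the tracepoints enabled by -e 0xFFFF. Following those instructions literally would look like this sketch (the spdk_trace binary location is an assumption based on the build tree used by this job):

    # Snapshot the live trace of app instance 0, as the NOTICE suggests
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_trace -s nvmf -i 0

    # Or keep the shared-memory trace file for offline analysis
    cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0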
00:28:27.479 [2024-07-25 10:17:12.414607] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:28:27.479 [2024-07-25 10:17:12.414662] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:28:27.479 [2024-07-25 10:17:12.414666] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:27.479 [2024-07-25 10:17:12.414873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.479 [2024-07-25 10:17:12.414903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x944540 with addr=10.0.0.2, port=4420 00:28:27.479 [2024-07-25 10:17:12.414922] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x944540 is same with the state(5) to be set 00:28:27.479 [2024-07-25 10:17:12.415160] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x944540 (9): Bad file descriptor 00:28:27.479 [2024-07-25 10:17:12.415404] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:27.479 [2024-07-25 10:17:12.415437] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:27.479 [2024-07-25 10:17:12.415456] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:27.479 [2024-07-25 10:17:12.419041] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:27.479 [2024-07-25 10:17:12.428317] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:27.479 [2024-07-25 10:17:12.428861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.479 [2024-07-25 10:17:12.428905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x944540 with addr=10.0.0.2, port=4420 00:28:27.479 [2024-07-25 10:17:12.428927] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x944540 is same with the state(5) to be set 00:28:27.479 [2024-07-25 10:17:12.429176] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x944540 (9): Bad file descriptor 00:28:27.479 [2024-07-25 10:17:12.429424] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:27.479 [2024-07-25 10:17:12.429460] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:27.479 [2024-07-25 10:17:12.429479] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:27.479 [2024-07-25 10:17:12.433042] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
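The three 'Reactor started' notices line up with the -m 0xE core mask passed to nvmf_tgt: 0xE is binary 1110, so reactors run on cores 1, 2 and 3 while core 0 stays free, matching the earlier 'Total cores available: 3' notice. A one-line check of the mask arithmetic:

    echo $(( 0xE ))   # prints 14 = 0b1110 -> bits set for cores 1, 2 and 3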
00:28:27.479 [2024-07-25 10:17:12.442350] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:27.479 [2024-07-25 10:17:12.442892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.479 [2024-07-25 10:17:12.442937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x944540 with addr=10.0.0.2, port=4420 00:28:27.479 [2024-07-25 10:17:12.442970] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x944540 is same with the state(5) to be set 00:28:27.479 [2024-07-25 10:17:12.443217] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x944540 (9): Bad file descriptor 00:28:27.479 [2024-07-25 10:17:12.443475] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:27.479 [2024-07-25 10:17:12.443501] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:27.479 [2024-07-25 10:17:12.443519] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:27.479 [2024-07-25 10:17:12.447078] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:27.479 [2024-07-25 10:17:12.456355] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:27.479 [2024-07-25 10:17:12.456907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.479 [2024-07-25 10:17:12.456952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x944540 with addr=10.0.0.2, port=4420 00:28:27.479 [2024-07-25 10:17:12.456974] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x944540 is same with the state(5) to be set 00:28:27.479 [2024-07-25 10:17:12.457224] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x944540 (9): Bad file descriptor 00:28:27.479 [2024-07-25 10:17:12.457483] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:27.479 [2024-07-25 10:17:12.457508] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:27.479 [2024-07-25 10:17:12.457528] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:27.479 [2024-07-25 10:17:12.461094] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:27.479 [2024-07-25 10:17:12.470366] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:27.479 [2024-07-25 10:17:12.470910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.479 [2024-07-25 10:17:12.470952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x944540 with addr=10.0.0.2, port=4420 00:28:27.479 [2024-07-25 10:17:12.470974] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x944540 is same with the state(5) to be set 00:28:27.479 [2024-07-25 10:17:12.471223] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x944540 (9): Bad file descriptor 00:28:27.480 [2024-07-25 10:17:12.471480] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:27.480 [2024-07-25 10:17:12.471506] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:27.480 [2024-07-25 10:17:12.471524] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:27.480 [2024-07-25 10:17:12.475083] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:27.480 [2024-07-25 10:17:12.484386] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:27.480 [2024-07-25 10:17:12.484957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.480 [2024-07-25 10:17:12.485003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x944540 with addr=10.0.0.2, port=4420 00:28:27.480 [2024-07-25 10:17:12.485024] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x944540 is same with the state(5) to be set 00:28:27.480 [2024-07-25 10:17:12.485273] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x944540 (9): Bad file descriptor 00:28:27.480 [2024-07-25 10:17:12.485530] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:27.480 [2024-07-25 10:17:12.485569] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:27.480 [2024-07-25 10:17:12.485588] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:27.480 [2024-07-25 10:17:12.489152] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:27.480 [2024-07-25 10:17:12.498443] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:27.480 [2024-07-25 10:17:12.498972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.480 [2024-07-25 10:17:12.499014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x944540 with addr=10.0.0.2, port=4420 00:28:27.480 [2024-07-25 10:17:12.499035] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x944540 is same with the state(5) to be set 00:28:27.480 [2024-07-25 10:17:12.499282] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x944540 (9): Bad file descriptor 00:28:27.480 [2024-07-25 10:17:12.499540] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:27.480 [2024-07-25 10:17:12.499566] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:27.480 [2024-07-25 10:17:12.499584] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:27.480 [2024-07-25 10:17:12.503146] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:27.480 [2024-07-25 10:17:12.512402] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:27.480 [2024-07-25 10:17:12.512851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.480 [2024-07-25 10:17:12.512882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x944540 with addr=10.0.0.2, port=4420 00:28:27.480 [2024-07-25 10:17:12.512901] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x944540 is same with the state(5) to be set 00:28:27.480 [2024-07-25 10:17:12.513140] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x944540 (9): Bad file descriptor 00:28:27.480 [2024-07-25 10:17:12.513383] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:27.480 [2024-07-25 10:17:12.513407] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:27.480 [2024-07-25 10:17:12.513423] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:27.480 [2024-07-25 10:17:12.516999] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:27.480 [2024-07-25 10:17:12.526289] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:27.480 [2024-07-25 10:17:12.526730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.480 [2024-07-25 10:17:12.526762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x944540 with addr=10.0.0.2, port=4420 00:28:27.480 [2024-07-25 10:17:12.526781] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x944540 is same with the state(5) to be set 00:28:27.480 [2024-07-25 10:17:12.527019] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x944540 (9): Bad file descriptor 00:28:27.480 [2024-07-25 10:17:12.527262] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:27.480 [2024-07-25 10:17:12.527287] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:27.480 [2024-07-25 10:17:12.527303] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:27.480 [2024-07-25 10:17:12.530873] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:27.480 10:17:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:27.480 10:17:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # return 0 00:28:27.480 10:17:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:27.480 10:17:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:27.480 10:17:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:27.480 [2024-07-25 10:17:12.540138] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:27.480 [2024-07-25 10:17:12.540598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.480 [2024-07-25 10:17:12.540630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x944540 with addr=10.0.0.2, port=4420 00:28:27.480 [2024-07-25 10:17:12.540648] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x944540 is same with the state(5) to be set 00:28:27.480 [2024-07-25 10:17:12.540887] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x944540 (9): Bad file descriptor 00:28:27.480 [2024-07-25 10:17:12.541131] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:27.480 [2024-07-25 10:17:12.541155] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:27.480 [2024-07-25 10:17:12.541172] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:27.480 [2024-07-25 10:17:12.544746] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:27.480 [2024-07-25 10:17:12.554002] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:27.480 [2024-07-25 10:17:12.554415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.480 [2024-07-25 10:17:12.554454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x944540 with addr=10.0.0.2, port=4420 00:28:27.480 [2024-07-25 10:17:12.554473] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x944540 is same with the state(5) to be set 00:28:27.480 [2024-07-25 10:17:12.554712] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x944540 (9): Bad file descriptor 00:28:27.480 [2024-07-25 10:17:12.554956] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:27.480 [2024-07-25 10:17:12.554981] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:27.480 [2024-07-25 10:17:12.554997] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:27.480 10:17:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:27.480 10:17:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:27.480 10:17:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:27.480 [2024-07-25 10:17:12.558566] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:27.480 10:17:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:27.480 [2024-07-25 10:17:12.561970] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:27.480 [2024-07-25 10:17:12.568029] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:27.480 [2024-07-25 10:17:12.568445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.480 [2024-07-25 10:17:12.568476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x944540 with addr=10.0.0.2, port=4420 00:28:27.480 [2024-07-25 10:17:12.568495] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x944540 is same with the state(5) to be set 00:28:27.480 [2024-07-25 10:17:12.568733] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x944540 (9): Bad file descriptor 00:28:27.480 [2024-07-25 10:17:12.568983] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:27.480 [2024-07-25 10:17:12.569008] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:27.480 [2024-07-25 10:17:12.569024] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:27.480 [2024-07-25 10:17:12.572596] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
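Note: the retries above continue until the test's own flow intervenes. If a bounded retry policy is wanted instead, bdev_nvme_attach_controller accepts reconnect tuning; the sketch below is hedged (option names taken from SPDK's rpc.py, verify against your tree; the values are illustrative, not the test's):

    # Attach with a bounded retry policy: give up on the controller after
    # 30 s, retry every 2 s, and fail pending I/O after 10 s.
    ./scripts/rpc.py bdev_nvme_attach_controller -b Nvme1 -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
        --ctrlr-loss-timeout-sec 30 --reconnect-delay-sec 2 --fast-io-fail-timeout-sec 10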
00:28:27.480 10:17:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:27.480 10:17:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:27.480 10:17:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:27.480 10:17:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:27.480 [2024-07-25 10:17:12.582069] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:27.480 [2024-07-25 10:17:12.582499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.480 [2024-07-25 10:17:12.582531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x944540 with addr=10.0.0.2, port=4420 00:28:27.480 [2024-07-25 10:17:12.582549] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x944540 is same with the state(5) to be set 00:28:27.480 [2024-07-25 10:17:12.582789] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x944540 (9): Bad file descriptor 00:28:27.480 [2024-07-25 10:17:12.583033] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:27.480 [2024-07-25 10:17:12.583057] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:27.480 [2024-07-25 10:17:12.583073] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:27.480 [2024-07-25 10:17:12.586649] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:27.480 [2024-07-25 10:17:12.595928] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:27.480 [2024-07-25 10:17:12.596457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.480 [2024-07-25 10:17:12.596500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x944540 with addr=10.0.0.2, port=4420 00:28:27.480 [2024-07-25 10:17:12.596523] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x944540 is same with the state(5) to be set 00:28:27.481 [2024-07-25 10:17:12.596774] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x944540 (9): Bad file descriptor 00:28:27.481 [2024-07-25 10:17:12.597021] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:27.481 [2024-07-25 10:17:12.597045] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:27.481 [2024-07-25 10:17:12.597064] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:27.481 Malloc0 00:28:27.481 [2024-07-25 10:17:12.600650] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:27.481 10:17:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:27.481 10:17:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:27.481 10:17:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:27.481 10:17:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:27.481 10:17:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:27.481 10:17:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:27.481 10:17:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:27.481 10:17:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:27.481 [2024-07-25 10:17:12.609901] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:27.481 [2024-07-25 10:17:12.610315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:27.481 [2024-07-25 10:17:12.610346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x944540 with addr=10.0.0.2, port=4420 00:28:27.481 [2024-07-25 10:17:12.610364] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x944540 is same with the state(5) to be set 00:28:27.481 [2024-07-25 10:17:12.610611] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x944540 (9): Bad file descriptor 00:28:27.481 [2024-07-25 10:17:12.610856] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:27.481 [2024-07-25 10:17:12.610880] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:27.481 [2024-07-25 10:17:12.610896] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:27.481 [2024-07-25 10:17:12.614466] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:27.481 10:17:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:27.481 10:17:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:27.481 10:17:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:27.481 10:17:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:27.481 [2024-07-25 10:17:12.620234] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:27.481 [2024-07-25 10:17:12.623728] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:27.481 10:17:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:27.481 10:17:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 548857 00:28:27.739 [2024-07-25 10:17:12.697588] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
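Note: with the target listening again, the final reset succeeds and the bdevperf run completes once the @38 wait on pid 548857 returns. The target side for this run was configured through the rpc_cmd xtrace lines interleaved above; reproduced by hand it is the following sequence (a sketch assuming a running nvmf_tgt with scripts/rpc.py pointed at it; flags copied verbatim from the log):

    # Target-side setup mirrored from host/bdevperf.sh@17-21:
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420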
00:28:37.716
00:28:37.716                                                                 Latency(us)
00:28:37.716 Device Information : runtime(s)    IOPS     MiB/s    Fail/s    TO/s    Average      min       max
00:28:37.716 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:28:37.716 Verification LBA range: start 0x0 length 0x4000
00:28:37.716 Nvme1n1            :     15.01  6786.72    26.51   8647.96    0.00    8268.41   801.00  20583.16
00:28:37.716 ===================================================================================================================
00:28:37.716 Total              :            6786.72    26.51   8647.96    0.00    8268.41   801.00  20583.16
00:28:37.716 10:17:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync
00:28:37.716 10:17:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:28:37.716 10:17:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:37.716 10:17:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:28:37.716 10:17:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:37.716 10:17:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT
00:28:37.716 10:17:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini
00:28:37.717 10:17:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@488 -- # nvmfcleanup
00:28:37.717 10:17:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # sync
00:28:37.717 10:17:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:28:37.717 10:17:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@120 -- # set +e
00:28:37.717 10:17:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # for i in {1..20}
00:28:37.717 10:17:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:28:37.717 rmmod nvme_tcp
00:28:37.717 rmmod nvme_fabrics
00:28:37.717 rmmod nvme_keyring
00:28:37.717 10:17:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:28:37.717 10:17:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set -e
00:28:37.717 10:17:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # return 0
00:28:37.717 10:17:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@489 -- # '[' -n 549521 ']'
00:28:37.717 10:17:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@490 -- # killprocess 549521
00:28:37.717 10:17:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@950 -- # '[' -z 549521 ']'
00:28:37.717 10:17:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # kill -0 549521
00:28:37.717 10:17:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@955 -- # uname
00:28:37.717 10:17:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:28:37.717 10:17:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 549521
00:28:37.717 10:17:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:28:37.717 10:17:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:28:37.717 10:17:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 549521'
00:28:37.717 killing process with pid 549521
00:28:37.717 10:17:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@969 -- # kill 549521
00:28:37.717 10:17:22
nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@974 -- # wait 549521 00:28:37.717 10:17:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:37.717 10:17:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:37.717 10:17:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:37.717 10:17:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:37.717 10:17:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:37.717 10:17:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:37.717 10:17:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:37.717 10:17:22 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:39.616 10:17:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:39.616 00:28:39.616 real 0m23.015s 00:28:39.616 user 1m0.669s 00:28:39.616 sys 0m4.873s 00:28:39.616 10:17:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:39.616 10:17:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:28:39.617 ************************************ 00:28:39.617 END TEST nvmf_bdevperf 00:28:39.617 ************************************ 00:28:39.617 10:17:24 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:28:39.617 10:17:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:28:39.617 10:17:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:39.617 10:17:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:39.617 ************************************ 00:28:39.617 START TEST nvmf_target_disconnect 00:28:39.617 ************************************ 00:28:39.617 10:17:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:28:39.617 * Looking for test storage... 
00:28:39.617 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:39.617 10:17:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:39.617 10:17:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:28:39.617 10:17:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:39.617 10:17:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:39.617 10:17:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:39.617 10:17:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:39.617 10:17:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:39.617 10:17:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:39.617 10:17:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:39.617 10:17:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:39.617 10:17:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:39.617 10:17:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:39.617 10:17:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:28:39.617 10:17:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:28:39.617 10:17:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:39.617 10:17:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:39.617 10:17:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:39.617 10:17:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:39.617 10:17:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:39.617 10:17:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:39.617 10:17:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:39.617 10:17:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:39.617 10:17:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:39.617 
10:17:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:39.617 10:17:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:39.617 10:17:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:28:39.617 10:17:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:39.617 10:17:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@47 -- # : 0 00:28:39.617 10:17:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:39.617 10:17:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:39.617 10:17:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:39.617 10:17:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:39.617 10:17:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:39.617 10:17:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:39.617 10:17:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:39.617 10:17:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:39.617 10:17:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:28:39.617 10:17:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:28:39.617 10:17:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # 
MALLOC_BLOCK_SIZE=512 00:28:39.617 10:17:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:28:39.617 10:17:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:39.617 10:17:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:39.617 10:17:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:39.617 10:17:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:39.617 10:17:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:39.617 10:17:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:39.617 10:17:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:39.617 10:17:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:39.617 10:17:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:39.617 10:17:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:39.617 10:17:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:28:39.617 10:17:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:28:42.148 10:17:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:42.148 10:17:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:28:42.148 10:17:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:42.148 10:17:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:42.148 10:17:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:42.148 10:17:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:42.148 10:17:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:42.148 10:17:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:28:42.148 10:17:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:42.148 10:17:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@296 -- # e810=() 00:28:42.148 10:17:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:28:42.148 10:17:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # x722=() 00:28:42.148 10:17:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:28:42.148 10:17:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:28:42.148 10:17:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:28:42.148 10:17:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:42.148 10:17:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:42.148 10:17:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:42.148 
10:17:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:42.148 10:17:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:42.148 10:17:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:42.148 10:17:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:42.148 10:17:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:42.148 10:17:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:42.148 10:17:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:42.148 10:17:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:42.148 10:17:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:42.148 10:17:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:42.148 10:17:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:42.148 10:17:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:42.148 10:17:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:42.149 10:17:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:42.149 10:17:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:42.149 10:17:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:28:42.149 Found 0000:84:00.0 (0x8086 - 0x159b) 00:28:42.149 10:17:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:42.149 10:17:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:42.149 10:17:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:42.149 10:17:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:42.149 10:17:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:42.149 10:17:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:42.149 10:17:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:28:42.149 Found 0000:84:00.1 (0x8086 - 0x159b) 00:28:42.149 10:17:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:42.149 10:17:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:42.149 10:17:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:42.149 10:17:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:42.149 10:17:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:42.149 10:17:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:42.149 10:17:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:42.149 10:17:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:42.149 10:17:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:42.149 10:17:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:42.149 10:17:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:42.149 10:17:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:42.149 10:17:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:42.149 10:17:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:42.149 10:17:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:42.149 10:17:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:28:42.149 Found net devices under 0000:84:00.0: cvl_0_0 00:28:42.149 10:17:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:42.149 10:17:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:42.149 10:17:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:42.149 10:17:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:42.149 10:17:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:42.149 10:17:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:42.149 10:17:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:42.149 10:17:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:42.149 10:17:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:28:42.149 Found net devices under 0000:84:00.1: cvl_0_1 00:28:42.149 10:17:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:42.149 10:17:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:42.149 10:17:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:28:42.149 10:17:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:42.149 10:17:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:42.149 10:17:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:42.149 10:17:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:42.149 10:17:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:42.149 10:17:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:42.149 10:17:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:42.149 10:17:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:42.149 10:17:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:42.149 10:17:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:42.149 10:17:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:42.149 10:17:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:42.149 10:17:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:42.149 10:17:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:42.149 10:17:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:42.149 10:17:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:42.149 10:17:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:42.149 10:17:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:42.149 10:17:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:42.149 10:17:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:42.149 10:17:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:42.149 10:17:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:42.149 10:17:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:42.149 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:42.149 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.229 ms 00:28:42.149 00:28:42.149 --- 10.0.0.2 ping statistics --- 00:28:42.149 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:42.149 rtt min/avg/max/mdev = 0.229/0.229/0.229/0.000 ms 00:28:42.149 10:17:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:42.149 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:42.149 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.156 ms 00:28:42.149 00:28:42.149 --- 10.0.0.1 ping statistics --- 00:28:42.149 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:42.149 rtt min/avg/max/mdev = 0.156/0.156/0.156/0.000 ms 00:28:42.149 10:17:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:42.149 10:17:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # return 0 00:28:42.149 10:17:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:42.149 10:17:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:42.149 10:17:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:42.149 10:17:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:42.149 10:17:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:42.149 10:17:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:42.149 10:17:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:42.408 10:17:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:28:42.408 10:17:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:28:42.408 10:17:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:42.408 10:17:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:28:42.408 ************************************ 00:28:42.408 START TEST nvmf_target_disconnect_tc1 00:28:42.408 ************************************ 00:28:42.408 10:17:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1125 -- # nvmf_target_disconnect_tc1 00:28:42.408 10:17:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:42.408 10:17:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # local es=0 00:28:42.408 10:17:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:42.408 10:17:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:28:42.408 10:17:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:42.408 10:17:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:28:42.408 10:17:27 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:42.408 10:17:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:28:42.408 10:17:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:42.408 10:17:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:28:42.408 10:17:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:28:42.408 10:17:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:42.408 EAL: No free 2048 kB hugepages reported on node 1 00:28:42.408 [2024-07-25 10:17:27.480784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:42.408 [2024-07-25 10:17:27.480875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15bc790 with addr=10.0.0.2, port=4420 00:28:42.408 [2024-07-25 10:17:27.480914] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:28:42.408 [2024-07-25 10:17:27.480942] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:28:42.408 [2024-07-25 10:17:27.480957] nvme.c: 913:spdk_nvme_probe: *ERROR*: Create probe context failed 00:28:42.408 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:28:42.408 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:28:42.408 Initializing NVMe Controllers 00:28:42.408 10:17:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@653 -- # es=1 00:28:42.408 10:17:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:28:42.408 10:17:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:28:42.408 10:17:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:28:42.408 00:28:42.408 real 0m0.134s 00:28:42.408 user 0m0.054s 00:28:42.408 sys 0m0.080s 00:28:42.408 10:17:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:42.408 10:17:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:42.408 ************************************ 00:28:42.408 END TEST nvmf_target_disconnect_tc1 00:28:42.408 ************************************ 00:28:42.408 10:17:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:28:42.408 10:17:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:28:42.408 10:17:27 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:42.409 10:17:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:28:42.409 ************************************ 00:28:42.409 START TEST nvmf_target_disconnect_tc2 00:28:42.409 ************************************ 00:28:42.409 10:17:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1125 -- # nvmf_target_disconnect_tc2 00:28:42.409 10:17:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:28:42.409 10:17:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:28:42.409 10:17:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:42.409 10:17:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:42.409 10:17:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:42.409 10:17:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=552804 00:28:42.409 10:17:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:28:42.409 10:17:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 552804 00:28:42.409 10:17:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # '[' -z 552804 ']' 00:28:42.409 10:17:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:42.409 10:17:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:42.409 10:17:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:42.409 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:42.409 10:17:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:42.409 10:17:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:42.667 [2024-07-25 10:17:27.636330] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:28:42.667 [2024-07-25 10:17:27.636511] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:42.667 EAL: No free 2048 kB hugepages reported on node 1 00:28:42.667 [2024-07-25 10:17:27.723311] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:42.926 [2024-07-25 10:17:27.918519] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
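Note: for tc2 the target is relaunched inside the cvl_0_0_ns_spdk namespace built during nvmftestinit, and the EAL/reactor banner above reflects the core mask. Condensed from the nvmfappstart xtrace (paths shortened; the $nvmfpid capture is added here for the later kill):

    # Start the target in the namespace; -m 0xF0 pins the four reactors to
    # cores 4-7 (matching the "Reactor started on core 4..7" lines above),
    # -e 0xFFFF enables all tracepoint groups, -i 0 sets the shm id.
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 &
    nvmfpid=$!   # 552804 in this run; waitforlisten then polls /var/tmp/spdk.sock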
00:28:42.926 [2024-07-25 10:17:27.918578] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:42.926 [2024-07-25 10:17:27.918596] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:42.926 [2024-07-25 10:17:27.918609] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:42.926 [2024-07-25 10:17:27.918622] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:42.926 [2024-07-25 10:17:27.918706] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:28:42.926 [2024-07-25 10:17:27.918770] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:28:42.926 [2024-07-25 10:17:27.918820] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:28:42.926 [2024-07-25 10:17:27.918823] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:28:42.926 10:17:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:42.926 10:17:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # return 0 00:28:42.926 10:17:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:42.926 10:17:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:42.926 10:17:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:42.926 10:17:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:42.926 10:17:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:42.926 10:17:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:42.926 10:17:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:43.184 Malloc0 00:28:43.184 10:17:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:43.184 10:17:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:28:43.184 10:17:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:43.184 10:17:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:43.184 [2024-07-25 10:17:28.114506] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:43.184 10:17:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:43.184 10:17:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:43.184 10:17:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 
00:28:43.184 10:17:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:43.184 10:17:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:43.184 10:17:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:43.184 10:17:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:43.184 10:17:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:43.184 10:17:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:43.184 10:17:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:43.184 10:17:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:43.184 10:17:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:43.184 [2024-07-25 10:17:28.142823] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:43.184 10:17:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:43.184 10:17:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:43.184 10:17:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:43.184 10:17:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:43.184 10:17:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:43.184 10:17:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=552838 00:28:43.184 10:17:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:28:43.184 10:17:28 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:43.184 EAL: No free 2048 kB hugepages reported on node 1 00:28:45.089 10:17:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 552804 00:28:45.089 10:17:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:28:45.089 Read completed with error (sct=0, sc=8) 00:28:45.089 starting I/O failed 00:28:45.089 Read completed with error (sct=0, sc=8) 00:28:45.089 starting I/O failed 00:28:45.089 Read completed with error (sct=0, sc=8) 00:28:45.089 starting I/O failed 00:28:45.089 Read completed with error (sct=0, sc=8) 00:28:45.089 starting I/O 
00:28:45.089 Read completed with error (sct=0, sc=8)
00:28:45.089 starting I/O failed
[... a long run of further Read/Write completions with error (sct=0, sc=8), each followed by "starting I/O failed", elided ...]
00:28:45.089 [2024-07-25 10:17:30.168747] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
[... a further run of Read/Write completions with error (sct=0, sc=8), each followed by "starting I/O failed", elided ...]
00:28:45.090 [2024-07-25 10:17:30.169135] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:28:45.090 [2024-07-25 10:17:30.169374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.090 [2024-07-25 10:17:30.169425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420
00:28:45.090 qpair failed and we were unable to recover it.
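At the OS level this failure signature is simple: errno 111 is ECONNREFUSED (nothing is listening on 10.0.0.2:4420 once the target is gone), and the CQ transport error -6 is -ENXIO, whose strerror text is exactly the "No such device or address" printed above. The aborted I/Os complete with sct=0, sc=8; in the NVMe generic command status table, 0x08 is Command Aborted due to SQ Deletion, which matches the driver tearing down the qpair and failing everything still queued. A quick hypothetical check of both errnos and of the refused connect, from a shell and not part of the test:

    # Decode the two errno values seen in the log.
    python3 -c 'import errno, os; [print(e, errno.errorcode[e], "-", os.strerror(e)) for e in (111, 6)]'

    # With the target dead, a plain TCP connect to the listener is refused
    # immediately; this is what every reconnect retry below keeps hitting.
    bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420' || echo 'connect to 10.0.0.2:4420 refused'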
[... the identical triplet (posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111; nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats continuously, timestamps 10:17:30.170183 through 10:17:30.211249, elided ...]
00:28:45.095 [2024-07-25 10:17:30.211438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.095 [2024-07-25 10:17:30.211464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.095 qpair failed and we were unable to recover it. 00:28:45.095 [2024-07-25 10:17:30.211650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.095 [2024-07-25 10:17:30.211690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.095 qpair failed and we were unable to recover it. 00:28:45.095 [2024-07-25 10:17:30.211874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.095 [2024-07-25 10:17:30.211916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.095 qpair failed and we were unable to recover it. 00:28:45.095 [2024-07-25 10:17:30.212102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.095 [2024-07-25 10:17:30.212145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.095 qpair failed and we were unable to recover it. 00:28:45.095 [2024-07-25 10:17:30.212311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.095 [2024-07-25 10:17:30.212335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.095 qpair failed and we were unable to recover it. 00:28:45.095 [2024-07-25 10:17:30.212536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.095 [2024-07-25 10:17:30.212580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.095 qpair failed and we were unable to recover it. 00:28:45.095 [2024-07-25 10:17:30.212788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.095 [2024-07-25 10:17:30.212817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.095 qpair failed and we were unable to recover it. 00:28:45.095 [2024-07-25 10:17:30.212992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.095 [2024-07-25 10:17:30.213035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.095 qpair failed and we were unable to recover it. 00:28:45.095 [2024-07-25 10:17:30.213189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.095 [2024-07-25 10:17:30.213231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.095 qpair failed and we were unable to recover it. 00:28:45.095 [2024-07-25 10:17:30.213412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.095 [2024-07-25 10:17:30.213441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.095 qpair failed and we were unable to recover it. 
00:28:45.095 [2024-07-25 10:17:30.213609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.095 [2024-07-25 10:17:30.213638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.095 qpair failed and we were unable to recover it. 00:28:45.095 [2024-07-25 10:17:30.213856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.095 [2024-07-25 10:17:30.213898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.095 qpair failed and we were unable to recover it. 00:28:45.095 [2024-07-25 10:17:30.214073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.095 [2024-07-25 10:17:30.214115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.095 qpair failed and we were unable to recover it. 00:28:45.095 [2024-07-25 10:17:30.214329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.095 [2024-07-25 10:17:30.214354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.095 qpair failed and we were unable to recover it. 00:28:45.095 [2024-07-25 10:17:30.214556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.095 [2024-07-25 10:17:30.214581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.095 qpair failed and we were unable to recover it. 00:28:45.095 [2024-07-25 10:17:30.214803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.095 [2024-07-25 10:17:30.214848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.095 qpair failed and we were unable to recover it. 00:28:45.095 [2024-07-25 10:17:30.215001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.095 [2024-07-25 10:17:30.215031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.095 qpair failed and we were unable to recover it. 00:28:45.095 [2024-07-25 10:17:30.215190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.095 [2024-07-25 10:17:30.215220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.095 qpair failed and we were unable to recover it. 00:28:45.095 [2024-07-25 10:17:30.215379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.095 [2024-07-25 10:17:30.215404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.095 qpair failed and we were unable to recover it. 00:28:45.095 [2024-07-25 10:17:30.215614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.095 [2024-07-25 10:17:30.215657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.095 qpair failed and we were unable to recover it. 
00:28:45.095 [2024-07-25 10:17:30.215827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.095 [2024-07-25 10:17:30.215871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.095 qpair failed and we were unable to recover it. 00:28:45.095 [2024-07-25 10:17:30.216033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.095 [2024-07-25 10:17:30.216082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.095 qpair failed and we were unable to recover it. 00:28:45.095 [2024-07-25 10:17:30.216298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.095 [2024-07-25 10:17:30.216322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.095 qpair failed and we were unable to recover it. 00:28:45.095 [2024-07-25 10:17:30.216503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.095 [2024-07-25 10:17:30.216527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.095 qpair failed and we were unable to recover it. 00:28:45.095 [2024-07-25 10:17:30.216745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.095 [2024-07-25 10:17:30.216774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.095 qpair failed and we were unable to recover it. 00:28:45.095 [2024-07-25 10:17:30.216934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.095 [2024-07-25 10:17:30.216977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.095 qpair failed and we were unable to recover it. 00:28:45.095 [2024-07-25 10:17:30.217175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.095 [2024-07-25 10:17:30.217221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.095 qpair failed and we were unable to recover it. 00:28:45.095 [2024-07-25 10:17:30.217440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.096 [2024-07-25 10:17:30.217465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.096 qpair failed and we were unable to recover it. 00:28:45.096 [2024-07-25 10:17:30.217678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.096 [2024-07-25 10:17:30.217722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.096 qpair failed and we were unable to recover it. 00:28:45.096 [2024-07-25 10:17:30.217868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.096 [2024-07-25 10:17:30.217912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.096 qpair failed and we were unable to recover it. 
00:28:45.096 [2024-07-25 10:17:30.218105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.096 [2024-07-25 10:17:30.218148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.096 qpair failed and we were unable to recover it. 00:28:45.096 [2024-07-25 10:17:30.218307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.096 [2024-07-25 10:17:30.218331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.096 qpair failed and we were unable to recover it. 00:28:45.096 [2024-07-25 10:17:30.218516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.096 [2024-07-25 10:17:30.218547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.096 qpair failed and we were unable to recover it. 00:28:45.096 [2024-07-25 10:17:30.218778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.096 [2024-07-25 10:17:30.218820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.096 qpair failed and we were unable to recover it. 00:28:45.096 [2024-07-25 10:17:30.218983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.096 [2024-07-25 10:17:30.219025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.096 qpair failed and we were unable to recover it. 00:28:45.096 [2024-07-25 10:17:30.219207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.096 [2024-07-25 10:17:30.219250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.096 qpair failed and we were unable to recover it. 00:28:45.096 [2024-07-25 10:17:30.219419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.096 [2024-07-25 10:17:30.219464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.096 qpair failed and we were unable to recover it. 00:28:45.096 [2024-07-25 10:17:30.219660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.096 [2024-07-25 10:17:30.219704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.096 qpair failed and we were unable to recover it. 00:28:45.096 [2024-07-25 10:17:30.219863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.096 [2024-07-25 10:17:30.219906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.096 qpair failed and we were unable to recover it. 00:28:45.096 [2024-07-25 10:17:30.220102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.096 [2024-07-25 10:17:30.220132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.096 qpair failed and we were unable to recover it. 
00:28:45.096 [2024-07-25 10:17:30.220367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.096 [2024-07-25 10:17:30.220391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.096 qpair failed and we were unable to recover it. 00:28:45.096 [2024-07-25 10:17:30.220563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.096 [2024-07-25 10:17:30.220594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.096 qpair failed and we were unable to recover it. 00:28:45.096 [2024-07-25 10:17:30.220766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.096 [2024-07-25 10:17:30.220808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.096 qpair failed and we were unable to recover it. 00:28:45.096 [2024-07-25 10:17:30.220962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.096 [2024-07-25 10:17:30.221005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.096 qpair failed and we were unable to recover it. 00:28:45.096 [2024-07-25 10:17:30.221192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.096 [2024-07-25 10:17:30.221235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.096 qpair failed and we were unable to recover it. 00:28:45.096 [2024-07-25 10:17:30.221382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.096 [2024-07-25 10:17:30.221407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.096 qpair failed and we were unable to recover it. 00:28:45.096 [2024-07-25 10:17:30.221592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.096 [2024-07-25 10:17:30.221621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.096 qpair failed and we were unable to recover it. 00:28:45.096 [2024-07-25 10:17:30.221848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.096 [2024-07-25 10:17:30.221890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.096 qpair failed and we were unable to recover it. 00:28:45.096 [2024-07-25 10:17:30.222080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.096 [2024-07-25 10:17:30.222123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.096 qpair failed and we were unable to recover it. 00:28:45.096 [2024-07-25 10:17:30.222312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.096 [2024-07-25 10:17:30.222337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.096 qpair failed and we were unable to recover it. 
00:28:45.096 [2024-07-25 10:17:30.222549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.096 [2024-07-25 10:17:30.222591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.096 qpair failed and we were unable to recover it. 00:28:45.096 [2024-07-25 10:17:30.222803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.096 [2024-07-25 10:17:30.222846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.096 qpair failed and we were unable to recover it. 00:28:45.096 [2024-07-25 10:17:30.223038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.096 [2024-07-25 10:17:30.223080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.096 qpair failed and we were unable to recover it. 00:28:45.096 [2024-07-25 10:17:30.223305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.096 [2024-07-25 10:17:30.223328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.096 qpair failed and we were unable to recover it. 00:28:45.096 [2024-07-25 10:17:30.223519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.096 [2024-07-25 10:17:30.223544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.096 qpair failed and we were unable to recover it. 00:28:45.096 [2024-07-25 10:17:30.223754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.096 [2024-07-25 10:17:30.223796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.096 qpair failed and we were unable to recover it. 00:28:45.096 [2024-07-25 10:17:30.223989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.096 [2024-07-25 10:17:30.224031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.096 qpair failed and we were unable to recover it. 00:28:45.096 [2024-07-25 10:17:30.224234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.096 [2024-07-25 10:17:30.224277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.096 qpair failed and we were unable to recover it. 00:28:45.096 [2024-07-25 10:17:30.224490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.096 [2024-07-25 10:17:30.224533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.096 qpair failed and we were unable to recover it. 00:28:45.096 [2024-07-25 10:17:30.224737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.096 [2024-07-25 10:17:30.224780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.096 qpair failed and we were unable to recover it. 
00:28:45.096 [2024-07-25 10:17:30.224966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.096 [2024-07-25 10:17:30.225009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.096 qpair failed and we were unable to recover it. 00:28:45.096 [2024-07-25 10:17:30.225173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.096 [2024-07-25 10:17:30.225197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.096 qpair failed and we were unable to recover it. 00:28:45.096 [2024-07-25 10:17:30.225387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.096 [2024-07-25 10:17:30.225411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.096 qpair failed and we were unable to recover it. 00:28:45.096 [2024-07-25 10:17:30.225632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.096 [2024-07-25 10:17:30.225674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.096 qpair failed and we were unable to recover it. 00:28:45.096 [2024-07-25 10:17:30.225903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.096 [2024-07-25 10:17:30.225946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.096 qpair failed and we were unable to recover it. 00:28:45.097 [2024-07-25 10:17:30.226116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.097 [2024-07-25 10:17:30.226160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.097 qpair failed and we were unable to recover it. 00:28:45.097 [2024-07-25 10:17:30.226344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.097 [2024-07-25 10:17:30.226372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.097 qpair failed and we were unable to recover it. 00:28:45.097 [2024-07-25 10:17:30.226565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.097 [2024-07-25 10:17:30.226609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.097 qpair failed and we were unable to recover it. 00:28:45.097 [2024-07-25 10:17:30.226778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.097 [2024-07-25 10:17:30.226820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.097 qpair failed and we were unable to recover it. 00:28:45.097 [2024-07-25 10:17:30.227018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.097 [2024-07-25 10:17:30.227060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.097 qpair failed and we were unable to recover it. 
00:28:45.097 [2024-07-25 10:17:30.227248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.097 [2024-07-25 10:17:30.227272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.097 qpair failed and we were unable to recover it. 00:28:45.097 [2024-07-25 10:17:30.227463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.097 [2024-07-25 10:17:30.227507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.097 qpair failed and we were unable to recover it. 00:28:45.097 [2024-07-25 10:17:30.227715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.097 [2024-07-25 10:17:30.227759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.097 qpair failed and we were unable to recover it. 00:28:45.097 [2024-07-25 10:17:30.227930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.097 [2024-07-25 10:17:30.227973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.097 qpair failed and we were unable to recover it. 00:28:45.097 [2024-07-25 10:17:30.228124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.097 [2024-07-25 10:17:30.228167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.097 qpair failed and we were unable to recover it. 00:28:45.097 [2024-07-25 10:17:30.228358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.097 [2024-07-25 10:17:30.228382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.097 qpair failed and we were unable to recover it. 00:28:45.097 [2024-07-25 10:17:30.228614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.097 [2024-07-25 10:17:30.228657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.097 qpair failed and we were unable to recover it. 00:28:45.097 [2024-07-25 10:17:30.228807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.097 [2024-07-25 10:17:30.228837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.097 qpair failed and we were unable to recover it. 00:28:45.097 [2024-07-25 10:17:30.229053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.097 [2024-07-25 10:17:30.229095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.097 qpair failed and we were unable to recover it. 00:28:45.097 [2024-07-25 10:17:30.229236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.097 [2024-07-25 10:17:30.229259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.097 qpair failed and we were unable to recover it. 
00:28:45.097 [2024-07-25 10:17:30.229474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.097 [2024-07-25 10:17:30.229498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.097 qpair failed and we were unable to recover it. 00:28:45.097 [2024-07-25 10:17:30.229717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.097 [2024-07-25 10:17:30.229759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.097 qpair failed and we were unable to recover it. 00:28:45.097 [2024-07-25 10:17:30.229923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.097 [2024-07-25 10:17:30.229966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.097 qpair failed and we were unable to recover it. 00:28:45.097 [2024-07-25 10:17:30.230131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.097 [2024-07-25 10:17:30.230174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.097 qpair failed and we were unable to recover it. 00:28:45.097 [2024-07-25 10:17:30.230333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.097 [2024-07-25 10:17:30.230356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.097 qpair failed and we were unable to recover it. 00:28:45.097 [2024-07-25 10:17:30.230569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.097 [2024-07-25 10:17:30.230613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.097 qpair failed and we were unable to recover it. 00:28:45.097 [2024-07-25 10:17:30.230786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.097 [2024-07-25 10:17:30.230830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.097 qpair failed and we were unable to recover it. 00:28:45.097 [2024-07-25 10:17:30.231003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.097 [2024-07-25 10:17:30.231047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.097 qpair failed and we were unable to recover it. 00:28:45.097 [2024-07-25 10:17:30.231221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.097 [2024-07-25 10:17:30.231245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.097 qpair failed and we were unable to recover it. 00:28:45.097 [2024-07-25 10:17:30.231451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.097 [2024-07-25 10:17:30.231476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.097 qpair failed and we were unable to recover it. 
00:28:45.097 [2024-07-25 10:17:30.231667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.097 [2024-07-25 10:17:30.231711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.097 qpair failed and we were unable to recover it. 00:28:45.097 [2024-07-25 10:17:30.231925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.097 [2024-07-25 10:17:30.231968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.097 qpair failed and we were unable to recover it. 00:28:45.097 [2024-07-25 10:17:30.232160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.097 [2024-07-25 10:17:30.232203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.097 qpair failed and we were unable to recover it. 00:28:45.097 [2024-07-25 10:17:30.232353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.097 [2024-07-25 10:17:30.232376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.097 qpair failed and we were unable to recover it. 00:28:45.097 [2024-07-25 10:17:30.232580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.097 [2024-07-25 10:17:30.232623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.097 qpair failed and we were unable to recover it. 00:28:45.097 [2024-07-25 10:17:30.232826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.097 [2024-07-25 10:17:30.232869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.097 qpair failed and we were unable to recover it. 00:28:45.097 [2024-07-25 10:17:30.233039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.097 [2024-07-25 10:17:30.233081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.097 qpair failed and we were unable to recover it. 00:28:45.097 [2024-07-25 10:17:30.233233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.097 [2024-07-25 10:17:30.233256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.097 qpair failed and we were unable to recover it. 00:28:45.097 [2024-07-25 10:17:30.233464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.097 [2024-07-25 10:17:30.233509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.097 qpair failed and we were unable to recover it. 00:28:45.097 [2024-07-25 10:17:30.233727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.097 [2024-07-25 10:17:30.233769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.097 qpair failed and we were unable to recover it. 
00:28:45.097 [2024-07-25 10:17:30.233962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.097 [2024-07-25 10:17:30.233991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.097 qpair failed and we were unable to recover it. 00:28:45.097 [2024-07-25 10:17:30.234174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.097 [2024-07-25 10:17:30.234217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.097 qpair failed and we were unable to recover it. 00:28:45.097 [2024-07-25 10:17:30.234392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.097 [2024-07-25 10:17:30.234416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.098 qpair failed and we were unable to recover it. 00:28:45.098 [2024-07-25 10:17:30.234644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.098 [2024-07-25 10:17:30.234687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.098 qpair failed and we were unable to recover it. 00:28:45.098 [2024-07-25 10:17:30.234865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.098 [2024-07-25 10:17:30.234908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.098 qpair failed and we were unable to recover it. 00:28:45.098 [2024-07-25 10:17:30.235084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.098 [2024-07-25 10:17:30.235127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.098 qpair failed and we were unable to recover it. 00:28:45.098 [2024-07-25 10:17:30.235318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.098 [2024-07-25 10:17:30.235346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.098 qpair failed and we were unable to recover it. 00:28:45.098 [2024-07-25 10:17:30.235544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.098 [2024-07-25 10:17:30.235588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.098 qpair failed and we were unable to recover it. 00:28:45.098 [2024-07-25 10:17:30.235766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.098 [2024-07-25 10:17:30.235816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.098 qpair failed and we were unable to recover it. 00:28:45.098 [2024-07-25 10:17:30.236032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.098 [2024-07-25 10:17:30.236074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.098 qpair failed and we were unable to recover it. 
00:28:45.098 [2024-07-25 10:17:30.236265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.098 [2024-07-25 10:17:30.236290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.098 qpair failed and we were unable to recover it. 00:28:45.098 [2024-07-25 10:17:30.236470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.098 [2024-07-25 10:17:30.236509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.098 qpair failed and we were unable to recover it. 00:28:45.098 [2024-07-25 10:17:30.236725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.098 [2024-07-25 10:17:30.236768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.098 qpair failed and we were unable to recover it. 00:28:45.098 [2024-07-25 10:17:30.236952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.098 [2024-07-25 10:17:30.236994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.098 qpair failed and we were unable to recover it. 00:28:45.098 [2024-07-25 10:17:30.237131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.098 [2024-07-25 10:17:30.237160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.098 qpair failed and we were unable to recover it. 00:28:45.098 [2024-07-25 10:17:30.237374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.098 [2024-07-25 10:17:30.237396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.098 qpair failed and we were unable to recover it. 00:28:45.098 [2024-07-25 10:17:30.237595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.098 [2024-07-25 10:17:30.237636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.098 qpair failed and we were unable to recover it. 00:28:45.098 [2024-07-25 10:17:30.237846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.098 [2024-07-25 10:17:30.237888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.098 qpair failed and we were unable to recover it. 00:28:45.098 [2024-07-25 10:17:30.238045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.098 [2024-07-25 10:17:30.238086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.098 qpair failed and we were unable to recover it. 00:28:45.098 [2024-07-25 10:17:30.238263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.098 [2024-07-25 10:17:30.238285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.098 qpair failed and we were unable to recover it. 
00:28:45.098 [2024-07-25 10:17:30.238491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.098 [2024-07-25 10:17:30.238520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.098 qpair failed and we were unable to recover it. 00:28:45.098 [2024-07-25 10:17:30.238735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.098 [2024-07-25 10:17:30.238776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.098 qpair failed and we were unable to recover it. 00:28:45.098 [2024-07-25 10:17:30.238979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.098 [2024-07-25 10:17:30.239006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.098 qpair failed and we were unable to recover it. 00:28:45.098 [2024-07-25 10:17:30.239226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.098 [2024-07-25 10:17:30.239267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.098 qpair failed and we were unable to recover it. 00:28:45.098 [2024-07-25 10:17:30.239441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.098 [2024-07-25 10:17:30.239479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.098 qpair failed and we were unable to recover it. 00:28:45.098 [2024-07-25 10:17:30.239640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.098 [2024-07-25 10:17:30.239682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.098 qpair failed and we were unable to recover it. 00:28:45.098 [2024-07-25 10:17:30.239886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.098 [2024-07-25 10:17:30.239929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.098 qpair failed and we were unable to recover it. 00:28:45.098 [2024-07-25 10:17:30.240082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.098 [2024-07-25 10:17:30.240124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.098 qpair failed and we were unable to recover it. 00:28:45.098 [2024-07-25 10:17:30.240345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.098 [2024-07-25 10:17:30.240369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.098 qpair failed and we were unable to recover it. 00:28:45.098 [2024-07-25 10:17:30.240601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.098 [2024-07-25 10:17:30.240627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.098 qpair failed and we were unable to recover it. 
00:28:45.098 [2024-07-25 10:17:30.240804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.098 [2024-07-25 10:17:30.240834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420
00:28:45.098 qpair failed and we were unable to recover it.
[... the same three-line error triplet (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats, with only the timestamps changing, roughly 200 more times between 10:17:30.240 and 10:17:30.287 (elapsed 00:28:45.098 to 00:28:45.380), ending with the final occurrence below ...]
00:28:45.380 [2024-07-25 10:17:30.287357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.380 [2024-07-25 10:17:30.287383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420
00:28:45.380 qpair failed and we were unable to recover it.
00:28:45.380 [2024-07-25 10:17:30.287577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.380 [2024-07-25 10:17:30.287620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.380 qpair failed and we were unable to recover it. 00:28:45.380 [2024-07-25 10:17:30.287774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.380 [2024-07-25 10:17:30.287803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.380 qpair failed and we were unable to recover it. 00:28:45.380 [2024-07-25 10:17:30.287958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.380 [2024-07-25 10:17:30.287987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.380 qpair failed and we were unable to recover it. 00:28:45.380 [2024-07-25 10:17:30.288186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.380 [2024-07-25 10:17:30.288228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.380 qpair failed and we were unable to recover it. 00:28:45.380 [2024-07-25 10:17:30.288400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.380 [2024-07-25 10:17:30.288424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.380 qpair failed and we were unable to recover it. 00:28:45.380 [2024-07-25 10:17:30.288640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.381 [2024-07-25 10:17:30.288684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.381 qpair failed and we were unable to recover it. 00:28:45.381 [2024-07-25 10:17:30.288849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.381 [2024-07-25 10:17:30.288891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.381 qpair failed and we were unable to recover it. 00:28:45.381 [2024-07-25 10:17:30.289078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.381 [2024-07-25 10:17:30.289121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.381 qpair failed and we were unable to recover it. 00:28:45.381 [2024-07-25 10:17:30.289288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.381 [2024-07-25 10:17:30.289311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.381 qpair failed and we were unable to recover it. 00:28:45.381 [2024-07-25 10:17:30.289502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.381 [2024-07-25 10:17:30.289532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.381 qpair failed and we were unable to recover it. 
00:28:45.381 [2024-07-25 10:17:30.289728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.381 [2024-07-25 10:17:30.289772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.381 qpair failed and we were unable to recover it. 00:28:45.381 [2024-07-25 10:17:30.289959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.381 [2024-07-25 10:17:30.290002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.381 qpair failed and we were unable to recover it. 00:28:45.381 [2024-07-25 10:17:30.290162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.381 [2024-07-25 10:17:30.290204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.381 qpair failed and we were unable to recover it. 00:28:45.381 [2024-07-25 10:17:30.290392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.381 [2024-07-25 10:17:30.290416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.381 qpair failed and we were unable to recover it. 00:28:45.381 [2024-07-25 10:17:30.290592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.381 [2024-07-25 10:17:30.290637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.381 qpair failed and we were unable to recover it. 00:28:45.381 [2024-07-25 10:17:30.290856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.381 [2024-07-25 10:17:30.290899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.381 qpair failed and we were unable to recover it. 00:28:45.381 [2024-07-25 10:17:30.291078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.381 [2024-07-25 10:17:30.291120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.381 qpair failed and we were unable to recover it. 00:28:45.381 [2024-07-25 10:17:30.291296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.381 [2024-07-25 10:17:30.291320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.381 qpair failed and we were unable to recover it. 00:28:45.381 [2024-07-25 10:17:30.291542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.381 [2024-07-25 10:17:30.291587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.381 qpair failed and we were unable to recover it. 00:28:45.381 [2024-07-25 10:17:30.291782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.381 [2024-07-25 10:17:30.291825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.381 qpair failed and we were unable to recover it. 
00:28:45.381 [2024-07-25 10:17:30.292021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.381 [2024-07-25 10:17:30.292050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.381 qpair failed and we were unable to recover it. 00:28:45.381 [2024-07-25 10:17:30.292268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.381 [2024-07-25 10:17:30.292292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.381 qpair failed and we were unable to recover it. 00:28:45.381 [2024-07-25 10:17:30.292450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.381 [2024-07-25 10:17:30.292476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.381 qpair failed and we were unable to recover it. 00:28:45.381 [2024-07-25 10:17:30.292682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.381 [2024-07-25 10:17:30.292712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.381 qpair failed and we were unable to recover it. 00:28:45.381 [2024-07-25 10:17:30.292870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.381 [2024-07-25 10:17:30.292899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.381 qpair failed and we were unable to recover it. 00:28:45.381 [2024-07-25 10:17:30.293075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.381 [2024-07-25 10:17:30.293119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.381 qpair failed and we were unable to recover it. 00:28:45.381 [2024-07-25 10:17:30.293305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.381 [2024-07-25 10:17:30.293329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.381 qpair failed and we were unable to recover it. 00:28:45.381 [2024-07-25 10:17:30.293511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.381 [2024-07-25 10:17:30.293541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.381 qpair failed and we were unable to recover it. 00:28:45.381 [2024-07-25 10:17:30.293778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.381 [2024-07-25 10:17:30.293822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.381 qpair failed and we were unable to recover it. 00:28:45.381 [2024-07-25 10:17:30.293967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.381 [2024-07-25 10:17:30.294011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.381 qpair failed and we were unable to recover it. 
00:28:45.381 [2024-07-25 10:17:30.294184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.381 [2024-07-25 10:17:30.294228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.381 qpair failed and we were unable to recover it. 00:28:45.381 [2024-07-25 10:17:30.294452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.381 [2024-07-25 10:17:30.294476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.381 qpair failed and we were unable to recover it. 00:28:45.381 [2024-07-25 10:17:30.294664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.381 [2024-07-25 10:17:30.294706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.381 qpair failed and we were unable to recover it. 00:28:45.381 [2024-07-25 10:17:30.294861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.381 [2024-07-25 10:17:30.294909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.381 qpair failed and we were unable to recover it. 00:28:45.381 [2024-07-25 10:17:30.295105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.381 [2024-07-25 10:17:30.295148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.381 qpair failed and we were unable to recover it. 00:28:45.381 [2024-07-25 10:17:30.295319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.381 [2024-07-25 10:17:30.295357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.381 qpair failed and we were unable to recover it. 00:28:45.381 [2024-07-25 10:17:30.295548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.381 [2024-07-25 10:17:30.295578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.381 qpair failed and we were unable to recover it. 00:28:45.381 [2024-07-25 10:17:30.295791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.381 [2024-07-25 10:17:30.295835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.381 qpair failed and we were unable to recover it. 00:28:45.381 [2024-07-25 10:17:30.296046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.381 [2024-07-25 10:17:30.296089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.381 qpair failed and we were unable to recover it. 00:28:45.381 [2024-07-25 10:17:30.296241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.381 [2024-07-25 10:17:30.296279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.381 qpair failed and we were unable to recover it. 
00:28:45.381 [2024-07-25 10:17:30.296500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.381 [2024-07-25 10:17:30.296526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.381 qpair failed and we were unable to recover it. 00:28:45.381 [2024-07-25 10:17:30.296693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.381 [2024-07-25 10:17:30.296738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.381 qpair failed and we were unable to recover it. 00:28:45.381 [2024-07-25 10:17:30.296940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.381 [2024-07-25 10:17:30.296982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.381 qpair failed and we were unable to recover it. 00:28:45.382 [2024-07-25 10:17:30.297199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.382 [2024-07-25 10:17:30.297241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.382 qpair failed and we were unable to recover it. 00:28:45.382 [2024-07-25 10:17:30.297399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.382 [2024-07-25 10:17:30.297422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.382 qpair failed and we were unable to recover it. 00:28:45.382 [2024-07-25 10:17:30.297663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.382 [2024-07-25 10:17:30.297706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.382 qpair failed and we were unable to recover it. 00:28:45.382 [2024-07-25 10:17:30.297907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.382 [2024-07-25 10:17:30.297950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.382 qpair failed and we were unable to recover it. 00:28:45.382 [2024-07-25 10:17:30.298151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.382 [2024-07-25 10:17:30.298180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.382 qpair failed and we were unable to recover it. 00:28:45.382 [2024-07-25 10:17:30.298380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.382 [2024-07-25 10:17:30.298404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.382 qpair failed and we were unable to recover it. 00:28:45.382 [2024-07-25 10:17:30.298593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.382 [2024-07-25 10:17:30.298623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.382 qpair failed and we were unable to recover it. 
00:28:45.382 [2024-07-25 10:17:30.298816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.382 [2024-07-25 10:17:30.298859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.382 qpair failed and we were unable to recover it. 00:28:45.382 [2024-07-25 10:17:30.299088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.382 [2024-07-25 10:17:30.299131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.382 qpair failed and we were unable to recover it. 00:28:45.382 [2024-07-25 10:17:30.299272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.382 [2024-07-25 10:17:30.299295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.382 qpair failed and we were unable to recover it. 00:28:45.382 [2024-07-25 10:17:30.299497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.382 [2024-07-25 10:17:30.299523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.382 qpair failed and we were unable to recover it. 00:28:45.382 [2024-07-25 10:17:30.299701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.382 [2024-07-25 10:17:30.299743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.382 qpair failed and we were unable to recover it. 00:28:45.382 [2024-07-25 10:17:30.299918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.382 [2024-07-25 10:17:30.299961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.382 qpair failed and we were unable to recover it. 00:28:45.382 [2024-07-25 10:17:30.300145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.382 [2024-07-25 10:17:30.300187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.382 qpair failed and we were unable to recover it. 00:28:45.382 [2024-07-25 10:17:30.300389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.382 [2024-07-25 10:17:30.300413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.382 qpair failed and we were unable to recover it. 00:28:45.382 [2024-07-25 10:17:30.300606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.382 [2024-07-25 10:17:30.300648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.382 qpair failed and we were unable to recover it. 00:28:45.382 [2024-07-25 10:17:30.300836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.382 [2024-07-25 10:17:30.300879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.382 qpair failed and we were unable to recover it. 
00:28:45.382 [2024-07-25 10:17:30.301071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.382 [2024-07-25 10:17:30.301114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.382 qpair failed and we were unable to recover it. 00:28:45.382 [2024-07-25 10:17:30.301284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.382 [2024-07-25 10:17:30.301308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.382 qpair failed and we were unable to recover it. 00:28:45.382 [2024-07-25 10:17:30.301492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.382 [2024-07-25 10:17:30.301535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.382 qpair failed and we were unable to recover it. 00:28:45.382 [2024-07-25 10:17:30.301720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.382 [2024-07-25 10:17:30.301762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.382 qpair failed and we were unable to recover it. 00:28:45.382 [2024-07-25 10:17:30.301954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.382 [2024-07-25 10:17:30.301997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.382 qpair failed and we were unable to recover it. 00:28:45.382 [2024-07-25 10:17:30.302178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.382 [2024-07-25 10:17:30.302221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.382 qpair failed and we were unable to recover it. 00:28:45.382 [2024-07-25 10:17:30.302388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.382 [2024-07-25 10:17:30.302412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.382 qpair failed and we were unable to recover it. 00:28:45.382 [2024-07-25 10:17:30.302616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.382 [2024-07-25 10:17:30.302659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.382 qpair failed and we were unable to recover it. 00:28:45.382 [2024-07-25 10:17:30.302862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.382 [2024-07-25 10:17:30.302904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.382 qpair failed and we were unable to recover it. 00:28:45.382 [2024-07-25 10:17:30.303099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.382 [2024-07-25 10:17:30.303128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.382 qpair failed and we were unable to recover it. 
00:28:45.382 [2024-07-25 10:17:30.303310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.382 [2024-07-25 10:17:30.303348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.382 qpair failed and we were unable to recover it. 00:28:45.382 [2024-07-25 10:17:30.303562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.382 [2024-07-25 10:17:30.303606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.382 qpair failed and we were unable to recover it. 00:28:45.382 [2024-07-25 10:17:30.303829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.382 [2024-07-25 10:17:30.303870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.382 qpair failed and we were unable to recover it. 00:28:45.382 [2024-07-25 10:17:30.304056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.382 [2024-07-25 10:17:30.304103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.382 qpair failed and we were unable to recover it. 00:28:45.382 [2024-07-25 10:17:30.304243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.382 [2024-07-25 10:17:30.304266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.382 qpair failed and we were unable to recover it. 00:28:45.382 [2024-07-25 10:17:30.304460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.382 [2024-07-25 10:17:30.304490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.382 qpair failed and we were unable to recover it. 00:28:45.382 [2024-07-25 10:17:30.304709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.382 [2024-07-25 10:17:30.304753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.382 qpair failed and we were unable to recover it. 00:28:45.382 [2024-07-25 10:17:30.304911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.382 [2024-07-25 10:17:30.304935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.382 qpair failed and we were unable to recover it. 00:28:45.382 [2024-07-25 10:17:30.305124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.382 [2024-07-25 10:17:30.305167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.382 qpair failed and we were unable to recover it. 00:28:45.382 [2024-07-25 10:17:30.305344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.382 [2024-07-25 10:17:30.305368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.382 qpair failed and we were unable to recover it. 
00:28:45.382 [2024-07-25 10:17:30.305501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.382 [2024-07-25 10:17:30.305527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.383 qpair failed and we were unable to recover it. 00:28:45.383 [2024-07-25 10:17:30.305733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.383 [2024-07-25 10:17:30.305776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.383 qpair failed and we were unable to recover it. 00:28:45.383 [2024-07-25 10:17:30.305985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.383 [2024-07-25 10:17:30.306027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.383 qpair failed and we were unable to recover it. 00:28:45.383 [2024-07-25 10:17:30.306181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.383 [2024-07-25 10:17:30.306224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.383 qpair failed and we were unable to recover it. 00:28:45.383 [2024-07-25 10:17:30.306405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.383 [2024-07-25 10:17:30.306433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.383 qpair failed and we were unable to recover it. 00:28:45.383 [2024-07-25 10:17:30.306601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.383 [2024-07-25 10:17:30.306625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.383 qpair failed and we were unable to recover it. 00:28:45.383 [2024-07-25 10:17:30.306809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.383 [2024-07-25 10:17:30.306851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.383 qpair failed and we were unable to recover it. 00:28:45.383 [2024-07-25 10:17:30.307031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.383 [2024-07-25 10:17:30.307074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.383 qpair failed and we were unable to recover it. 00:28:45.383 [2024-07-25 10:17:30.307231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.383 [2024-07-25 10:17:30.307255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.383 qpair failed and we were unable to recover it. 00:28:45.383 [2024-07-25 10:17:30.307391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.383 [2024-07-25 10:17:30.307415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.383 qpair failed and we were unable to recover it. 
00:28:45.383 [2024-07-25 10:17:30.307565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.383 [2024-07-25 10:17:30.307605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.383 qpair failed and we were unable to recover it. 00:28:45.383 [2024-07-25 10:17:30.307818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.383 [2024-07-25 10:17:30.307861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.383 qpair failed and we were unable to recover it. 00:28:45.383 [2024-07-25 10:17:30.308015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.383 [2024-07-25 10:17:30.308039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.383 qpair failed and we were unable to recover it. 00:28:45.383 [2024-07-25 10:17:30.308220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.383 [2024-07-25 10:17:30.308262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.383 qpair failed and we were unable to recover it. 00:28:45.383 [2024-07-25 10:17:30.308423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.383 [2024-07-25 10:17:30.308469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.383 qpair failed and we were unable to recover it. 00:28:45.383 [2024-07-25 10:17:30.308681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.383 [2024-07-25 10:17:30.308724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.383 qpair failed and we were unable to recover it. 00:28:45.383 [2024-07-25 10:17:30.308899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.383 [2024-07-25 10:17:30.308941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.383 qpair failed and we were unable to recover it. 00:28:45.383 [2024-07-25 10:17:30.309090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.383 [2024-07-25 10:17:30.309134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.383 qpair failed and we were unable to recover it. 00:28:45.383 [2024-07-25 10:17:30.309312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.383 [2024-07-25 10:17:30.309336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.383 qpair failed and we were unable to recover it. 00:28:45.383 [2024-07-25 10:17:30.309528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.383 [2024-07-25 10:17:30.309554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.383 qpair failed and we were unable to recover it. 
00:28:45.383 [2024-07-25 10:17:30.309720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.383 [2024-07-25 10:17:30.309762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.383 qpair failed and we were unable to recover it. 00:28:45.383 [2024-07-25 10:17:30.309936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.383 [2024-07-25 10:17:30.309979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.383 qpair failed and we were unable to recover it. 00:28:45.383 [2024-07-25 10:17:30.310165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.383 [2024-07-25 10:17:30.310207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.383 qpair failed and we were unable to recover it. 00:28:45.383 [2024-07-25 10:17:30.310379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.383 [2024-07-25 10:17:30.310403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.383 qpair failed and we were unable to recover it. 00:28:45.383 [2024-07-25 10:17:30.310564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.383 [2024-07-25 10:17:30.310607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.383 qpair failed and we were unable to recover it. 00:28:45.383 [2024-07-25 10:17:30.310797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.383 [2024-07-25 10:17:30.310840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.383 qpair failed and we were unable to recover it. 00:28:45.383 [2024-07-25 10:17:30.311037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.383 [2024-07-25 10:17:30.311080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.383 qpair failed and we were unable to recover it. 00:28:45.383 [2024-07-25 10:17:30.311241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.383 [2024-07-25 10:17:30.311265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.383 qpair failed and we were unable to recover it. 00:28:45.383 [2024-07-25 10:17:30.311453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.383 [2024-07-25 10:17:30.311477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.383 qpair failed and we were unable to recover it. 00:28:45.383 [2024-07-25 10:17:30.311666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.383 [2024-07-25 10:17:30.311713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.383 qpair failed and we were unable to recover it. 
00:28:45.383 [2024-07-25 10:17:30.311897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.383 [2024-07-25 10:17:30.311939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.383 qpair failed and we were unable to recover it. 00:28:45.383 [2024-07-25 10:17:30.312136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.383 [2024-07-25 10:17:30.312179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.383 qpair failed and we were unable to recover it. 00:28:45.383 [2024-07-25 10:17:30.312361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.383 [2024-07-25 10:17:30.312384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.383 qpair failed and we were unable to recover it. 00:28:45.383 [2024-07-25 10:17:30.312608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.383 [2024-07-25 10:17:30.312638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.383 qpair failed and we were unable to recover it. 00:28:45.383 [2024-07-25 10:17:30.312805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.383 [2024-07-25 10:17:30.312848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.383 qpair failed and we were unable to recover it. 00:28:45.383 [2024-07-25 10:17:30.313013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.383 [2024-07-25 10:17:30.313056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.383 qpair failed and we were unable to recover it. 00:28:45.383 [2024-07-25 10:17:30.313210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.383 [2024-07-25 10:17:30.313253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.383 qpair failed and we were unable to recover it. 00:28:45.383 [2024-07-25 10:17:30.313434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.383 [2024-07-25 10:17:30.313458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.383 qpair failed and we were unable to recover it. 00:28:45.383 [2024-07-25 10:17:30.313673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.384 [2024-07-25 10:17:30.313715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.384 qpair failed and we were unable to recover it. 00:28:45.384 [2024-07-25 10:17:30.313868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.384 [2024-07-25 10:17:30.313892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.384 qpair failed and we were unable to recover it. 
00:28:45.384 [2024-07-25 10:17:30.314075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.384 [2024-07-25 10:17:30.314118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.384 qpair failed and we were unable to recover it. 00:28:45.384 [2024-07-25 10:17:30.314293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.384 [2024-07-25 10:17:30.314316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.384 qpair failed and we were unable to recover it. 00:28:45.384 [2024-07-25 10:17:30.314493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.384 [2024-07-25 10:17:30.314532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.384 qpair failed and we were unable to recover it. 00:28:45.384 [2024-07-25 10:17:30.314714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.384 [2024-07-25 10:17:30.314755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.384 qpair failed and we were unable to recover it. 00:28:45.384 [2024-07-25 10:17:30.314966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.384 [2024-07-25 10:17:30.315009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.384 qpair failed and we were unable to recover it. 00:28:45.384 [2024-07-25 10:17:30.315175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.384 [2024-07-25 10:17:30.315218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.384 qpair failed and we were unable to recover it. 00:28:45.384 [2024-07-25 10:17:30.315440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.384 [2024-07-25 10:17:30.315466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.384 qpair failed and we were unable to recover it. 00:28:45.384 [2024-07-25 10:17:30.315613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.384 [2024-07-25 10:17:30.315654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.384 qpair failed and we were unable to recover it. 00:28:45.384 [2024-07-25 10:17:30.315833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.384 [2024-07-25 10:17:30.315876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.384 qpair failed and we were unable to recover it. 00:28:45.384 [2024-07-25 10:17:30.316024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.384 [2024-07-25 10:17:30.316067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.384 qpair failed and we were unable to recover it. 
00:28:45.384 [2024-07-25 10:17:30.316254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.384 [2024-07-25 10:17:30.316279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420
00:28:45.384 qpair failed and we were unable to recover it.
00:28:45.384 [... the same three-message sequence (posix_sock_create: connect() failed, errno = 111 / nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it) repeats verbatim for every reconnect attempt from 10:17:30.316458 through 10:17:30.363757; only the timestamps differ ...]
00:28:45.390 [2024-07-25 10:17:30.363934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.390 [2024-07-25 10:17:30.363977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.390 qpair failed and we were unable to recover it. 00:28:45.390 [2024-07-25 10:17:30.364184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.390 [2024-07-25 10:17:30.364250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.390 qpair failed and we were unable to recover it. 00:28:45.390 [2024-07-25 10:17:30.364476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.390 [2024-07-25 10:17:30.364501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.390 qpair failed and we were unable to recover it. 00:28:45.390 [2024-07-25 10:17:30.364673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.390 [2024-07-25 10:17:30.364703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.390 qpair failed and we were unable to recover it. 00:28:45.390 [2024-07-25 10:17:30.364913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.390 [2024-07-25 10:17:30.364956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.390 qpair failed and we were unable to recover it. 00:28:45.390 [2024-07-25 10:17:30.365158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.390 [2024-07-25 10:17:30.365220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.390 qpair failed and we were unable to recover it. 00:28:45.390 [2024-07-25 10:17:30.365449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.390 [2024-07-25 10:17:30.365474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.390 qpair failed and we were unable to recover it. 00:28:45.390 [2024-07-25 10:17:30.365693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.390 [2024-07-25 10:17:30.365736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.390 qpair failed and we were unable to recover it. 00:28:45.390 [2024-07-25 10:17:30.365909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.390 [2024-07-25 10:17:30.365951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.390 qpair failed and we were unable to recover it. 00:28:45.390 [2024-07-25 10:17:30.366154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.390 [2024-07-25 10:17:30.366194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.390 qpair failed and we were unable to recover it. 
00:28:45.390 [2024-07-25 10:17:30.366417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.390 [2024-07-25 10:17:30.366447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.390 qpair failed and we were unable to recover it. 00:28:45.390 [2024-07-25 10:17:30.366613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.390 [2024-07-25 10:17:30.366638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.390 qpair failed and we were unable to recover it. 00:28:45.390 [2024-07-25 10:17:30.366827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.390 [2024-07-25 10:17:30.366870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.390 qpair failed and we were unable to recover it. 00:28:45.390 [2024-07-25 10:17:30.367035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.390 [2024-07-25 10:17:30.367126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.390 qpair failed and we were unable to recover it. 00:28:45.390 [2024-07-25 10:17:30.367293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.390 [2024-07-25 10:17:30.367316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.390 qpair failed and we were unable to recover it. 00:28:45.390 [2024-07-25 10:17:30.367491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.390 [2024-07-25 10:17:30.367536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.390 qpair failed and we were unable to recover it. 00:28:45.390 [2024-07-25 10:17:30.367702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.390 [2024-07-25 10:17:30.367744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.390 qpair failed and we were unable to recover it. 00:28:45.390 [2024-07-25 10:17:30.367950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.390 [2024-07-25 10:17:30.368005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.390 qpair failed and we were unable to recover it. 00:28:45.390 [2024-07-25 10:17:30.368180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.390 [2024-07-25 10:17:30.368223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.390 qpair failed and we were unable to recover it. 00:28:45.390 [2024-07-25 10:17:30.368391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.390 [2024-07-25 10:17:30.368415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.390 qpair failed and we were unable to recover it. 
00:28:45.390 [2024-07-25 10:17:30.368619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.390 [2024-07-25 10:17:30.368662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.390 qpair failed and we were unable to recover it. 00:28:45.390 [2024-07-25 10:17:30.368825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.390 [2024-07-25 10:17:30.368878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.390 qpair failed and we were unable to recover it. 00:28:45.390 [2024-07-25 10:17:30.369060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.390 [2024-07-25 10:17:30.369102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.390 qpair failed and we were unable to recover it. 00:28:45.390 [2024-07-25 10:17:30.369261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.390 [2024-07-25 10:17:30.369284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.390 qpair failed and we were unable to recover it. 00:28:45.390 [2024-07-25 10:17:30.369509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.390 [2024-07-25 10:17:30.369540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.390 qpair failed and we were unable to recover it. 00:28:45.391 [2024-07-25 10:17:30.369772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.391 [2024-07-25 10:17:30.369824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.391 qpair failed and we were unable to recover it. 00:28:45.391 [2024-07-25 10:17:30.370026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.391 [2024-07-25 10:17:30.370068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.391 qpair failed and we were unable to recover it. 00:28:45.391 [2024-07-25 10:17:30.370300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.391 [2024-07-25 10:17:30.370324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.391 qpair failed and we were unable to recover it. 00:28:45.391 [2024-07-25 10:17:30.370505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.391 [2024-07-25 10:17:30.370535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.391 qpair failed and we were unable to recover it. 00:28:45.391 [2024-07-25 10:17:30.370774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.391 [2024-07-25 10:17:30.370827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.391 qpair failed and we were unable to recover it. 
00:28:45.391 [2024-07-25 10:17:30.371010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.391 [2024-07-25 10:17:30.371052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.391 qpair failed and we were unable to recover it. 00:28:45.391 [2024-07-25 10:17:30.371198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.391 [2024-07-25 10:17:30.371222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.391 qpair failed and we were unable to recover it. 00:28:45.391 [2024-07-25 10:17:30.371408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.391 [2024-07-25 10:17:30.371452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.391 qpair failed and we were unable to recover it. 00:28:45.391 [2024-07-25 10:17:30.371652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.391 [2024-07-25 10:17:30.371698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.391 qpair failed and we were unable to recover it. 00:28:45.391 [2024-07-25 10:17:30.371890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.391 [2024-07-25 10:17:30.371934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.391 qpair failed and we were unable to recover it. 00:28:45.391 [2024-07-25 10:17:30.372112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.391 [2024-07-25 10:17:30.372157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.391 qpair failed and we were unable to recover it. 00:28:45.391 [2024-07-25 10:17:30.372353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.391 [2024-07-25 10:17:30.372377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.391 qpair failed and we were unable to recover it. 00:28:45.391 [2024-07-25 10:17:30.372563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.391 [2024-07-25 10:17:30.372608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.391 qpair failed and we were unable to recover it. 00:28:45.391 [2024-07-25 10:17:30.372752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.391 [2024-07-25 10:17:30.372796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.391 qpair failed and we were unable to recover it. 00:28:45.391 [2024-07-25 10:17:30.372968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.391 [2024-07-25 10:17:30.373011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.391 qpair failed and we were unable to recover it. 
00:28:45.391 [2024-07-25 10:17:30.373196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.391 [2024-07-25 10:17:30.373239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.391 qpair failed and we were unable to recover it. 00:28:45.391 [2024-07-25 10:17:30.373447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.391 [2024-07-25 10:17:30.373472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.391 qpair failed and we were unable to recover it. 00:28:45.391 [2024-07-25 10:17:30.373653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.391 [2024-07-25 10:17:30.373696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.391 qpair failed and we were unable to recover it. 00:28:45.391 [2024-07-25 10:17:30.373850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.391 [2024-07-25 10:17:30.373892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.391 qpair failed and we were unable to recover it. 00:28:45.391 [2024-07-25 10:17:30.374072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.391 [2024-07-25 10:17:30.374115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.391 qpair failed and we were unable to recover it. 00:28:45.391 [2024-07-25 10:17:30.374315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.391 [2024-07-25 10:17:30.374339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.391 qpair failed and we were unable to recover it. 00:28:45.391 [2024-07-25 10:17:30.374524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.391 [2024-07-25 10:17:30.374548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.391 qpair failed and we were unable to recover it. 00:28:45.391 [2024-07-25 10:17:30.374741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.391 [2024-07-25 10:17:30.374785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.391 qpair failed and we were unable to recover it. 00:28:45.391 [2024-07-25 10:17:30.374948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.391 [2024-07-25 10:17:30.374990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.391 qpair failed and we were unable to recover it. 00:28:45.391 [2024-07-25 10:17:30.375203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.391 [2024-07-25 10:17:30.375257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.391 qpair failed and we were unable to recover it. 
00:28:45.391 [2024-07-25 10:17:30.375438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.391 [2024-07-25 10:17:30.375462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.391 qpair failed and we were unable to recover it. 00:28:45.391 [2024-07-25 10:17:30.375667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.391 [2024-07-25 10:17:30.375691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.391 qpair failed and we were unable to recover it. 00:28:45.391 [2024-07-25 10:17:30.375869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.391 [2024-07-25 10:17:30.375912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.391 qpair failed and we were unable to recover it. 00:28:45.391 [2024-07-25 10:17:30.376128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.391 [2024-07-25 10:17:30.376186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.391 qpair failed and we were unable to recover it. 00:28:45.391 [2024-07-25 10:17:30.376395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.391 [2024-07-25 10:17:30.376419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.391 qpair failed and we were unable to recover it. 00:28:45.391 [2024-07-25 10:17:30.376612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.391 [2024-07-25 10:17:30.376655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.391 qpair failed and we were unable to recover it. 00:28:45.391 [2024-07-25 10:17:30.376827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.391 [2024-07-25 10:17:30.376869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.391 qpair failed and we were unable to recover it. 00:28:45.391 [2024-07-25 10:17:30.377093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.391 [2024-07-25 10:17:30.377142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.391 qpair failed and we were unable to recover it. 00:28:45.391 [2024-07-25 10:17:30.377364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.391 [2024-07-25 10:17:30.377388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.391 qpair failed and we were unable to recover it. 00:28:45.391 [2024-07-25 10:17:30.377581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.391 [2024-07-25 10:17:30.377607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.391 qpair failed and we were unable to recover it. 
00:28:45.391 [2024-07-25 10:17:30.377823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.391 [2024-07-25 10:17:30.377865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.391 qpair failed and we were unable to recover it. 00:28:45.391 [2024-07-25 10:17:30.378080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.391 [2024-07-25 10:17:30.378133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.391 qpair failed and we were unable to recover it. 00:28:45.391 [2024-07-25 10:17:30.378344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.391 [2024-07-25 10:17:30.378368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.391 qpair failed and we were unable to recover it. 00:28:45.392 [2024-07-25 10:17:30.378550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.392 [2024-07-25 10:17:30.378575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.392 qpair failed and we were unable to recover it. 00:28:45.392 [2024-07-25 10:17:30.378758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.392 [2024-07-25 10:17:30.378802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.392 qpair failed and we were unable to recover it. 00:28:45.392 [2024-07-25 10:17:30.378998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.392 [2024-07-25 10:17:30.379047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.392 qpair failed and we were unable to recover it. 00:28:45.392 [2024-07-25 10:17:30.379263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.392 [2024-07-25 10:17:30.379306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.392 qpair failed and we were unable to recover it. 00:28:45.392 [2024-07-25 10:17:30.379504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.392 [2024-07-25 10:17:30.379534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.392 qpair failed and we were unable to recover it. 00:28:45.392 [2024-07-25 10:17:30.379762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.392 [2024-07-25 10:17:30.379804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.392 qpair failed and we were unable to recover it. 00:28:45.392 [2024-07-25 10:17:30.379980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.392 [2024-07-25 10:17:30.380033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.392 qpair failed and we were unable to recover it. 
00:28:45.392 [2024-07-25 10:17:30.380217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.392 [2024-07-25 10:17:30.380241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.392 qpair failed and we were unable to recover it. 00:28:45.392 [2024-07-25 10:17:30.380463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.392 [2024-07-25 10:17:30.380489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.392 qpair failed and we were unable to recover it. 00:28:45.392 [2024-07-25 10:17:30.380707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.392 [2024-07-25 10:17:30.380736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.392 qpair failed and we were unable to recover it. 00:28:45.392 [2024-07-25 10:17:30.380921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.392 [2024-07-25 10:17:30.380983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.392 qpair failed and we were unable to recover it. 00:28:45.392 [2024-07-25 10:17:30.381203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.392 [2024-07-25 10:17:30.381247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.392 qpair failed and we were unable to recover it. 00:28:45.392 [2024-07-25 10:17:30.381412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.392 [2024-07-25 10:17:30.381454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.392 qpair failed and we were unable to recover it. 00:28:45.392 [2024-07-25 10:17:30.381628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.392 [2024-07-25 10:17:30.381654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.392 qpair failed and we were unable to recover it. 00:28:45.392 [2024-07-25 10:17:30.381847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.392 [2024-07-25 10:17:30.381905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.392 qpair failed and we were unable to recover it. 00:28:45.392 [2024-07-25 10:17:30.382084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.392 [2024-07-25 10:17:30.382126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.392 qpair failed and we were unable to recover it. 00:28:45.392 [2024-07-25 10:17:30.382324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.392 [2024-07-25 10:17:30.382348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.392 qpair failed and we were unable to recover it. 
00:28:45.392 [2024-07-25 10:17:30.382593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.392 [2024-07-25 10:17:30.382619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.392 qpair failed and we were unable to recover it. 00:28:45.392 [2024-07-25 10:17:30.382805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.392 [2024-07-25 10:17:30.382865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.392 qpair failed and we were unable to recover it. 00:28:45.392 [2024-07-25 10:17:30.383054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.392 [2024-07-25 10:17:30.383097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.392 qpair failed and we were unable to recover it. 00:28:45.392 [2024-07-25 10:17:30.383302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.392 [2024-07-25 10:17:30.383326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.392 qpair failed and we were unable to recover it. 00:28:45.392 [2024-07-25 10:17:30.383541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.392 [2024-07-25 10:17:30.383584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.392 qpair failed and we were unable to recover it. 00:28:45.392 [2024-07-25 10:17:30.383809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.392 [2024-07-25 10:17:30.383860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.392 qpair failed and we were unable to recover it. 00:28:45.392 [2024-07-25 10:17:30.384077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.392 [2024-07-25 10:17:30.384119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.392 qpair failed and we were unable to recover it. 00:28:45.392 [2024-07-25 10:17:30.384300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.392 [2024-07-25 10:17:30.384324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.392 qpair failed and we were unable to recover it. 00:28:45.392 [2024-07-25 10:17:30.384517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.392 [2024-07-25 10:17:30.384561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.392 qpair failed and we were unable to recover it. 00:28:45.392 [2024-07-25 10:17:30.384780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.392 [2024-07-25 10:17:30.384830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.392 qpair failed and we were unable to recover it. 
00:28:45.392 [2024-07-25 10:17:30.385016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.392 [2024-07-25 10:17:30.385058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.392 qpair failed and we were unable to recover it. 00:28:45.392 [2024-07-25 10:17:30.385251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.392 [2024-07-25 10:17:30.385275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.392 qpair failed and we were unable to recover it. 00:28:45.392 [2024-07-25 10:17:30.385494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.392 [2024-07-25 10:17:30.385538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.392 qpair failed and we were unable to recover it. 00:28:45.392 [2024-07-25 10:17:30.385754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.392 [2024-07-25 10:17:30.385808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.392 qpair failed and we were unable to recover it. 00:28:45.392 [2024-07-25 10:17:30.386037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.392 [2024-07-25 10:17:30.386080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.392 qpair failed and we were unable to recover it. 00:28:45.392 [2024-07-25 10:17:30.386232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.392 [2024-07-25 10:17:30.386255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.392 qpair failed and we were unable to recover it. 00:28:45.392 [2024-07-25 10:17:30.386438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.392 [2024-07-25 10:17:30.386465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.392 qpair failed and we were unable to recover it. 00:28:45.392 [2024-07-25 10:17:30.386641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.392 [2024-07-25 10:17:30.386684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.392 qpair failed and we were unable to recover it. 00:28:45.392 [2024-07-25 10:17:30.386875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.392 [2024-07-25 10:17:30.386918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.392 qpair failed and we were unable to recover it. 00:28:45.392 [2024-07-25 10:17:30.387085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.392 [2024-07-25 10:17:30.387127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.392 qpair failed and we were unable to recover it. 
00:28:45.392 [2024-07-25 10:17:30.387341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.392 [2024-07-25 10:17:30.387365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.393 qpair failed and we were unable to recover it. 00:28:45.393 [2024-07-25 10:17:30.387549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.393 [2024-07-25 10:17:30.387574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.393 qpair failed and we were unable to recover it. 00:28:45.393 [2024-07-25 10:17:30.387756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.393 [2024-07-25 10:17:30.387798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.393 qpair failed and we were unable to recover it. 00:28:45.393 [2024-07-25 10:17:30.388012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.393 [2024-07-25 10:17:30.388056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.393 qpair failed and we were unable to recover it. 00:28:45.393 [2024-07-25 10:17:30.388261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.393 [2024-07-25 10:17:30.388311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.393 qpair failed and we were unable to recover it. 00:28:45.393 [2024-07-25 10:17:30.388488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.393 [2024-07-25 10:17:30.388513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.393 qpair failed and we were unable to recover it. 00:28:45.393 [2024-07-25 10:17:30.388725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.393 [2024-07-25 10:17:30.388768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.393 qpair failed and we were unable to recover it. 00:28:45.393 [2024-07-25 10:17:30.388911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.393 [2024-07-25 10:17:30.388954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.393 qpair failed and we were unable to recover it. 00:28:45.393 [2024-07-25 10:17:30.389154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.393 [2024-07-25 10:17:30.389196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.393 qpair failed and we were unable to recover it. 00:28:45.393 [2024-07-25 10:17:30.389373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.393 [2024-07-25 10:17:30.389397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.393 qpair failed and we were unable to recover it. 
00:28:45.393 [2024-07-25 10:17:30.389557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.393 [2024-07-25 10:17:30.389587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.393 qpair failed and we were unable to recover it. 00:28:45.393 [2024-07-25 10:17:30.389813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.393 [2024-07-25 10:17:30.389855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.393 qpair failed and we were unable to recover it. 00:28:45.393 [2024-07-25 10:17:30.390058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.393 [2024-07-25 10:17:30.390101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.393 qpair failed and we were unable to recover it. 00:28:45.393 [2024-07-25 10:17:30.390287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.393 [2024-07-25 10:17:30.390311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.393 qpair failed and we were unable to recover it. 00:28:45.393 [2024-07-25 10:17:30.390493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.393 [2024-07-25 10:17:30.390536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.393 qpair failed and we were unable to recover it. 00:28:45.393 [2024-07-25 10:17:30.390675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.393 [2024-07-25 10:17:30.390704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.393 qpair failed and we were unable to recover it. 00:28:45.393 [2024-07-25 10:17:30.390920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.393 [2024-07-25 10:17:30.390962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.393 qpair failed and we were unable to recover it. 00:28:45.393 [2024-07-25 10:17:30.391171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.393 [2024-07-25 10:17:30.391227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.393 qpair failed and we were unable to recover it. 00:28:45.393 [2024-07-25 10:17:30.391401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.393 [2024-07-25 10:17:30.391425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.393 qpair failed and we were unable to recover it. 00:28:45.393 [2024-07-25 10:17:30.391664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.393 [2024-07-25 10:17:30.391708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.393 qpair failed and we were unable to recover it. 
00:28:45.393 [2024-07-25 10:17:30.391892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.393 [2024-07-25 10:17:30.391934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.393 qpair failed and we were unable to recover it. 00:28:45.393 [2024-07-25 10:17:30.392112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.393 [2024-07-25 10:17:30.392163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.393 qpair failed and we were unable to recover it. 00:28:45.393 [2024-07-25 10:17:30.392313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.393 [2024-07-25 10:17:30.392337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.393 qpair failed and we were unable to recover it. 00:28:45.393 [2024-07-25 10:17:30.392566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.393 [2024-07-25 10:17:30.392609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.393 qpair failed and we were unable to recover it. 00:28:45.393 [2024-07-25 10:17:30.392785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.393 [2024-07-25 10:17:30.392837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.393 qpair failed and we were unable to recover it. 00:28:45.393 [2024-07-25 10:17:30.393047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.393 [2024-07-25 10:17:30.393101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.393 qpair failed and we were unable to recover it. 00:28:45.393 [2024-07-25 10:17:30.393240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.393 [2024-07-25 10:17:30.393264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.393 qpair failed and we were unable to recover it. 00:28:45.393 [2024-07-25 10:17:30.393445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.393 [2024-07-25 10:17:30.393469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.393 qpair failed and we were unable to recover it. 00:28:45.393 [2024-07-25 10:17:30.393685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.393 [2024-07-25 10:17:30.393728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.393 qpair failed and we were unable to recover it. 00:28:45.393 [2024-07-25 10:17:30.393892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.393 [2024-07-25 10:17:30.393946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.393 qpair failed and we were unable to recover it. 
00:28:45.393 [2024-07-25 10:17:30.394123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.393 [2024-07-25 10:17:30.394166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.393 qpair failed and we were unable to recover it. 00:28:45.393 [2024-07-25 10:17:30.394314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.393 [2024-07-25 10:17:30.394338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.393 qpair failed and we were unable to recover it. 00:28:45.393 [2024-07-25 10:17:30.394520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.393 [2024-07-25 10:17:30.394547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.393 qpair failed and we were unable to recover it. 00:28:45.393 [2024-07-25 10:17:30.394715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.393 [2024-07-25 10:17:30.394758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.393 qpair failed and we were unable to recover it. 00:28:45.393 [2024-07-25 10:17:30.394953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.393 [2024-07-25 10:17:30.394995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.393 qpair failed and we were unable to recover it. 00:28:45.393 [2024-07-25 10:17:30.395167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.393 [2024-07-25 10:17:30.395209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.393 qpair failed and we were unable to recover it. 00:28:45.393 [2024-07-25 10:17:30.395385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.393 [2024-07-25 10:17:30.395408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.393 qpair failed and we were unable to recover it. 00:28:45.393 [2024-07-25 10:17:30.395630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.393 [2024-07-25 10:17:30.395673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.393 qpair failed and we were unable to recover it. 00:28:45.393 [2024-07-25 10:17:30.395866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.394 [2024-07-25 10:17:30.395908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.394 qpair failed and we were unable to recover it. 00:28:45.394 [2024-07-25 10:17:30.396072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.394 [2024-07-25 10:17:30.396114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.394 qpair failed and we were unable to recover it. 
00:28:45.394 [2024-07-25 10:17:30.396296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.394 [2024-07-25 10:17:30.396335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.394 qpair failed and we were unable to recover it. 00:28:45.394 [2024-07-25 10:17:30.396504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.394 [2024-07-25 10:17:30.396548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.394 qpair failed and we were unable to recover it. 00:28:45.394 [2024-07-25 10:17:30.396743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.394 [2024-07-25 10:17:30.396784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.394 qpair failed and we were unable to recover it. 00:28:45.394 [2024-07-25 10:17:30.396931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.394 [2024-07-25 10:17:30.396955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.394 qpair failed and we were unable to recover it. 00:28:45.394 [2024-07-25 10:17:30.397131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.394 [2024-07-25 10:17:30.397175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.394 qpair failed and we were unable to recover it. 00:28:45.394 [2024-07-25 10:17:30.397333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.394 [2024-07-25 10:17:30.397357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.394 qpair failed and we were unable to recover it. 00:28:45.394 [2024-07-25 10:17:30.397587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.394 [2024-07-25 10:17:30.397629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.394 qpair failed and we were unable to recover it. 00:28:45.394 [2024-07-25 10:17:30.397825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.394 [2024-07-25 10:17:30.397868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.394 qpair failed and we were unable to recover it. 00:28:45.394 [2024-07-25 10:17:30.398036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.394 [2024-07-25 10:17:30.398078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.394 qpair failed and we were unable to recover it. 00:28:45.394 [2024-07-25 10:17:30.398254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.394 [2024-07-25 10:17:30.398277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.394 qpair failed and we were unable to recover it. 
00:28:45.394 [2024-07-25 10:17:30.398440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.394 [2024-07-25 10:17:30.398491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.394 qpair failed and we were unable to recover it. 00:28:45.394 [2024-07-25 10:17:30.398631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.394 [2024-07-25 10:17:30.398674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.394 qpair failed and we were unable to recover it. 00:28:45.394 [2024-07-25 10:17:30.398841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.394 [2024-07-25 10:17:30.398884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.394 qpair failed and we were unable to recover it. 00:28:45.394 [2024-07-25 10:17:30.399038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.394 [2024-07-25 10:17:30.399067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.394 qpair failed and we were unable to recover it. 00:28:45.394 [2024-07-25 10:17:30.399278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.394 [2024-07-25 10:17:30.399301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.394 qpair failed and we were unable to recover it. 00:28:45.394 [2024-07-25 10:17:30.399515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.394 [2024-07-25 10:17:30.399539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.394 qpair failed and we were unable to recover it. 00:28:45.394 [2024-07-25 10:17:30.399754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.394 [2024-07-25 10:17:30.399777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.394 qpair failed and we were unable to recover it. 00:28:45.394 [2024-07-25 10:17:30.399950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.394 [2024-07-25 10:17:30.399974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.394 qpair failed and we were unable to recover it. 00:28:45.394 [2024-07-25 10:17:30.400150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.394 [2024-07-25 10:17:30.400174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.394 qpair failed and we were unable to recover it. 00:28:45.394 [2024-07-25 10:17:30.400388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.394 [2024-07-25 10:17:30.400426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.394 qpair failed and we were unable to recover it. 
00:28:45.394 [2024-07-25 10:17:30.400591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.394 [2024-07-25 10:17:30.400639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.394 qpair failed and we were unable to recover it. 00:28:45.394 [2024-07-25 10:17:30.400855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.394 [2024-07-25 10:17:30.400907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.394 qpair failed and we were unable to recover it. 00:28:45.394 [2024-07-25 10:17:30.401121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.394 [2024-07-25 10:17:30.401164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.394 qpair failed and we were unable to recover it. 00:28:45.394 [2024-07-25 10:17:30.401356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.394 [2024-07-25 10:17:30.401380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.394 qpair failed and we were unable to recover it. 00:28:45.394 [2024-07-25 10:17:30.401568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.394 [2024-07-25 10:17:30.401612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.394 qpair failed and we were unable to recover it. 00:28:45.394 [2024-07-25 10:17:30.401828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.394 [2024-07-25 10:17:30.401882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.394 qpair failed and we were unable to recover it. 00:28:45.394 [2024-07-25 10:17:30.402054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.394 [2024-07-25 10:17:30.402096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.394 qpair failed and we were unable to recover it. 00:28:45.394 [2024-07-25 10:17:30.402302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.394 [2024-07-25 10:17:30.402326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.394 qpair failed and we were unable to recover it. 00:28:45.394 [2024-07-25 10:17:30.402551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.394 [2024-07-25 10:17:30.402594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.394 qpair failed and we were unable to recover it. 00:28:45.394 [2024-07-25 10:17:30.402806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.394 [2024-07-25 10:17:30.402858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.394 qpair failed and we were unable to recover it. 
00:28:45.394 [2024-07-25 10:17:30.403037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.395 [2024-07-25 10:17:30.403080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.395 qpair failed and we were unable to recover it. 00:28:45.395 [2024-07-25 10:17:30.403225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.395 [2024-07-25 10:17:30.403249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.395 qpair failed and we were unable to recover it. 00:28:45.395 [2024-07-25 10:17:30.403501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.395 [2024-07-25 10:17:30.403537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.395 qpair failed and we were unable to recover it. 00:28:45.395 [2024-07-25 10:17:30.403684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.395 [2024-07-25 10:17:30.403729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.395 qpair failed and we were unable to recover it. 00:28:45.395 [2024-07-25 10:17:30.403930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.395 [2024-07-25 10:17:30.403972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.395 qpair failed and we were unable to recover it. 00:28:45.395 [2024-07-25 10:17:30.404178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.395 [2024-07-25 10:17:30.404221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.395 qpair failed and we were unable to recover it. 00:28:45.395 [2024-07-25 10:17:30.404462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.395 [2024-07-25 10:17:30.404486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.395 qpair failed and we were unable to recover it. 00:28:45.395 [2024-07-25 10:17:30.404681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.395 [2024-07-25 10:17:30.404738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.395 qpair failed and we were unable to recover it. 00:28:45.395 [2024-07-25 10:17:30.404933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.395 [2024-07-25 10:17:30.404962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.395 qpair failed and we were unable to recover it. 00:28:45.395 [2024-07-25 10:17:30.405136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.395 [2024-07-25 10:17:30.405179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.395 qpair failed and we were unable to recover it. 
00:28:45.395 [2024-07-25 10:17:30.405372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.395 [2024-07-25 10:17:30.405395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.395 qpair failed and we were unable to recover it. 00:28:45.395 [2024-07-25 10:17:30.405583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.395 [2024-07-25 10:17:30.405609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.395 qpair failed and we were unable to recover it. 00:28:45.395 [2024-07-25 10:17:30.405783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.395 [2024-07-25 10:17:30.405826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.395 qpair failed and we were unable to recover it. 00:28:45.395 [2024-07-25 10:17:30.405974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.395 [2024-07-25 10:17:30.406017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.395 qpair failed and we were unable to recover it. 00:28:45.395 [2024-07-25 10:17:30.406208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.395 [2024-07-25 10:17:30.406251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.395 qpair failed and we were unable to recover it. 00:28:45.395 [2024-07-25 10:17:30.406393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.395 [2024-07-25 10:17:30.406415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.395 qpair failed and we were unable to recover it. 00:28:45.395 [2024-07-25 10:17:30.406614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.395 [2024-07-25 10:17:30.406657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.395 qpair failed and we were unable to recover it. 00:28:45.395 [2024-07-25 10:17:30.406842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.395 [2024-07-25 10:17:30.406884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.395 qpair failed and we were unable to recover it. 00:28:45.395 [2024-07-25 10:17:30.407045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.395 [2024-07-25 10:17:30.407087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.395 qpair failed and we were unable to recover it. 00:28:45.395 [2024-07-25 10:17:30.407290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.395 [2024-07-25 10:17:30.407314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.395 qpair failed and we were unable to recover it. 
00:28:45.395 [2024-07-25 10:17:30.407534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.395 [2024-07-25 10:17:30.407565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.395 qpair failed and we were unable to recover it. 00:28:45.395 [2024-07-25 10:17:30.407774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.395 [2024-07-25 10:17:30.407817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.395 qpair failed and we were unable to recover it. 00:28:45.395 [2024-07-25 10:17:30.407965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.395 [2024-07-25 10:17:30.408008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.395 qpair failed and we were unable to recover it. 00:28:45.395 [2024-07-25 10:17:30.408170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.395 [2024-07-25 10:17:30.408211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.395 qpair failed and we were unable to recover it. 00:28:45.395 [2024-07-25 10:17:30.408416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.395 [2024-07-25 10:17:30.408445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.395 qpair failed and we were unable to recover it. 00:28:45.395 [2024-07-25 10:17:30.408615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.395 [2024-07-25 10:17:30.408658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.395 qpair failed and we were unable to recover it. 00:28:45.395 [2024-07-25 10:17:30.408830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.395 [2024-07-25 10:17:30.408872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.395 qpair failed and we were unable to recover it. 00:28:45.395 [2024-07-25 10:17:30.409086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.395 [2024-07-25 10:17:30.409144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.395 qpair failed and we were unable to recover it. 00:28:45.395 [2024-07-25 10:17:30.409328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.395 [2024-07-25 10:17:30.409351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.395 qpair failed and we were unable to recover it. 00:28:45.395 [2024-07-25 10:17:30.409521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.395 [2024-07-25 10:17:30.409565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.395 qpair failed and we were unable to recover it. 
00:28:45.395 [2024-07-25 10:17:30.409752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.395 [2024-07-25 10:17:30.409797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.395 qpair failed and we were unable to recover it. 00:28:45.395 [2024-07-25 10:17:30.409999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.395 [2024-07-25 10:17:30.410059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.395 qpair failed and we were unable to recover it. 00:28:45.395 [2024-07-25 10:17:30.410237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.395 [2024-07-25 10:17:30.410261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.395 qpair failed and we were unable to recover it. 00:28:45.395 [2024-07-25 10:17:30.410440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.395 [2024-07-25 10:17:30.410464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.395 qpair failed and we were unable to recover it. 00:28:45.395 [2024-07-25 10:17:30.410680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.395 [2024-07-25 10:17:30.410705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.395 qpair failed and we were unable to recover it. 00:28:45.395 [2024-07-25 10:17:30.410867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.395 [2024-07-25 10:17:30.410910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.395 qpair failed and we were unable to recover it. 00:28:45.395 [2024-07-25 10:17:30.411110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.395 [2024-07-25 10:17:30.411152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.395 qpair failed and we were unable to recover it. 00:28:45.395 [2024-07-25 10:17:30.411346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.395 [2024-07-25 10:17:30.411370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.395 qpair failed and we were unable to recover it. 00:28:45.396 [2024-07-25 10:17:30.411570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.396 [2024-07-25 10:17:30.411622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.396 qpair failed and we were unable to recover it. 00:28:45.396 [2024-07-25 10:17:30.411827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.396 [2024-07-25 10:17:30.411881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.396 qpair failed and we were unable to recover it. 
00:28:45.396 [2024-07-25 10:17:30.412099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.396 [2024-07-25 10:17:30.412141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.396 qpair failed and we were unable to recover it. 00:28:45.396 [2024-07-25 10:17:30.412351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.396 [2024-07-25 10:17:30.412375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.396 qpair failed and we were unable to recover it. 00:28:45.396 [2024-07-25 10:17:30.412571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.396 [2024-07-25 10:17:30.412615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.396 qpair failed and we were unable to recover it. 00:28:45.396 [2024-07-25 10:17:30.412820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.396 [2024-07-25 10:17:30.412879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.396 qpair failed and we were unable to recover it. 00:28:45.396 [2024-07-25 10:17:30.413068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.396 [2024-07-25 10:17:30.413111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.396 qpair failed and we were unable to recover it. 00:28:45.396 [2024-07-25 10:17:30.413278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.396 [2024-07-25 10:17:30.413302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.396 qpair failed and we were unable to recover it. 00:28:45.396 [2024-07-25 10:17:30.413516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.396 [2024-07-25 10:17:30.413559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.396 qpair failed and we were unable to recover it. 00:28:45.396 [2024-07-25 10:17:30.413765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.396 [2024-07-25 10:17:30.413823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.396 qpair failed and we were unable to recover it. 00:28:45.396 [2024-07-25 10:17:30.414017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.396 [2024-07-25 10:17:30.414060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.396 qpair failed and we were unable to recover it. 00:28:45.396 [2024-07-25 10:17:30.414256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.396 [2024-07-25 10:17:30.414280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.396 qpair failed and we were unable to recover it. 
00:28:45.396 [2024-07-25 10:17:30.414502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.396 [2024-07-25 10:17:30.414546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.396 qpair failed and we were unable to recover it. 00:28:45.396 [2024-07-25 10:17:30.414755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.396 [2024-07-25 10:17:30.414779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.396 qpair failed and we were unable to recover it. 00:28:45.396 [2024-07-25 10:17:30.415003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.396 [2024-07-25 10:17:30.415047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.396 qpair failed and we were unable to recover it. 00:28:45.396 [2024-07-25 10:17:30.415235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.396 [2024-07-25 10:17:30.415259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.396 qpair failed and we were unable to recover it. 00:28:45.396 [2024-07-25 10:17:30.415448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.396 [2024-07-25 10:17:30.415474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.396 qpair failed and we were unable to recover it. 00:28:45.396 [2024-07-25 10:17:30.415701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.396 [2024-07-25 10:17:30.415744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.396 qpair failed and we were unable to recover it. 00:28:45.396 [2024-07-25 10:17:30.415942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.396 [2024-07-25 10:17:30.415971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.396 qpair failed and we were unable to recover it. 00:28:45.396 [2024-07-25 10:17:30.416186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.396 [2024-07-25 10:17:30.416229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.396 qpair failed and we were unable to recover it. 00:28:45.396 [2024-07-25 10:17:30.416444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.396 [2024-07-25 10:17:30.416469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.396 qpair failed and we were unable to recover it. 00:28:45.396 [2024-07-25 10:17:30.416657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.396 [2024-07-25 10:17:30.416701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.396 qpair failed and we were unable to recover it. 
00:28:45.396 [2024-07-25 10:17:30.416858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.396 [2024-07-25 10:17:30.416900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.396 qpair failed and we were unable to recover it. 00:28:45.396 [2024-07-25 10:17:30.417105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.396 [2024-07-25 10:17:30.417148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.396 qpair failed and we were unable to recover it. 00:28:45.396 [2024-07-25 10:17:30.417310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.396 [2024-07-25 10:17:30.417334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.396 qpair failed and we were unable to recover it. 00:28:45.396 [2024-07-25 10:17:30.417564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.396 [2024-07-25 10:17:30.417621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.396 qpair failed and we were unable to recover it. 00:28:45.396 [2024-07-25 10:17:30.417823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.396 [2024-07-25 10:17:30.417865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.396 qpair failed and we were unable to recover it. 00:28:45.396 [2024-07-25 10:17:30.418063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.396 [2024-07-25 10:17:30.418105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.396 qpair failed and we were unable to recover it. 00:28:45.396 [2024-07-25 10:17:30.418241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.396 [2024-07-25 10:17:30.418264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.396 qpair failed and we were unable to recover it. 00:28:45.396 [2024-07-25 10:17:30.418491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.396 [2024-07-25 10:17:30.418515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.396 qpair failed and we were unable to recover it. 00:28:45.396 [2024-07-25 10:17:30.418718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.396 [2024-07-25 10:17:30.418761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.396 qpair failed and we were unable to recover it. 00:28:45.396 [2024-07-25 10:17:30.418904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.396 [2024-07-25 10:17:30.418948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.396 qpair failed and we were unable to recover it. 
00:28:45.396 [2024-07-25 10:17:30.419133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.396 [2024-07-25 10:17:30.419179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.396 qpair failed and we were unable to recover it. 00:28:45.396 [2024-07-25 10:17:30.419392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.396 [2024-07-25 10:17:30.419417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.396 qpair failed and we were unable to recover it. 00:28:45.396 [2024-07-25 10:17:30.419578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.396 [2024-07-25 10:17:30.419605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.396 qpair failed and we were unable to recover it. 00:28:45.396 [2024-07-25 10:17:30.419836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.396 [2024-07-25 10:17:30.419879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.396 qpair failed and we were unable to recover it. 00:28:45.396 [2024-07-25 10:17:30.420107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.396 [2024-07-25 10:17:30.420155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.396 qpair failed and we were unable to recover it. 00:28:45.397 [2024-07-25 10:17:30.420331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.397 [2024-07-25 10:17:30.420356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.397 qpair failed and we were unable to recover it. 00:28:45.397 [2024-07-25 10:17:30.420569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.397 [2024-07-25 10:17:30.420596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.397 qpair failed and we were unable to recover it. 00:28:45.397 [2024-07-25 10:17:30.420817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.397 [2024-07-25 10:17:30.420846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.397 qpair failed and we were unable to recover it. 00:28:45.397 [2024-07-25 10:17:30.421013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.397 [2024-07-25 10:17:30.421038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.397 qpair failed and we were unable to recover it. 00:28:45.397 [2024-07-25 10:17:30.421294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.397 [2024-07-25 10:17:30.421337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.397 qpair failed and we were unable to recover it. 
00:28:45.397 [2024-07-25 10:17:30.421557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.397 [2024-07-25 10:17:30.421602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.397 qpair failed and we were unable to recover it. 00:28:45.397 [2024-07-25 10:17:30.421807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.397 [2024-07-25 10:17:30.421851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.397 qpair failed and we were unable to recover it. 00:28:45.397 [2024-07-25 10:17:30.421998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.397 [2024-07-25 10:17:30.422042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.397 qpair failed and we were unable to recover it. 00:28:45.397 [2024-07-25 10:17:30.422258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.397 [2024-07-25 10:17:30.422285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.397 qpair failed and we were unable to recover it. 00:28:45.397 [2024-07-25 10:17:30.422480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.397 [2024-07-25 10:17:30.422511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.397 qpair failed and we were unable to recover it. 00:28:45.397 [2024-07-25 10:17:30.422771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.397 [2024-07-25 10:17:30.422814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.397 qpair failed and we were unable to recover it. 00:28:45.397 [2024-07-25 10:17:30.423006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.397 [2024-07-25 10:17:30.423051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.397 qpair failed and we were unable to recover it. 00:28:45.397 [2024-07-25 10:17:30.423250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.397 [2024-07-25 10:17:30.423276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.397 qpair failed and we were unable to recover it. 00:28:45.397 [2024-07-25 10:17:30.423450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.397 [2024-07-25 10:17:30.423476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.397 qpair failed and we were unable to recover it. 00:28:45.397 [2024-07-25 10:17:30.423608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.397 [2024-07-25 10:17:30.423651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.397 qpair failed and we were unable to recover it. 
00:28:45.397 [2024-07-25 10:17:30.423870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.397 [2024-07-25 10:17:30.423913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.397 qpair failed and we were unable to recover it. 00:28:45.397 [2024-07-25 10:17:30.424129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.397 [2024-07-25 10:17:30.424172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.397 qpair failed and we were unable to recover it. 00:28:45.397 [2024-07-25 10:17:30.424348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.397 [2024-07-25 10:17:30.424373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.397 qpair failed and we were unable to recover it. 00:28:45.397 [2024-07-25 10:17:30.424580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.397 [2024-07-25 10:17:30.424607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.397 qpair failed and we were unable to recover it. 00:28:45.397 [2024-07-25 10:17:30.424777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.397 [2024-07-25 10:17:30.424826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.397 qpair failed and we were unable to recover it. 00:28:45.397 [2024-07-25 10:17:30.425003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.397 [2024-07-25 10:17:30.425047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.397 qpair failed and we were unable to recover it. 00:28:45.397 [2024-07-25 10:17:30.425239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.397 [2024-07-25 10:17:30.425292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.397 qpair failed and we were unable to recover it. 00:28:45.397 [2024-07-25 10:17:30.425493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.397 [2024-07-25 10:17:30.425524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.397 qpair failed and we were unable to recover it. 00:28:45.397 [2024-07-25 10:17:30.425721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.397 [2024-07-25 10:17:30.425765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.397 qpair failed and we were unable to recover it. 00:28:45.397 [2024-07-25 10:17:30.425962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.397 [2024-07-25 10:17:30.426006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.397 qpair failed and we were unable to recover it. 
00:28:45.397 [2024-07-25 10:17:30.426188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.397 [2024-07-25 10:17:30.426214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.397 qpair failed and we were unable to recover it. 00:28:45.397 [2024-07-25 10:17:30.426401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.397 [2024-07-25 10:17:30.426434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.397 qpair failed and we were unable to recover it. 00:28:45.397 [2024-07-25 10:17:30.426627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.397 [2024-07-25 10:17:30.426671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.397 qpair failed and we were unable to recover it. 00:28:45.397 [2024-07-25 10:17:30.426884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.397 [2024-07-25 10:17:30.426928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.397 qpair failed and we were unable to recover it. 00:28:45.397 [2024-07-25 10:17:30.427149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.397 [2024-07-25 10:17:30.427195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.397 qpair failed and we were unable to recover it. 00:28:45.397 [2024-07-25 10:17:30.427394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.397 [2024-07-25 10:17:30.427420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.397 qpair failed and we were unable to recover it. 00:28:45.397 [2024-07-25 10:17:30.427610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.397 [2024-07-25 10:17:30.427653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.397 qpair failed and we were unable to recover it. 00:28:45.397 [2024-07-25 10:17:30.427822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.397 [2024-07-25 10:17:30.427865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.397 qpair failed and we were unable to recover it. 00:28:45.397 [2024-07-25 10:17:30.428043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.397 [2024-07-25 10:17:30.428096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.397 qpair failed and we were unable to recover it. 00:28:45.397 [2024-07-25 10:17:30.428253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.397 [2024-07-25 10:17:30.428279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.397 qpair failed and we were unable to recover it. 
00:28:45.397 [2024-07-25 10:17:30.428426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.397 [2024-07-25 10:17:30.428480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.397 qpair failed and we were unable to recover it. 00:28:45.397 [2024-07-25 10:17:30.428648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.397 [2024-07-25 10:17:30.428692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.397 qpair failed and we were unable to recover it. 00:28:45.397 [2024-07-25 10:17:30.428872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.397 [2024-07-25 10:17:30.428923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.398 qpair failed and we were unable to recover it. 00:28:45.398 [2024-07-25 10:17:30.429104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.398 [2024-07-25 10:17:30.429148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.398 qpair failed and we were unable to recover it. 00:28:45.398 [2024-07-25 10:17:30.429375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.398 [2024-07-25 10:17:30.429401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.398 qpair failed and we were unable to recover it. 00:28:45.398 [2024-07-25 10:17:30.429634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.398 [2024-07-25 10:17:30.429678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.398 qpair failed and we were unable to recover it. 00:28:45.398 [2024-07-25 10:17:30.429890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.398 [2024-07-25 10:17:30.429941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.398 qpair failed and we were unable to recover it. 00:28:45.398 [2024-07-25 10:17:30.430169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.398 [2024-07-25 10:17:30.430213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.398 qpair failed and we were unable to recover it. 00:28:45.398 [2024-07-25 10:17:30.430357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.398 [2024-07-25 10:17:30.430383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.398 qpair failed and we were unable to recover it. 00:28:45.398 [2024-07-25 10:17:30.430601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.398 [2024-07-25 10:17:30.430645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.398 qpair failed and we were unable to recover it. 
00:28:45.398 [2024-07-25 10:17:30.430867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.398 [2024-07-25 10:17:30.430924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.398 qpair failed and we were unable to recover it. 00:28:45.398 [2024-07-25 10:17:30.431109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.398 [2024-07-25 10:17:30.431153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.398 qpair failed and we were unable to recover it. 00:28:45.398 [2024-07-25 10:17:30.431317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.398 [2024-07-25 10:17:30.431343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.398 qpair failed and we were unable to recover it. 00:28:45.398 [2024-07-25 10:17:30.431562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.398 [2024-07-25 10:17:30.431605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.398 qpair failed and we were unable to recover it. 00:28:45.398 [2024-07-25 10:17:30.431787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.398 [2024-07-25 10:17:30.431840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.398 qpair failed and we were unable to recover it. 00:28:45.398 [2024-07-25 10:17:30.431996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.398 [2024-07-25 10:17:30.432039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.398 qpair failed and we were unable to recover it. 00:28:45.398 [2024-07-25 10:17:30.432226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.398 [2024-07-25 10:17:30.432251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.398 qpair failed and we were unable to recover it. 00:28:45.398 [2024-07-25 10:17:30.432439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.398 [2024-07-25 10:17:30.432481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.398 qpair failed and we were unable to recover it. 00:28:45.398 [2024-07-25 10:17:30.432649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.398 [2024-07-25 10:17:30.432676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.398 qpair failed and we were unable to recover it. 00:28:45.398 [2024-07-25 10:17:30.432824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.398 [2024-07-25 10:17:30.432868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.398 qpair failed and we were unable to recover it. 
00:28:45.398 [2024-07-25 10:17:30.433077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.398 [2024-07-25 10:17:30.433119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.398 qpair failed and we were unable to recover it. 00:28:45.398 [2024-07-25 10:17:30.433295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.398 [2024-07-25 10:17:30.433321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.398 qpair failed and we were unable to recover it. 00:28:45.398 [2024-07-25 10:17:30.433493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.398 [2024-07-25 10:17:30.433524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.398 qpair failed and we were unable to recover it. 00:28:45.398 [2024-07-25 10:17:30.433747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.398 [2024-07-25 10:17:30.433789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.398 qpair failed and we were unable to recover it. 00:28:45.398 [2024-07-25 10:17:30.433986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.398 [2024-07-25 10:17:30.434028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.398 qpair failed and we were unable to recover it. 00:28:45.398 [2024-07-25 10:17:30.434237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.398 [2024-07-25 10:17:30.434280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.398 qpair failed and we were unable to recover it. 00:28:45.398 [2024-07-25 10:17:30.434444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.398 [2024-07-25 10:17:30.434502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.398 qpair failed and we were unable to recover it. 00:28:45.398 [2024-07-25 10:17:30.434681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.398 [2024-07-25 10:17:30.434725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.398 qpair failed and we were unable to recover it. 00:28:45.398 [2024-07-25 10:17:30.434936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.398 [2024-07-25 10:17:30.434980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.398 qpair failed and we were unable to recover it. 00:28:45.398 [2024-07-25 10:17:30.435167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.398 [2024-07-25 10:17:30.435210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.398 qpair failed and we were unable to recover it. 
00:28:45.398 [2024-07-25 10:17:30.435416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.398 [2024-07-25 10:17:30.435448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.398 qpair failed and we were unable to recover it. 00:28:45.398 [2024-07-25 10:17:30.435674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.398 [2024-07-25 10:17:30.435718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.398 qpair failed and we were unable to recover it. 00:28:45.398 [2024-07-25 10:17:30.435898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.398 [2024-07-25 10:17:30.435941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.398 qpair failed and we were unable to recover it. 00:28:45.398 [2024-07-25 10:17:30.436130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.398 [2024-07-25 10:17:30.436174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.398 qpair failed and we were unable to recover it. 00:28:45.398 [2024-07-25 10:17:30.436390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.398 [2024-07-25 10:17:30.436437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.398 qpair failed and we were unable to recover it. 00:28:45.398 [2024-07-25 10:17:30.436623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.398 [2024-07-25 10:17:30.436667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.398 qpair failed and we were unable to recover it. 00:28:45.398 [2024-07-25 10:17:30.436875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.398 [2024-07-25 10:17:30.436917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.398 qpair failed and we were unable to recover it. 00:28:45.398 [2024-07-25 10:17:30.437085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.398 [2024-07-25 10:17:30.437129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.398 qpair failed and we were unable to recover it. 00:28:45.398 [2024-07-25 10:17:30.437324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.398 [2024-07-25 10:17:30.437350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.398 qpair failed and we were unable to recover it. 00:28:45.398 [2024-07-25 10:17:30.437499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.398 [2024-07-25 10:17:30.437527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.398 qpair failed and we were unable to recover it. 
00:28:45.398 [2024-07-25 10:17:30.437698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.399 [2024-07-25 10:17:30.437745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420
00:28:45.399 qpair failed and we were unable to recover it.
00:28:45.399 [2024-07-25 10:17:30.437930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.399 [2024-07-25 10:17:30.437974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420
00:28:45.399 qpair failed and we were unable to recover it.
00:28:45.399 [2024-07-25 10:17:30.438197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.399 [2024-07-25 10:17:30.438247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420
00:28:45.399 qpair failed and we were unable to recover it.
00:28:45.399 [2024-07-25 10:17:30.438436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.399 [2024-07-25 10:17:30.438463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420
00:28:45.399 qpair failed and we were unable to recover it.
00:28:45.399 [2024-07-25 10:17:30.438660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.399 [2024-07-25 10:17:30.438704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420
00:28:45.399 qpair failed and we were unable to recover it.
00:28:45.399 [2024-07-25 10:17:30.438883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.399 [2024-07-25 10:17:30.438910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420
00:28:45.399 qpair failed and we were unable to recover it.
00:28:45.399 [2024-07-25 10:17:30.439150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.399 [2024-07-25 10:17:30.439199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420
00:28:45.399 qpair failed and we were unable to recover it.
00:28:45.399 [2024-07-25 10:17:30.439369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.399 [2024-07-25 10:17:30.439394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420
00:28:45.399 qpair failed and we were unable to recover it.
00:28:45.399 [2024-07-25 10:17:30.439546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.399 [2024-07-25 10:17:30.439572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420
00:28:45.399 qpair failed and we were unable to recover it.
00:28:45.399 [2024-07-25 10:17:30.439717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.399 [2024-07-25 10:17:30.439764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420
00:28:45.399 qpair failed and we were unable to recover it.
00:28:45.399 [2024-07-25 10:17:30.439938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.399 [2024-07-25 10:17:30.439982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420
00:28:45.399 qpair failed and we were unable to recover it.
00:28:45.399 [2024-07-25 10:17:30.440191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.399 [2024-07-25 10:17:30.440234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420
00:28:45.399 qpair failed and we were unable to recover it.
00:28:45.399 [2024-07-25 10:17:30.440415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.399 [2024-07-25 10:17:30.440447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420
00:28:45.399 qpair failed and we were unable to recover it.
00:28:45.399 [2024-07-25 10:17:30.440619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.399 [2024-07-25 10:17:30.440646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420
00:28:45.399 qpair failed and we were unable to recover it.
00:28:45.399 [2024-07-25 10:17:30.440873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.399 [2024-07-25 10:17:30.440927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420
00:28:45.399 qpair failed and we were unable to recover it.
00:28:45.399 [2024-07-25 10:17:30.441106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.399 [2024-07-25 10:17:30.441148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420
00:28:45.399 qpair failed and we were unable to recover it.
00:28:45.399 [2024-07-25 10:17:30.441317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.399 [2024-07-25 10:17:30.441344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420
00:28:45.399 qpair failed and we were unable to recover it.
00:28:45.399 [2024-07-25 10:17:30.441475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.399 [2024-07-25 10:17:30.441503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420
00:28:45.399 qpair failed and we were unable to recover it.
00:28:45.399 [2024-07-25 10:17:30.441673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.399 [2024-07-25 10:17:30.441718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420
00:28:45.399 qpair failed and we were unable to recover it.
00:28:45.399 [2024-07-25 10:17:30.441876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.399 [2024-07-25 10:17:30.441920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420
00:28:45.399 qpair failed and we were unable to recover it.
00:28:45.399 [2024-07-25 10:17:30.442104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.399 [2024-07-25 10:17:30.442147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420
00:28:45.399 qpair failed and we were unable to recover it.
00:28:45.399 [2024-07-25 10:17:30.442328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.399 [2024-07-25 10:17:30.442354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420
00:28:45.399 qpair failed and we were unable to recover it.
00:28:45.399 [2024-07-25 10:17:30.442557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.399 [2024-07-25 10:17:30.442601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420
00:28:45.399 qpair failed and we were unable to recover it.
00:28:45.399 [2024-07-25 10:17:30.442799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.399 [2024-07-25 10:17:30.442844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420
00:28:45.399 qpair failed and we were unable to recover it.
00:28:45.399 [2024-07-25 10:17:30.443024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.399 [2024-07-25 10:17:30.443066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420
00:28:45.399 qpair failed and we were unable to recover it.
00:28:45.399 [2024-07-25 10:17:30.443233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.399 [2024-07-25 10:17:30.443259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420
00:28:45.399 qpair failed and we were unable to recover it.
00:28:45.399 [2024-07-25 10:17:30.443436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.399 [2024-07-25 10:17:30.443462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420
00:28:45.399 qpair failed and we were unable to recover it.
00:28:45.399 [2024-07-25 10:17:30.443622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.399 [2024-07-25 10:17:30.443648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420
00:28:45.399 qpair failed and we were unable to recover it.
00:28:45.399 [2024-07-25 10:17:30.443838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.399 [2024-07-25 10:17:30.443867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420
00:28:45.399 qpair failed and we were unable to recover it.
00:28:45.399 [2024-07-25 10:17:30.444050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.399 [2024-07-25 10:17:30.444092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420
00:28:45.399 qpair failed and we were unable to recover it.
00:28:45.399 [2024-07-25 10:17:30.444260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.399 [2024-07-25 10:17:30.444285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420
00:28:45.399 qpair failed and we were unable to recover it.
00:28:45.399 [2024-07-25 10:17:30.444441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.399 [2024-07-25 10:17:30.444468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420
00:28:45.399 qpair failed and we were unable to recover it.
00:28:45.399 [2024-07-25 10:17:30.444613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.400 [2024-07-25 10:17:30.444656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420
00:28:45.400 qpair failed and we were unable to recover it.
00:28:45.400 [2024-07-25 10:17:30.444833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.400 [2024-07-25 10:17:30.444876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420
00:28:45.400 qpair failed and we were unable to recover it.
00:28:45.400 [2024-07-25 10:17:30.445107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.400 [2024-07-25 10:17:30.445160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420
00:28:45.400 qpair failed and we were unable to recover it.
00:28:45.400 [2024-07-25 10:17:30.445373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.400 [2024-07-25 10:17:30.445400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420
00:28:45.400 qpair failed and we were unable to recover it.
00:28:45.400 [2024-07-25 10:17:30.445589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.400 [2024-07-25 10:17:30.445634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420
00:28:45.400 qpair failed and we were unable to recover it.
00:28:45.400 [2024-07-25 10:17:30.445820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.400 [2024-07-25 10:17:30.445864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420
00:28:45.400 qpair failed and we were unable to recover it.
00:28:45.400 [2024-07-25 10:17:30.446042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.400 [2024-07-25 10:17:30.446101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420
00:28:45.400 qpair failed and we were unable to recover it.
00:28:45.400 [2024-07-25 10:17:30.446276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.400 [2024-07-25 10:17:30.446303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420
00:28:45.400 qpair failed and we were unable to recover it.
00:28:45.400 [2024-07-25 10:17:30.446485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.400 [2024-07-25 10:17:30.446537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420
00:28:45.400 qpair failed and we were unable to recover it.
00:28:45.400 [2024-07-25 10:17:30.446762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.400 [2024-07-25 10:17:30.446791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420
00:28:45.400 qpair failed and we were unable to recover it.
00:28:45.400 [2024-07-25 10:17:30.446974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.400 [2024-07-25 10:17:30.447036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420
00:28:45.400 qpair failed and we were unable to recover it.
00:28:45.400 [2024-07-25 10:17:30.447213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.400 [2024-07-25 10:17:30.447239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420
00:28:45.400 qpair failed and we were unable to recover it.
00:28:45.400 [2024-07-25 10:17:30.447434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.400 [2024-07-25 10:17:30.447461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420
00:28:45.400 qpair failed and we were unable to recover it.
00:28:45.400 [2024-07-25 10:17:30.447671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.400 [2024-07-25 10:17:30.447715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420
00:28:45.400 qpair failed and we were unable to recover it.
00:28:45.400 [2024-07-25 10:17:30.447930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.400 [2024-07-25 10:17:30.447996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420
00:28:45.400 qpair failed and we were unable to recover it.
00:28:45.400 [2024-07-25 10:17:30.448222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.400 [2024-07-25 10:17:30.448265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420
00:28:45.400 qpair failed and we were unable to recover it.
00:28:45.400 [2024-07-25 10:17:30.448440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.400 [2024-07-25 10:17:30.448467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420
00:28:45.400 qpair failed and we were unable to recover it.
00:28:45.400 [2024-07-25 10:17:30.448638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.400 [2024-07-25 10:17:30.448665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420
00:28:45.400 qpair failed and we were unable to recover it.
00:28:45.400 [2024-07-25 10:17:30.448857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.400 [2024-07-25 10:17:30.448909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420
00:28:45.400 qpair failed and we were unable to recover it.
00:28:45.400 [2024-07-25 10:17:30.449080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.400 [2024-07-25 10:17:30.449123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420
00:28:45.400 qpair failed and we were unable to recover it.
00:28:45.400 [2024-07-25 10:17:30.449288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.400 [2024-07-25 10:17:30.449315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420
00:28:45.400 qpair failed and we were unable to recover it.
00:28:45.400 [2024-07-25 10:17:30.449510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.400 [2024-07-25 10:17:30.449537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420
00:28:45.400 qpair failed and we were unable to recover it.
00:28:45.400 [2024-07-25 10:17:30.449767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.400 [2024-07-25 10:17:30.449823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420
00:28:45.400 qpair failed and we were unable to recover it.
00:28:45.400 [2024-07-25 10:17:30.450027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.400 [2024-07-25 10:17:30.450072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420
00:28:45.400 qpair failed and we were unable to recover it.
00:28:45.400 [2024-07-25 10:17:30.450280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.400 [2024-07-25 10:17:30.450306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420
00:28:45.400 qpair failed and we were unable to recover it.
00:28:45.400 [2024-07-25 10:17:30.450520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.400 [2024-07-25 10:17:30.450547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420
00:28:45.400 qpair failed and we were unable to recover it.
00:28:45.400 [2024-07-25 10:17:30.450708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.400 [2024-07-25 10:17:30.450752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420
00:28:45.400 qpair failed and we were unable to recover it.
00:28:45.400 [2024-07-25 10:17:30.450912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.400 [2024-07-25 10:17:30.450941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420
00:28:45.400 qpair failed and we were unable to recover it.
00:28:45.400 [2024-07-25 10:17:30.451103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.400 [2024-07-25 10:17:30.451146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420
00:28:45.400 qpair failed and we were unable to recover it.
00:28:45.400 [2024-07-25 10:17:30.451340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.400 [2024-07-25 10:17:30.451366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420
00:28:45.400 qpair failed and we were unable to recover it.
00:28:45.400 [2024-07-25 10:17:30.451541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.400 [2024-07-25 10:17:30.451585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420
00:28:45.400 qpair failed and we were unable to recover it.
00:28:45.400 [2024-07-25 10:17:30.451730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.400 [2024-07-25 10:17:30.451774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420
00:28:45.400 qpair failed and we were unable to recover it.
00:28:45.400 [2024-07-25 10:17:30.451924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.400 [2024-07-25 10:17:30.451966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420
00:28:45.400 qpair failed and we were unable to recover it.
00:28:45.400 [2024-07-25 10:17:30.452150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.400 [2024-07-25 10:17:30.452194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420
00:28:45.400 qpair failed and we were unable to recover it.
00:28:45.400 [2024-07-25 10:17:30.452383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.400 [2024-07-25 10:17:30.452424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420
00:28:45.400 qpair failed and we were unable to recover it.
00:28:45.400 [2024-07-25 10:17:30.452614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.400 [2024-07-25 10:17:30.452658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420
00:28:45.400 qpair failed and we were unable to recover it.
00:28:45.400 [2024-07-25 10:17:30.452821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.400 [2024-07-25 10:17:30.452864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420
00:28:45.400 qpair failed and we were unable to recover it.
00:28:45.400 [2024-07-25 10:17:30.453061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.400 [2024-07-25 10:17:30.453105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420
00:28:45.401 qpair failed and we were unable to recover it.
00:28:45.401 [2024-07-25 10:17:30.453291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.401 [2024-07-25 10:17:30.453316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420
00:28:45.401 qpair failed and we were unable to recover it.
00:28:45.401 [2024-07-25 10:17:30.453543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.401 [2024-07-25 10:17:30.453588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420
00:28:45.401 qpair failed and we were unable to recover it.
00:28:45.401 [2024-07-25 10:17:30.453793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.401 [2024-07-25 10:17:30.453836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420
00:28:45.401 qpair failed and we were unable to recover it.
00:28:45.401 [2024-07-25 10:17:30.454045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.401 [2024-07-25 10:17:30.454090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420
00:28:45.401 qpair failed and we were unable to recover it.
00:28:45.401 [2024-07-25 10:17:30.454248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.401 [2024-07-25 10:17:30.454274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420
00:28:45.401 qpair failed and we were unable to recover it.
00:28:45.401 [2024-07-25 10:17:30.454467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.401 [2024-07-25 10:17:30.454494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420
00:28:45.401 qpair failed and we were unable to recover it.
00:28:45.401 [2024-07-25 10:17:30.454756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.401 [2024-07-25 10:17:30.454799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420
00:28:45.401 qpair failed and we were unable to recover it.
00:28:45.401 [2024-07-25 10:17:30.455029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.401 [2024-07-25 10:17:30.455072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420
00:28:45.401 qpair failed and we were unable to recover it.
00:28:45.401 [2024-07-25 10:17:30.455279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.401 [2024-07-25 10:17:30.455331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420
00:28:45.401 qpair failed and we were unable to recover it.
00:28:45.401 [2024-07-25 10:17:30.455516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.401 [2024-07-25 10:17:30.455543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420
00:28:45.401 qpair failed and we were unable to recover it.
00:28:45.401 [2024-07-25 10:17:30.455741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.401 [2024-07-25 10:17:30.455788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420
00:28:45.401 qpair failed and we were unable to recover it.
00:28:45.401 [2024-07-25 10:17:30.456015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.401 [2024-07-25 10:17:30.456058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420
00:28:45.401 qpair failed and we were unable to recover it.
00:28:45.401 [2024-07-25 10:17:30.456245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.401 [2024-07-25 10:17:30.456271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420
00:28:45.401 qpair failed and we were unable to recover it.
00:28:45.401 [2024-07-25 10:17:30.456464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.401 [2024-07-25 10:17:30.456507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420
00:28:45.401 qpair failed and we were unable to recover it.
00:28:45.401 [2024-07-25 10:17:30.456727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.401 [2024-07-25 10:17:30.456773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420
00:28:45.401 qpair failed and we were unable to recover it.
00:28:45.401 [2024-07-25 10:17:30.456950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.401 [2024-07-25 10:17:30.456994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420
00:28:45.401 qpair failed and we were unable to recover it.
00:28:45.401 [2024-07-25 10:17:30.457162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.401 [2024-07-25 10:17:30.457235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420
00:28:45.401 qpair failed and we were unable to recover it.
00:28:45.401 [2024-07-25 10:17:30.457435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.401 [2024-07-25 10:17:30.457461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420
00:28:45.401 qpair failed and we were unable to recover it.
00:28:45.401 [2024-07-25 10:17:30.457633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.401 [2024-07-25 10:17:30.457677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420
00:28:45.401 qpair failed and we were unable to recover it.
00:28:45.401 [2024-07-25 10:17:30.457892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.401 [2024-07-25 10:17:30.457937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420
00:28:45.401 qpair failed and we were unable to recover it.
00:28:45.401 [2024-07-25 10:17:30.458149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.401 [2024-07-25 10:17:30.458199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420
00:28:45.401 qpair failed and we were unable to recover it.
00:28:45.401 [2024-07-25 10:17:30.458368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.401 [2024-07-25 10:17:30.458410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420
00:28:45.401 qpair failed and we were unable to recover it.
00:28:45.401 [2024-07-25 10:17:30.458601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.401 [2024-07-25 10:17:30.458646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420
00:28:45.401 qpair failed and we were unable to recover it.
00:28:45.401 [2024-07-25 10:17:30.458818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.401 [2024-07-25 10:17:30.458862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420
00:28:45.401 qpair failed and we were unable to recover it.
00:28:45.401 [2024-07-25 10:17:30.459052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.401 [2024-07-25 10:17:30.459103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420
00:28:45.401 qpair failed and we were unable to recover it.
00:28:45.401 [2024-07-25 10:17:30.459297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.401 [2024-07-25 10:17:30.459324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420
00:28:45.401 qpair failed and we were unable to recover it.
00:28:45.401 [2024-07-25 10:17:30.459508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.401 [2024-07-25 10:17:30.459552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420
00:28:45.401 qpair failed and we were unable to recover it.
00:28:45.401 [2024-07-25 10:17:30.459725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.401 [2024-07-25 10:17:30.459770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420
00:28:45.401 qpair failed and we were unable to recover it.
00:28:45.401 [2024-07-25 10:17:30.459950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.401 [2024-07-25 10:17:30.460038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420
00:28:45.401 qpair failed and we were unable to recover it.
00:28:45.401 [2024-07-25 10:17:30.460203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.401 [2024-07-25 10:17:30.460229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420
00:28:45.401 qpair failed and we were unable to recover it.
00:28:45.401 [2024-07-25 10:17:30.460439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.401 [2024-07-25 10:17:30.460465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420
00:28:45.401 qpair failed and we were unable to recover it.
00:28:45.401 [2024-07-25 10:17:30.460663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.401 [2024-07-25 10:17:30.460689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420
00:28:45.401 qpair failed and we were unable to recover it.
00:28:45.401 [2024-07-25 10:17:30.460909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.401 [2024-07-25 10:17:30.460965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420
00:28:45.401 qpair failed and we were unable to recover it.
00:28:45.401 [2024-07-25 10:17:30.461111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.401 [2024-07-25 10:17:30.461155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420
00:28:45.401 qpair failed and we were unable to recover it.
00:28:45.401 [2024-07-25 10:17:30.461324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.401 [2024-07-25 10:17:30.461350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420
00:28:45.401 qpair failed and we were unable to recover it.
00:28:45.401 [2024-07-25 10:17:30.461529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.401 [2024-07-25 10:17:30.461556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420
00:28:45.401 qpair failed and we were unable to recover it.
00:28:45.401 [2024-07-25 10:17:30.461775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.401 [2024-07-25 10:17:30.461831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420
00:28:45.401 qpair failed and we were unable to recover it.
00:28:45.401 [2024-07-25 10:17:30.462028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.402 [2024-07-25 10:17:30.462071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420
00:28:45.402 qpair failed and we were unable to recover it.
00:28:45.402 [2024-07-25 10:17:30.462266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.402 [2024-07-25 10:17:30.462292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420
00:28:45.402 qpair failed and we were unable to recover it.
00:28:45.402 [2024-07-25 10:17:30.462459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.402 [2024-07-25 10:17:30.462486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420
00:28:45.402 qpair failed and we were unable to recover it.
00:28:45.402 [2024-07-25 10:17:30.462662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.402 [2024-07-25 10:17:30.462709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420
00:28:45.402 qpair failed and we were unable to recover it.
00:28:45.402 [2024-07-25 10:17:30.462940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.402 [2024-07-25 10:17:30.462983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420
00:28:45.402 qpair failed and we were unable to recover it.
00:28:45.402 [2024-07-25 10:17:30.463153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.402 [2024-07-25 10:17:30.463197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420
00:28:45.402 qpair failed and we were unable to recover it.
00:28:45.402 [2024-07-25 10:17:30.463404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.402 [2024-07-25 10:17:30.463452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420
00:28:45.402 qpair failed and we were unable to recover it.
00:28:45.402 [2024-07-25 10:17:30.463623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.402 [2024-07-25 10:17:30.463668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420
00:28:45.402 qpair failed and we were unable to recover it.
00:28:45.402 [2024-07-25 10:17:30.463885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.402 [2024-07-25 10:17:30.463930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420
00:28:45.402 qpair failed and we were unable to recover it.
00:28:45.402 [2024-07-25 10:17:30.464161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.402 [2024-07-25 10:17:30.464205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420
00:28:45.402 qpair failed and we were unable to recover it.
00:28:45.402 [2024-07-25 10:17:30.464351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.402 [2024-07-25 10:17:30.464377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420
00:28:45.402 qpair failed and we were unable to recover it.
00:28:45.402 [2024-07-25 10:17:30.464546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.402 [2024-07-25 10:17:30.464590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420
00:28:45.402 qpair failed and we were unable to recover it.
00:28:45.402 [2024-07-25 10:17:30.464776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.402 [2024-07-25 10:17:30.464818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420
00:28:45.402 qpair failed and we were unable to recover it.
00:28:45.402 [2024-07-25 10:17:30.465011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.402 [2024-07-25 10:17:30.465059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420
00:28:45.402 qpair failed and we were unable to recover it.
00:28:45.402 [2024-07-25 10:17:30.465239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.402 [2024-07-25 10:17:30.465282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420
00:28:45.402 qpair failed and we were unable to recover it.
00:28:45.402 [2024-07-25 10:17:30.465476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.402 [2024-07-25 10:17:30.465522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420
00:28:45.402 qpair failed and we were unable to recover it.
00:28:45.402 [2024-07-25 10:17:30.465688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.402 [2024-07-25 10:17:30.465717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420
00:28:45.402 qpair failed and we were unable to recover it.
00:28:45.402 [2024-07-25 10:17:30.465928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.402 [2024-07-25 10:17:30.465972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420
00:28:45.402 qpair failed and we were unable to recover it.
00:28:45.402 [2024-07-25 10:17:30.466113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.402 [2024-07-25 10:17:30.466157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420
00:28:45.402 qpair failed and we were unable to recover it.
00:28:45.402 [2024-07-25 10:17:30.466299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.402 [2024-07-25 10:17:30.466325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420
00:28:45.402 qpair failed and we were unable to recover it.
00:28:45.402 [2024-07-25 10:17:30.466537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.402 [2024-07-25 10:17:30.466582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420
00:28:45.402 qpair failed and we were unable to recover it.
00:28:45.402 [2024-07-25 10:17:30.466732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.402 [2024-07-25 10:17:30.466757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420
00:28:45.402 qpair failed and we were unable to recover it.
00:28:45.402 [2024-07-25 10:17:30.466951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.402 [2024-07-25 10:17:30.466976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420
00:28:45.402 qpair failed and we were unable to recover it.
00:28:45.402 [2024-07-25 10:17:30.467144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.402 [2024-07-25 10:17:30.467204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420
00:28:45.402 qpair failed and we were unable to recover it.
00:28:45.402 [2024-07-25 10:17:30.467395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.402 [2024-07-25 10:17:30.467422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420
00:28:45.402 qpair failed and we were unable to recover it.
00:28:45.402 [2024-07-25 10:17:30.467624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.402 [2024-07-25 10:17:30.467651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420
00:28:45.402 qpair failed and we were unable to recover it.
00:28:45.402 [2024-07-25 10:17:30.467861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.402 [2024-07-25 10:17:30.467905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420
00:28:45.402 qpair failed and we were unable to recover it.
00:28:45.402 [2024-07-25 10:17:30.468110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.402 [2024-07-25 10:17:30.468159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420
00:28:45.402 qpair failed and we were unable to recover it.
00:28:45.402 [2024-07-25 10:17:30.468371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.402 [2024-07-25 10:17:30.468397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420
00:28:45.402 qpair failed and we were unable to recover it.
00:28:45.402 [2024-07-25 10:17:30.468615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.402 [2024-07-25 10:17:30.468658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420
00:28:45.402 qpair failed and we were unable to recover it.
00:28:45.402 [2024-07-25 10:17:30.468882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.402 [2024-07-25 10:17:30.468926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420
00:28:45.402 qpair failed and we were unable to recover it.
00:28:45.402 [2024-07-25 10:17:30.469170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.402 [2024-07-25 10:17:30.469223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420
00:28:45.402 qpair failed and we were unable to recover it.
00:28:45.402 [2024-07-25 10:17:30.469389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.402 [2024-07-25 10:17:30.469414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420
00:28:45.402 qpair failed and we were unable to recover it.
00:28:45.402 [2024-07-25 10:17:30.469620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.402 [2024-07-25 10:17:30.469664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420
00:28:45.402 qpair failed and we were unable to recover it.
00:28:45.402 [2024-07-25 10:17:30.469853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.402 [2024-07-25 10:17:30.469883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420
00:28:45.402 qpair failed and we were unable to recover it.
00:28:45.402 [2024-07-25 10:17:30.470126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.402 [2024-07-25 10:17:30.470177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420
00:28:45.402 qpair failed and we were unable to recover it.
00:28:45.402 [2024-07-25 10:17:30.470358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.402 [2024-07-25 10:17:30.470384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420
00:28:45.402 qpair failed and we were unable to recover it.
00:28:45.402 [2024-07-25 10:17:30.470608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.402 [2024-07-25 10:17:30.470653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420
00:28:45.402 qpair failed and we were unable to recover it.
00:28:45.402 [2024-07-25 10:17:30.470864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.403 [2024-07-25 10:17:30.470908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420
00:28:45.403 qpair failed and we were unable to recover it.
00:28:45.403 [2024-07-25 10:17:30.471063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.403 [2024-07-25 10:17:30.471115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420
00:28:45.403 qpair failed and we were unable to recover it.
00:28:45.403 [2024-07-25 10:17:30.471284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.403 [2024-07-25 10:17:30.471311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420
00:28:45.403 qpair failed and we were unable to recover it.
00:28:45.403 [2024-07-25 10:17:30.471480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.403 [2024-07-25 10:17:30.471511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420
00:28:45.403 qpair failed and we were unable to recover it.
00:28:45.403 [2024-07-25 10:17:30.471710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.403 [2024-07-25 10:17:30.471754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420
00:28:45.403 qpair failed and we were unable to recover it.
00:28:45.403 [2024-07-25 10:17:30.471939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.403 [2024-07-25 10:17:30.471996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420
00:28:45.403 qpair failed and we were unable to recover it.
00:28:45.403 [2024-07-25 10:17:30.472209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.403 [2024-07-25 10:17:30.472253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420
00:28:45.403 qpair failed and we were unable to recover it.
00:28:45.403 [2024-07-25 10:17:30.472453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.403 [2024-07-25 10:17:30.472480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420
00:28:45.403 qpair failed and we were unable to recover it.
00:28:45.403 [2024-07-25 10:17:30.472662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.403 [2024-07-25 10:17:30.472688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420
00:28:45.403 qpair failed and we were unable to recover it.
00:28:45.403 [2024-07-25 10:17:30.472926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.403 [2024-07-25 10:17:30.472979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420
00:28:45.403 qpair failed and we were unable to recover it.
00:28:45.403 [2024-07-25 10:17:30.473180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.403 [2024-07-25 10:17:30.473225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420
00:28:45.403 qpair failed and we were unable to recover it.
00:28:45.403 [2024-07-25 10:17:30.473378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.403 [2024-07-25 10:17:30.473404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420
00:28:45.403 qpair failed and we were unable to recover it.
00:28:45.403 [2024-07-25 10:17:30.473603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.403 [2024-07-25 10:17:30.473630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420
00:28:45.403 qpair failed and we were unable to recover it.
00:28:45.403 [2024-07-25 10:17:30.473866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.403 [2024-07-25 10:17:30.473915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420
00:28:45.403 qpair failed and we were unable to recover it.
00:28:45.403 [2024-07-25 10:17:30.474130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.403 [2024-07-25 10:17:30.474174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420
00:28:45.403 qpair failed and we were unable to recover it.
00:28:45.403 [2024-07-25 10:17:30.474376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.403 [2024-07-25 10:17:30.474406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420
00:28:45.403 qpair failed and we were unable to recover it.
00:28:45.403 [2024-07-25 10:17:30.474593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.403 [2024-07-25 10:17:30.474619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420
00:28:45.403 qpair failed and we were unable to recover it.
00:28:45.403 [2024-07-25 10:17:30.474858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.403 [2024-07-25 10:17:30.474910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420
00:28:45.403 qpair failed and we were unable to recover it.
00:28:45.403 [2024-07-25 10:17:30.475114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.403 [2024-07-25 10:17:30.475157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420
00:28:45.403 qpair failed and we were unable to recover it.
00:28:45.403 [2024-07-25 10:17:30.475321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.403 [2024-07-25 10:17:30.475348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420
00:28:45.403 qpair failed and we were unable to recover it.
00:28:45.403 [2024-07-25 10:17:30.475544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.403 [2024-07-25 10:17:30.475572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420
00:28:45.403 qpair failed and we were unable to recover it.
00:28:45.403 [2024-07-25 10:17:30.475775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.403 [2024-07-25 10:17:30.475819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420
00:28:45.403 qpair failed and we were unable to recover it.
00:28:45.403 [2024-07-25 10:17:30.476041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.403 [2024-07-25 10:17:30.476084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420
00:28:45.403 qpair failed and we were unable to recover it.
00:28:45.403 [2024-07-25 10:17:30.476280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.403 [2024-07-25 10:17:30.476307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420
00:28:45.403 qpair failed and we were unable to recover it.
00:28:45.403 [2024-07-25 10:17:30.476456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.403 [2024-07-25 10:17:30.476484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420
00:28:45.403 qpair failed and we were unable to recover it.
00:28:45.403 [2024-07-25 10:17:30.476667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.403 [2024-07-25 10:17:30.476711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.403 qpair failed and we were unable to recover it. 00:28:45.403 [2024-07-25 10:17:30.476936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.403 [2024-07-25 10:17:30.476980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.403 qpair failed and we were unable to recover it. 00:28:45.403 [2024-07-25 10:17:30.477180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.403 [2024-07-25 10:17:30.477224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.403 qpair failed and we were unable to recover it. 00:28:45.403 [2024-07-25 10:17:30.477439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.403 [2024-07-25 10:17:30.477466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.403 qpair failed and we were unable to recover it. 00:28:45.403 [2024-07-25 10:17:30.477654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.403 [2024-07-25 10:17:30.477688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.403 qpair failed and we were unable to recover it. 00:28:45.403 [2024-07-25 10:17:30.477884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.403 [2024-07-25 10:17:30.477927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.403 qpair failed and we were unable to recover it. 00:28:45.403 [2024-07-25 10:17:30.478145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.403 [2024-07-25 10:17:30.478188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.403 qpair failed and we were unable to recover it. 00:28:45.403 [2024-07-25 10:17:30.478369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.403 [2024-07-25 10:17:30.478394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.403 qpair failed and we were unable to recover it. 00:28:45.403 [2024-07-25 10:17:30.478549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.403 [2024-07-25 10:17:30.478576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.403 qpair failed and we were unable to recover it. 00:28:45.403 [2024-07-25 10:17:30.478782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.403 [2024-07-25 10:17:30.478823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.403 qpair failed and we were unable to recover it. 
00:28:45.403 [2024-07-25 10:17:30.478996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.404 [2024-07-25 10:17:30.479039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.404 qpair failed and we were unable to recover it. 00:28:45.404 [2024-07-25 10:17:30.479231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.404 [2024-07-25 10:17:30.479275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.404 qpair failed and we were unable to recover it. 00:28:45.404 [2024-07-25 10:17:30.479441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.404 [2024-07-25 10:17:30.479486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.404 qpair failed and we were unable to recover it. 00:28:45.404 [2024-07-25 10:17:30.479689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.404 [2024-07-25 10:17:30.479732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.404 qpair failed and we were unable to recover it. 00:28:45.404 [2024-07-25 10:17:30.479939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.404 [2024-07-25 10:17:30.479983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.404 qpair failed and we were unable to recover it. 00:28:45.404 [2024-07-25 10:17:30.480190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.404 [2024-07-25 10:17:30.480234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.404 qpair failed and we were unable to recover it. 00:28:45.404 [2024-07-25 10:17:30.480439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.404 [2024-07-25 10:17:30.480466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.404 qpair failed and we were unable to recover it. 
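The block above is one failure repeated: errno 111 on Linux is ECONNREFUSED, meaning nothing is accepting TCP connections at 10.0.0.2 on port 4420 (the standard NVMe/TCP port) while the initiator keeps retrying within the same millisecond. Below is a minimal POSIX sketch of the call that posix_sock_create() is reporting on; the address and port come from the log, everything else is ordinary socket boilerplate and not SPDK code. Run against a reachable host with no listener on the port and connect() fails with the same errno 111.

/* connect_refused.c - reproduce errno 111 against a port with no listener */
#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    /* Address and port taken from the log lines above. */
    struct sockaddr_in addr = { .sin_family = AF_INET,
                                .sin_port = htons(4420) };
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }
    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0) {
        /* With no NVMe/TCP target listening this prints:
         *   connect() failed, errno = 111 (Connection refused) */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }
    close(fd);
    return 0;
}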
00:28:45.404 Read completed with error (sct=0, sc=8) 00:28:45.404 starting I/O failed 00:28:45.404 Read completed with error (sct=0, sc=8) 00:28:45.404 starting I/O failed 00:28:45.404 Read completed with error (sct=0, sc=8) 00:28:45.404 starting I/O failed 00:28:45.404 Read completed with error (sct=0, sc=8) 00:28:45.404 starting I/O failed 00:28:45.404 Read completed with error (sct=0, sc=8) 00:28:45.404 starting I/O failed 00:28:45.404 Read completed with error (sct=0, sc=8) 00:28:45.404 starting I/O failed 00:28:45.404 Read completed with error (sct=0, sc=8) 00:28:45.404 starting I/O failed 00:28:45.404 Write completed with error (sct=0, sc=8) 00:28:45.404 starting I/O failed 00:28:45.404 Write completed with error (sct=0, sc=8) 00:28:45.404 starting I/O failed 00:28:45.404 Write completed with error (sct=0, sc=8) 00:28:45.404 starting I/O failed 00:28:45.404 Read completed with error (sct=0, sc=8) 00:28:45.404 starting I/O failed 00:28:45.404 Read completed with error (sct=0, sc=8) 00:28:45.404 starting I/O failed 00:28:45.404 Write completed with error (sct=0, sc=8) 00:28:45.404 starting I/O failed 00:28:45.404 Read completed with error (sct=0, sc=8) 00:28:45.404 starting I/O failed 00:28:45.404 Write completed with error (sct=0, sc=8) 00:28:45.404 starting I/O failed 00:28:45.404 Write completed with error (sct=0, sc=8) 00:28:45.404 starting I/O failed 00:28:45.404 Read completed with error (sct=0, sc=8) 00:28:45.404 starting I/O failed 00:28:45.404 Read completed with error (sct=0, sc=8) 00:28:45.404 starting I/O failed 00:28:45.404 Read completed with error (sct=0, sc=8) 00:28:45.404 starting I/O failed 00:28:45.404 Read completed with error (sct=0, sc=8) 00:28:45.404 starting I/O failed 00:28:45.404 Read completed with error (sct=0, sc=8) 00:28:45.404 starting I/O failed 00:28:45.404 Read completed with error (sct=0, sc=8) 00:28:45.404 starting I/O failed 00:28:45.404 Read completed with error (sct=0, sc=8) 00:28:45.404 starting I/O failed 00:28:45.404 Write completed with error (sct=0, sc=8) 00:28:45.404 starting I/O failed 00:28:45.404 Read completed with error (sct=0, sc=8) 00:28:45.404 starting I/O failed 00:28:45.404 Read completed with error (sct=0, sc=8) 00:28:45.404 starting I/O failed 00:28:45.404 Read completed with error (sct=0, sc=8) 00:28:45.404 starting I/O failed 00:28:45.404 Write completed with error (sct=0, sc=8) 00:28:45.404 starting I/O failed 00:28:45.404 Read completed with error (sct=0, sc=8) 00:28:45.404 starting I/O failed 00:28:45.404 Write completed with error (sct=0, sc=8) 00:28:45.404 starting I/O failed 00:28:45.404 Read completed with error (sct=0, sc=8) 00:28:45.404 starting I/O failed 00:28:45.404 Read completed with error (sct=0, sc=8) 00:28:45.404 starting I/O failed 00:28:45.404 [2024-07-25 10:17:30.480826] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:28:45.404 Read completed with error (sct=0, sc=8) 00:28:45.404 starting I/O failed 00:28:45.404 Read completed with error (sct=0, sc=8) 00:28:45.404 starting I/O failed 00:28:45.404 Read completed with error (sct=0, sc=8) 00:28:45.404 starting I/O failed 00:28:45.404 Read completed with error (sct=0, sc=8) 00:28:45.404 starting I/O failed 00:28:45.404 Read completed with error (sct=0, sc=8) 00:28:45.404 starting I/O failed 00:28:45.404 Read completed with error (sct=0, sc=8) 00:28:45.404 starting I/O failed 00:28:45.404 Read completed with error (sct=0, sc=8) 00:28:45.404 starting I/O failed 00:28:45.404 Read completed with error (sct=0, sc=8) 00:28:45.404 starting I/O failed 00:28:45.404 Read completed with error (sct=0, sc=8) 00:28:45.404 starting I/O failed 00:28:45.404 Read completed with error (sct=0, sc=8) 00:28:45.404 starting I/O failed 00:28:45.404 Read completed with error (sct=0, sc=8) 00:28:45.404 starting I/O failed 00:28:45.404 Write completed with error (sct=0, sc=8) 00:28:45.404 starting I/O failed 00:28:45.404 Write completed with error (sct=0, sc=8) 00:28:45.404 starting I/O failed 00:28:45.404 Read completed with error (sct=0, sc=8) 00:28:45.404 starting I/O failed 00:28:45.404 Read completed with error (sct=0, sc=8) 00:28:45.404 starting I/O failed 00:28:45.404 Read completed with error (sct=0, sc=8) 00:28:45.404 starting I/O failed 00:28:45.404 Read completed with error (sct=0, sc=8) 00:28:45.404 starting I/O failed 00:28:45.404 Read completed with error (sct=0, sc=8) 00:28:45.404 starting I/O failed 00:28:45.404 Write completed with error (sct=0, sc=8) 00:28:45.404 starting I/O failed 00:28:45.404 Read completed with error (sct=0, sc=8) 00:28:45.404 starting I/O failed 00:28:45.404 Write completed with error (sct=0, sc=8) 00:28:45.404 starting I/O failed 00:28:45.404 Read completed with error (sct=0, sc=8) 00:28:45.405 starting I/O failed 00:28:45.405 Read completed with error (sct=0, sc=8) 00:28:45.405 starting I/O failed 00:28:45.405 Read completed with error (sct=0, sc=8) 00:28:45.405 starting I/O failed 00:28:45.405 Read completed with error (sct=0, sc=8) 00:28:45.405 starting I/O failed 00:28:45.405 Read completed with error (sct=0, sc=8) 00:28:45.405 starting I/O failed 00:28:45.405 Read completed with error (sct=0, sc=8) 00:28:45.405 starting I/O failed 00:28:45.405 Write completed with error (sct=0, sc=8) 00:28:45.405 starting I/O failed 00:28:45.405 Read completed with error (sct=0, sc=8) 00:28:45.405 starting I/O failed 00:28:45.405 Write completed with error (sct=0, sc=8) 00:28:45.405 starting I/O failed 00:28:45.405 Write completed with error (sct=0, sc=8) 00:28:45.405 starting I/O failed 00:28:45.405 Read completed with error (sct=0, sc=8) 00:28:45.405 starting I/O failed 00:28:45.405 [2024-07-25 10:17:30.481166] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:45.405 [2024-07-25 10:17:30.481401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.405 [2024-07-25 10:17:30.481458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff98000b90 with addr=10.0.0.2, port=4420 00:28:45.405 qpair failed and we were unable to recover it. 00:28:45.405 [2024-07-25 10:17:30.481625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.405 [2024-07-25 10:17:30.481657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff98000b90 with addr=10.0.0.2, port=4420 00:28:45.405 qpair failed and we were unable to recover it. 00:28:45.405 [2024-07-25 10:17:30.481836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.405 [2024-07-25 10:17:30.481866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff98000b90 with addr=10.0.0.2, port=4420 00:28:45.405 qpair failed and we were unable to recover it.
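At this point the failure moves from connection establishment to in-flight I/O: two queue pairs (qpair id 3, then qpair id 1) are torn down, spdk_nvme_qpair_process_completions() reports CQ transport error -6 (-ENXIO, "No such device or address") for each, and all 32 outstanding commands per qpair complete in error with sct=0, sc=8. Reading that status pair against the NVMe base specification, status code type 0 is Generic Command Status and code 8 (08h) is Command Aborted due to SQ Deletion, consistent with commands being failed back while their submission queue is destroyed. The decoder sketch below covers only the codes relevant to this log and is illustrative, not SPDK's own status tables:

/* decode_status.c - name the (sct, sc) pairs seen in this log */
#include <stdio.h>

static const char *generic_sc_name(int sc)
{
    /* Subset of the NVMe Generic Command Status table (sct=0). */
    switch (sc) {
    case 0x00: return "Successful Completion";
    case 0x04: return "Data Transfer Error";
    case 0x06: return "Internal Error";
    case 0x07: return "Command Abort Requested";
    case 0x08: return "Command Aborted due to SQ Deletion";
    default:   return "(not decoded here)";
    }
}

int main(void)
{
    int sct = 0, sc = 8; /* values from the completions above */
    if (sct == 0)
        printf("sct=%d, sc=%d -> %s\n", sct, sc, generic_sc_name(sc));
    return 0;
}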
00:28:45.405 [2024-07-25 10:17:30.482070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.405 [2024-07-25 10:17:30.482100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff98000b90 with addr=10.0.0.2, port=4420 00:28:45.405 qpair failed and we were unable to recover it. 00:28:45.405 [2024-07-25 10:17:30.482325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.405 [2024-07-25 10:17:30.482375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff98000b90 with addr=10.0.0.2, port=4420 00:28:45.405 qpair failed and we were unable to recover it. 00:28:45.405 [2024-07-25 10:17:30.482539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.405 [2024-07-25 10:17:30.482566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff98000b90 with addr=10.0.0.2, port=4420 00:28:45.405 qpair failed and we were unable to recover it. 00:28:45.405 [2024-07-25 10:17:30.482755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.405 [2024-07-25 10:17:30.482781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff98000b90 with addr=10.0.0.2, port=4420 00:28:45.405 qpair failed and we were unable to recover it. 00:28:45.405 [2024-07-25 10:17:30.482948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.405 [2024-07-25 10:17:30.482977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff98000b90 with addr=10.0.0.2, port=4420 00:28:45.405 qpair failed and we were unable to recover it. 00:28:45.405 [2024-07-25 10:17:30.483110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.405 [2024-07-25 10:17:30.483140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff98000b90 with addr=10.0.0.2, port=4420 00:28:45.405 qpair failed and we were unable to recover it. 00:28:45.405 [2024-07-25 10:17:30.483327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.405 [2024-07-25 10:17:30.483357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff98000b90 with addr=10.0.0.2, port=4420 00:28:45.405 qpair failed and we were unable to recover it. 00:28:45.405 [2024-07-25 10:17:30.483513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.405 [2024-07-25 10:17:30.483541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff98000b90 with addr=10.0.0.2, port=4420 00:28:45.405 qpair failed and we were unable to recover it. 00:28:45.405 [2024-07-25 10:17:30.483741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.405 [2024-07-25 10:17:30.483771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff98000b90 with addr=10.0.0.2, port=4420 00:28:45.405 qpair failed and we were unable to recover it. 00:28:45.405 [2024-07-25 10:17:30.483925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.405 [2024-07-25 10:17:30.483955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff98000b90 with addr=10.0.0.2, port=4420 00:28:45.405 qpair failed and we were unable to recover it. 
00:28:45.405 [2024-07-25 10:17:30.484167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.405 [2024-07-25 10:17:30.484224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff98000b90 with addr=10.0.0.2, port=4420 00:28:45.405 qpair failed and we were unable to recover it. 00:28:45.405 [2024-07-25 10:17:30.484420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.405 [2024-07-25 10:17:30.484453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff98000b90 with addr=10.0.0.2, port=4420 00:28:45.405 qpair failed and we were unable to recover it. 00:28:45.405 [2024-07-25 10:17:30.484613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.405 [2024-07-25 10:17:30.484640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff98000b90 with addr=10.0.0.2, port=4420 00:28:45.405 qpair failed and we were unable to recover it. 00:28:45.405 [2024-07-25 10:17:30.484814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.405 [2024-07-25 10:17:30.484844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff98000b90 with addr=10.0.0.2, port=4420 00:28:45.405 qpair failed and we were unable to recover it. 00:28:45.405 [2024-07-25 10:17:30.485061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.405 [2024-07-25 10:17:30.485111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff98000b90 with addr=10.0.0.2, port=4420 00:28:45.405 qpair failed and we were unable to recover it. 00:28:45.405 [2024-07-25 10:17:30.485328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.405 [2024-07-25 10:17:30.485358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff98000b90 with addr=10.0.0.2, port=4420 00:28:45.405 qpair failed and we were unable to recover it. 00:28:45.405 [2024-07-25 10:17:30.485517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.405 [2024-07-25 10:17:30.485544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff98000b90 with addr=10.0.0.2, port=4420 00:28:45.405 qpair failed and we were unable to recover it. 00:28:45.405 [2024-07-25 10:17:30.485731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.405 [2024-07-25 10:17:30.485760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff98000b90 with addr=10.0.0.2, port=4420 00:28:45.405 qpair failed and we were unable to recover it. 00:28:45.405 [2024-07-25 10:17:30.485971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.405 [2024-07-25 10:17:30.486000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff98000b90 with addr=10.0.0.2, port=4420 00:28:45.405 qpair failed and we were unable to recover it. 00:28:45.405 [2024-07-25 10:17:30.486173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.405 [2024-07-25 10:17:30.486230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff98000b90 with addr=10.0.0.2, port=4420 00:28:45.405 qpair failed and we were unable to recover it. 
00:28:45.405 [2024-07-25 10:17:30.486401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.405 [2024-07-25 10:17:30.486438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff98000b90 with addr=10.0.0.2, port=4420 00:28:45.405 qpair failed and we were unable to recover it. 00:28:45.405 [2024-07-25 10:17:30.486676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.405 [2024-07-25 10:17:30.486717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff98000b90 with addr=10.0.0.2, port=4420 00:28:45.406 qpair failed and we were unable to recover it. 00:28:45.406 [2024-07-25 10:17:30.486887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.406 [2024-07-25 10:17:30.486916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff98000b90 with addr=10.0.0.2, port=4420 00:28:45.406 qpair failed and we were unable to recover it. 00:28:45.406 [2024-07-25 10:17:30.487079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.406 [2024-07-25 10:17:30.487108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff98000b90 with addr=10.0.0.2, port=4420 00:28:45.406 qpair failed and we were unable to recover it. 00:28:45.406 [2024-07-25 10:17:30.487287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.406 [2024-07-25 10:17:30.487316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff98000b90 with addr=10.0.0.2, port=4420 00:28:45.406 qpair failed and we were unable to recover it. 00:28:45.406 [2024-07-25 10:17:30.487519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.406 [2024-07-25 10:17:30.487547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff98000b90 with addr=10.0.0.2, port=4420 00:28:45.406 qpair failed and we were unable to recover it. 00:28:45.406 [2024-07-25 10:17:30.487743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.406 [2024-07-25 10:17:30.487772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff98000b90 with addr=10.0.0.2, port=4420 00:28:45.406 qpair failed and we were unable to recover it. 00:28:45.406 [2024-07-25 10:17:30.487955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.406 [2024-07-25 10:17:30.487982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff98000b90 with addr=10.0.0.2, port=4420 00:28:45.406 qpair failed and we were unable to recover it. 00:28:45.406 [2024-07-25 10:17:30.488157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.406 [2024-07-25 10:17:30.488209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff98000b90 with addr=10.0.0.2, port=4420 00:28:45.406 qpair failed and we were unable to recover it. 00:28:45.406 [2024-07-25 10:17:30.488351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.406 [2024-07-25 10:17:30.488380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff98000b90 with addr=10.0.0.2, port=4420 00:28:45.406 qpair failed and we were unable to recover it. 
00:28:45.406 [2024-07-25 10:17:30.488533] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18a0b00 is same with the state(5) to be set 00:28:45.406 [2024-07-25 10:17:30.488739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.406 [2024-07-25 10:17:30.488780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.406 qpair failed and we were unable to recover it. 00:28:45.406 [2024-07-25 10:17:30.489041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.406 [2024-07-25 10:17:30.489070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.406 qpair failed and we were unable to recover it. 00:28:45.406 [2024-07-25 10:17:30.489299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.406 [2024-07-25 10:17:30.489350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.406 qpair failed and we were unable to recover it. 00:28:45.406 [2024-07-25 10:17:30.489527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.406 [2024-07-25 10:17:30.489555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.406 qpair failed and we were unable to recover it. 00:28:45.406 [2024-07-25 10:17:30.489696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.406 [2024-07-25 10:17:30.489737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.406 qpair failed and we were unable to recover it. 00:28:45.406 [2024-07-25 10:17:30.489932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.406 [2024-07-25 10:17:30.489976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.406 qpair failed and we were unable to recover it. 00:28:45.406 [2024-07-25 10:17:30.490192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.406 [2024-07-25 10:17:30.490249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.406 qpair failed and we were unable to recover it. 00:28:45.406 [2024-07-25 10:17:30.490460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.406 [2024-07-25 10:17:30.490487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.406 qpair failed and we were unable to recover it. 00:28:45.406 [2024-07-25 10:17:30.490662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.406 [2024-07-25 10:17:30.490689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.406 qpair failed and we were unable to recover it. 
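The nvme_tcp_qpair_set_recv_state() line above is a different diagnostic from the connect() failures surrounding it: the qpair's PDU receive state machine is being asked to enter the state (5) it is already in, which typically shows up when error/teardown paths run more than once for the same qpair. The guard producing a message of that shape looks roughly like the sketch below; the enum names and their numbering are assumptions for illustration, not SPDK's actual definitions:

/* recv_state_guard.c - illustrative same-state guard */
#include <stdio.h>

/* Hypothetical receive-state enum; only the guard pattern is the point. */
enum recv_state {
    RECV_STATE_AWAIT_PDU_READY,
    RECV_STATE_AWAIT_PDU_CH,
    RECV_STATE_AWAIT_PDU_PSH,
    RECV_STATE_AWAIT_PDU_PAYLOAD,
    RECV_STATE_QUIESCING,
    RECV_STATE_ERROR,          /* = 5 with this assumed ordering */
};

struct tqpair { enum recv_state recv_state; };

static void set_recv_state(struct tqpair *q, enum recv_state state)
{
    if (q->recv_state == state) {
        /* Same shape as the log line: setting a state that is
         * already current is reported but otherwise a no-op. */
        fprintf(stderr,
                "The recv state of tqpair=%p is same with the state(%d) to be set\n",
                (void *)q, state);
        return;
    }
    q->recv_state = state;
}

int main(void)
{
    struct tqpair q = { .recv_state = RECV_STATE_ERROR };
    set_recv_state(&q, RECV_STATE_ERROR); /* triggers the diagnostic */
    return 0;
}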
00:28:45.406 [2024-07-25 10:17:30.490866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.406 [2024-07-25 10:17:30.490896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.406 qpair failed and we were unable to recover it. 00:28:45.406 [2024-07-25 10:17:30.491077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.406 [2024-07-25 10:17:30.491124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.406 qpair failed and we were unable to recover it. 00:28:45.406 [2024-07-25 10:17:30.491318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.406 [2024-07-25 10:17:30.491344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.406 qpair failed and we were unable to recover it. 00:28:45.406 [2024-07-25 10:17:30.491544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.406 [2024-07-25 10:17:30.491571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.406 qpair failed and we were unable to recover it. 00:28:45.406 [2024-07-25 10:17:30.491725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.406 [2024-07-25 10:17:30.491768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.406 qpair failed and we were unable to recover it. 00:28:45.406 [2024-07-25 10:17:30.491956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.406 [2024-07-25 10:17:30.492000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.406 qpair failed and we were unable to recover it. 00:28:45.406 [2024-07-25 10:17:30.492218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.406 [2024-07-25 10:17:30.492269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.406 qpair failed and we were unable to recover it. 00:28:45.406 [2024-07-25 10:17:30.492455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.406 [2024-07-25 10:17:30.492482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.406 qpair failed and we were unable to recover it. 00:28:45.406 [2024-07-25 10:17:30.492667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.406 [2024-07-25 10:17:30.492715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.406 qpair failed and we were unable to recover it. 00:28:45.406 [2024-07-25 10:17:30.492876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.406 [2024-07-25 10:17:30.492918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.406 qpair failed and we were unable to recover it. 
00:28:45.406 [2024-07-25 10:17:30.493107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.406 [2024-07-25 10:17:30.493151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.406 qpair failed and we were unable to recover it. 00:28:45.406 [2024-07-25 10:17:30.493350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.406 [2024-07-25 10:17:30.493376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.406 qpair failed and we were unable to recover it. 00:28:45.406 [2024-07-25 10:17:30.493571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.406 [2024-07-25 10:17:30.493599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.406 qpair failed and we were unable to recover it. 00:28:45.406 [2024-07-25 10:17:30.493777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.406 [2024-07-25 10:17:30.493821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.406 qpair failed and we were unable to recover it. 00:28:45.406 [2024-07-25 10:17:30.494031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.406 [2024-07-25 10:17:30.494075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.406 qpair failed and we were unable to recover it. 00:28:45.406 [2024-07-25 10:17:30.494274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.406 [2024-07-25 10:17:30.494299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.406 qpair failed and we were unable to recover it. 00:28:45.406 [2024-07-25 10:17:30.494493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.406 [2024-07-25 10:17:30.494525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.406 qpair failed and we were unable to recover it. 00:28:45.406 [2024-07-25 10:17:30.494747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.406 [2024-07-25 10:17:30.494790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.406 qpair failed and we were unable to recover it. 00:28:45.406 [2024-07-25 10:17:30.494967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.406 [2024-07-25 10:17:30.495010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.406 qpair failed and we were unable to recover it. 00:28:45.406 [2024-07-25 10:17:30.495236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.406 [2024-07-25 10:17:30.495288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.407 qpair failed and we were unable to recover it. 
00:28:45.407 [2024-07-25 10:17:30.495443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.407 [2024-07-25 10:17:30.495486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.407 qpair failed and we were unable to recover it. 00:28:45.407 [2024-07-25 10:17:30.495621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.407 [2024-07-25 10:17:30.495666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.407 qpair failed and we were unable to recover it. 00:28:45.407 [2024-07-25 10:17:30.495887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.407 [2024-07-25 10:17:30.495929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.407 qpair failed and we were unable to recover it. 00:28:45.407 [2024-07-25 10:17:30.496090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.407 [2024-07-25 10:17:30.496134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.407 qpair failed and we were unable to recover it. 00:28:45.407 [2024-07-25 10:17:30.496317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.407 [2024-07-25 10:17:30.496343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.407 qpair failed and we were unable to recover it. 00:28:45.407 [2024-07-25 10:17:30.496506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.407 [2024-07-25 10:17:30.496550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.407 qpair failed and we were unable to recover it. 00:28:45.407 [2024-07-25 10:17:30.496766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.407 [2024-07-25 10:17:30.496808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.407 qpair failed and we were unable to recover it. 00:28:45.407 [2024-07-25 10:17:30.496984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.407 [2024-07-25 10:17:30.497027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.407 qpair failed and we were unable to recover it. 00:28:45.407 [2024-07-25 10:17:30.497193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.407 [2024-07-25 10:17:30.497218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.407 qpair failed and we were unable to recover it. 00:28:45.407 [2024-07-25 10:17:30.497372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.407 [2024-07-25 10:17:30.497398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.407 qpair failed and we were unable to recover it. 
00:28:45.407 [2024-07-25 10:17:30.497622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.407 [2024-07-25 10:17:30.497665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.407 qpair failed and we were unable to recover it. 00:28:45.407 [2024-07-25 10:17:30.497887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.407 [2024-07-25 10:17:30.497930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.407 qpair failed and we were unable to recover it. 00:28:45.407 [2024-07-25 10:17:30.498119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.407 [2024-07-25 10:17:30.498162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.407 qpair failed and we were unable to recover it. 00:28:45.407 [2024-07-25 10:17:30.498397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.407 [2024-07-25 10:17:30.498443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.407 qpair failed and we were unable to recover it. 00:28:45.407 [2024-07-25 10:17:30.498690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.407 [2024-07-25 10:17:30.498733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.407 qpair failed and we were unable to recover it. 00:28:45.407 [2024-07-25 10:17:30.498948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.407 [2024-07-25 10:17:30.498993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.407 qpair failed and we were unable to recover it. 00:28:45.407 [2024-07-25 10:17:30.499172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.407 [2024-07-25 10:17:30.499223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.407 qpair failed and we were unable to recover it. 00:28:45.407 [2024-07-25 10:17:30.499363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.407 [2024-07-25 10:17:30.499394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.407 qpair failed and we were unable to recover it. 00:28:45.407 [2024-07-25 10:17:30.499618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.407 [2024-07-25 10:17:30.499645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.407 qpair failed and we were unable to recover it. 00:28:45.407 [2024-07-25 10:17:30.499848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.407 [2024-07-25 10:17:30.499889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.407 qpair failed and we were unable to recover it. 
00:28:45.407 [2024-07-25 10:17:30.500110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.407 [2024-07-25 10:17:30.500157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.407 qpair failed and we were unable to recover it. 00:28:45.407 [2024-07-25 10:17:30.500333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.407 [2024-07-25 10:17:30.500357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.407 qpair failed and we were unable to recover it. 00:28:45.407 [2024-07-25 10:17:30.500540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.407 [2024-07-25 10:17:30.500567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.407 qpair failed and we were unable to recover it. 00:28:45.407 [2024-07-25 10:17:30.500810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.407 [2024-07-25 10:17:30.500856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.407 qpair failed and we were unable to recover it. 00:28:45.407 [2024-07-25 10:17:30.501023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.407 [2024-07-25 10:17:30.501066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.407 qpair failed and we were unable to recover it. 00:28:45.407 [2024-07-25 10:17:30.501222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.407 [2024-07-25 10:17:30.501245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.407 qpair failed and we were unable to recover it. 00:28:45.407 [2024-07-25 10:17:30.501426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.407 [2024-07-25 10:17:30.501457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.407 qpair failed and we were unable to recover it. 00:28:45.407 [2024-07-25 10:17:30.501675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.407 [2024-07-25 10:17:30.501716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.407 qpair failed and we were unable to recover it. 00:28:45.407 [2024-07-25 10:17:30.501917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.407 [2024-07-25 10:17:30.501947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.407 qpair failed and we were unable to recover it. 00:28:45.407 [2024-07-25 10:17:30.502113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.407 [2024-07-25 10:17:30.502143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.407 qpair failed and we were unable to recover it. 
00:28:45.407 [2024-07-25 10:17:30.502294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.407 [2024-07-25 10:17:30.502318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.407 qpair failed and we were unable to recover it. 00:28:45.407 [2024-07-25 10:17:30.502570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.407 [2024-07-25 10:17:30.502597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.407 qpair failed and we were unable to recover it. 00:28:45.407 [2024-07-25 10:17:30.502792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.407 [2024-07-25 10:17:30.502834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.407 qpair failed and we were unable to recover it. 00:28:45.407 [2024-07-25 10:17:30.503027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.408 [2024-07-25 10:17:30.503056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.408 qpair failed and we were unable to recover it. 00:28:45.408 [2024-07-25 10:17:30.503199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.408 [2024-07-25 10:17:30.503224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.408 qpair failed and we were unable to recover it. 00:28:45.408 [2024-07-25 10:17:30.503424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.408 [2024-07-25 10:17:30.503454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.408 qpair failed and we were unable to recover it. 00:28:45.408 [2024-07-25 10:17:30.503670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.408 [2024-07-25 10:17:30.503700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.408 qpair failed and we were unable to recover it. 00:28:45.408 [2024-07-25 10:17:30.503883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.408 [2024-07-25 10:17:30.503927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.408 qpair failed and we were unable to recover it. 00:28:45.408 [2024-07-25 10:17:30.504142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.408 [2024-07-25 10:17:30.504200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.408 qpair failed and we were unable to recover it. 00:28:45.408 [2024-07-25 10:17:30.504354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.408 [2024-07-25 10:17:30.504378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.408 qpair failed and we were unable to recover it. 
00:28:45.408 [2024-07-25 10:17:30.504604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.408 [2024-07-25 10:17:30.504648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.408 qpair failed and we were unable to recover it. 00:28:45.408 [2024-07-25 10:17:30.504819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.408 [2024-07-25 10:17:30.504861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.408 qpair failed and we were unable to recover it. 00:28:45.408 [2024-07-25 10:17:30.505012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.408 [2024-07-25 10:17:30.505059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.408 qpair failed and we were unable to recover it. 00:28:45.408 [2024-07-25 10:17:30.505240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.408 [2024-07-25 10:17:30.505264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.408 qpair failed and we were unable to recover it. 00:28:45.408 [2024-07-25 10:17:30.505520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.408 [2024-07-25 10:17:30.505563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.408 qpair failed and we were unable to recover it. 00:28:45.408 [2024-07-25 10:17:30.505733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.408 [2024-07-25 10:17:30.505776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.408 qpair failed and we were unable to recover it. 00:28:45.408 [2024-07-25 10:17:30.505961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.408 [2024-07-25 10:17:30.506002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.408 qpair failed and we were unable to recover it. 00:28:45.408 [2024-07-25 10:17:30.506204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.408 [2024-07-25 10:17:30.506246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.408 qpair failed and we were unable to recover it. 00:28:45.408 [2024-07-25 10:17:30.506448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.408 [2024-07-25 10:17:30.506474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.408 qpair failed and we were unable to recover it. 00:28:45.408 [2024-07-25 10:17:30.506635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.408 [2024-07-25 10:17:30.506679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.408 qpair failed and we were unable to recover it. 
00:28:45.408 [2024-07-25 10:17:30.506876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.408 [2024-07-25 10:17:30.506919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420
00:28:45.408 qpair failed and we were unable to recover it.
[... the same three-line sequence (connect() failed, errno = 111 / sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it.) repeats continuously from 10:17:30.506876 through 10:17:30.554474; duplicate entries trimmed ...]
00:28:45.694 [2024-07-25 10:17:30.554449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.694 [2024-07-25 10:17:30.554474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420
00:28:45.694 qpair failed and we were unable to recover it.
00:28:45.694 [2024-07-25 10:17:30.554680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.694 [2024-07-25 10:17:30.554705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.694 qpair failed and we were unable to recover it. 00:28:45.694 [2024-07-25 10:17:30.554883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.694 [2024-07-25 10:17:30.554925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.694 qpair failed and we were unable to recover it. 00:28:45.694 [2024-07-25 10:17:30.555107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.694 [2024-07-25 10:17:30.555149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.694 qpair failed and we were unable to recover it. 00:28:45.694 [2024-07-25 10:17:30.555306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.694 [2024-07-25 10:17:30.555329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.694 qpair failed and we were unable to recover it. 00:28:45.694 [2024-07-25 10:17:30.555506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.694 [2024-07-25 10:17:30.555532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.694 qpair failed and we were unable to recover it. 00:28:45.694 [2024-07-25 10:17:30.555753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.694 [2024-07-25 10:17:30.555796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.694 qpair failed and we were unable to recover it. 00:28:45.694 [2024-07-25 10:17:30.555941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.694 [2024-07-25 10:17:30.555986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.694 qpair failed and we were unable to recover it. 00:28:45.694 [2024-07-25 10:17:30.556175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.694 [2024-07-25 10:17:30.556217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.694 qpair failed and we were unable to recover it. 00:28:45.694 [2024-07-25 10:17:30.556456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.694 [2024-07-25 10:17:30.556497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.694 qpair failed and we were unable to recover it. 00:28:45.694 [2024-07-25 10:17:30.556696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.694 [2024-07-25 10:17:30.556739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.694 qpair failed and we were unable to recover it. 
00:28:45.694 [2024-07-25 10:17:30.556981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.694 [2024-07-25 10:17:30.557024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.694 qpair failed and we were unable to recover it. 00:28:45.694 [2024-07-25 10:17:30.557220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.694 [2024-07-25 10:17:30.557262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.694 qpair failed and we were unable to recover it. 00:28:45.694 [2024-07-25 10:17:30.557467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.694 [2024-07-25 10:17:30.557492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.694 qpair failed and we were unable to recover it. 00:28:45.694 [2024-07-25 10:17:30.557652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.694 [2024-07-25 10:17:30.557696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.694 qpair failed and we were unable to recover it. 00:28:45.694 [2024-07-25 10:17:30.557952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.694 [2024-07-25 10:17:30.557994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.694 qpair failed and we were unable to recover it. 00:28:45.694 [2024-07-25 10:17:30.558186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.694 [2024-07-25 10:17:30.558228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.694 qpair failed and we were unable to recover it. 00:28:45.694 [2024-07-25 10:17:30.558434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.694 [2024-07-25 10:17:30.558459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.694 qpair failed and we were unable to recover it. 00:28:45.694 [2024-07-25 10:17:30.558631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.694 [2024-07-25 10:17:30.558656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.694 qpair failed and we were unable to recover it. 00:28:45.694 [2024-07-25 10:17:30.558836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.694 [2024-07-25 10:17:30.558879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.694 qpair failed and we were unable to recover it. 00:28:45.694 [2024-07-25 10:17:30.559041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.694 [2024-07-25 10:17:30.559085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.694 qpair failed and we were unable to recover it. 
00:28:45.694 [2024-07-25 10:17:30.559248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.694 [2024-07-25 10:17:30.559290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.694 qpair failed and we were unable to recover it. 00:28:45.694 [2024-07-25 10:17:30.559456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.694 [2024-07-25 10:17:30.559498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.694 qpair failed and we were unable to recover it. 00:28:45.694 [2024-07-25 10:17:30.559667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.694 [2024-07-25 10:17:30.559710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.694 qpair failed and we were unable to recover it. 00:28:45.694 [2024-07-25 10:17:30.559925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.694 [2024-07-25 10:17:30.559967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.694 qpair failed and we were unable to recover it. 00:28:45.694 [2024-07-25 10:17:30.560154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.694 [2024-07-25 10:17:30.560200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.694 qpair failed and we were unable to recover it. 00:28:45.694 [2024-07-25 10:17:30.560433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.694 [2024-07-25 10:17:30.560458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.694 qpair failed and we were unable to recover it. 00:28:45.694 [2024-07-25 10:17:30.560684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.694 [2024-07-25 10:17:30.560714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.694 qpair failed and we were unable to recover it. 00:28:45.694 [2024-07-25 10:17:30.560899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.694 [2024-07-25 10:17:30.560942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.694 qpair failed and we were unable to recover it. 00:28:45.694 [2024-07-25 10:17:30.561137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.694 [2024-07-25 10:17:30.561180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.694 qpair failed and we were unable to recover it. 00:28:45.694 [2024-07-25 10:17:30.561368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.695 [2024-07-25 10:17:30.561392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.695 qpair failed and we were unable to recover it. 
00:28:45.695 [2024-07-25 10:17:30.561589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.695 [2024-07-25 10:17:30.561633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.695 qpair failed and we were unable to recover it. 00:28:45.695 [2024-07-25 10:17:30.561810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.695 [2024-07-25 10:17:30.561853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.695 qpair failed and we were unable to recover it. 00:28:45.695 [2024-07-25 10:17:30.562037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.695 [2024-07-25 10:17:30.562079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.695 qpair failed and we were unable to recover it. 00:28:45.695 [2024-07-25 10:17:30.562265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.695 [2024-07-25 10:17:30.562307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.695 qpair failed and we were unable to recover it. 00:28:45.695 [2024-07-25 10:17:30.562506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.695 [2024-07-25 10:17:30.562536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.695 qpair failed and we were unable to recover it. 00:28:45.695 [2024-07-25 10:17:30.562754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.695 [2024-07-25 10:17:30.562797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.695 qpair failed and we were unable to recover it. 00:28:45.695 [2024-07-25 10:17:30.562945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.695 [2024-07-25 10:17:30.562989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.695 qpair failed and we were unable to recover it. 00:28:45.695 [2024-07-25 10:17:30.563181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.695 [2024-07-25 10:17:30.563211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.695 qpair failed and we were unable to recover it. 00:28:45.695 [2024-07-25 10:17:30.563439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.695 [2024-07-25 10:17:30.563464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.695 qpair failed and we were unable to recover it. 00:28:45.695 [2024-07-25 10:17:30.563682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.695 [2024-07-25 10:17:30.563712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.695 qpair failed and we were unable to recover it. 
00:28:45.695 [2024-07-25 10:17:30.563864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.695 [2024-07-25 10:17:30.563906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.695 qpair failed and we were unable to recover it. 00:28:45.695 [2024-07-25 10:17:30.564107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.695 [2024-07-25 10:17:30.564149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.695 qpair failed and we were unable to recover it. 00:28:45.695 [2024-07-25 10:17:30.564372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.695 [2024-07-25 10:17:30.564396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.695 qpair failed and we were unable to recover it. 00:28:45.695 [2024-07-25 10:17:30.564615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.695 [2024-07-25 10:17:30.564642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.695 qpair failed and we were unable to recover it. 00:28:45.695 [2024-07-25 10:17:30.564849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.695 [2024-07-25 10:17:30.564892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.695 qpair failed and we were unable to recover it. 00:28:45.695 [2024-07-25 10:17:30.565087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.695 [2024-07-25 10:17:30.565129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.695 qpair failed and we were unable to recover it. 00:28:45.695 [2024-07-25 10:17:30.565306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.695 [2024-07-25 10:17:30.565330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.695 qpair failed and we were unable to recover it. 00:28:45.695 [2024-07-25 10:17:30.565547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.695 [2024-07-25 10:17:30.565591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.695 qpair failed and we were unable to recover it. 00:28:45.695 [2024-07-25 10:17:30.565786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.695 [2024-07-25 10:17:30.565829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.695 qpair failed and we were unable to recover it. 00:28:45.695 [2024-07-25 10:17:30.565992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.695 [2024-07-25 10:17:30.566033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.695 qpair failed and we were unable to recover it. 
00:28:45.695 [2024-07-25 10:17:30.566183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.695 [2024-07-25 10:17:30.566225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.695 qpair failed and we were unable to recover it. 00:28:45.695 [2024-07-25 10:17:30.566401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.695 [2024-07-25 10:17:30.566425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.695 qpair failed and we were unable to recover it. 00:28:45.695 [2024-07-25 10:17:30.566608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.695 [2024-07-25 10:17:30.566652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.695 qpair failed and we were unable to recover it. 00:28:45.695 [2024-07-25 10:17:30.566859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.695 [2024-07-25 10:17:30.566889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.695 qpair failed and we were unable to recover it. 00:28:45.695 [2024-07-25 10:17:30.567068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.695 [2024-07-25 10:17:30.567109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.695 qpair failed and we were unable to recover it. 00:28:45.695 [2024-07-25 10:17:30.567293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.696 [2024-07-25 10:17:30.567316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.696 qpair failed and we were unable to recover it. 00:28:45.696 [2024-07-25 10:17:30.567520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.696 [2024-07-25 10:17:30.567546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.696 qpair failed and we were unable to recover it. 00:28:45.696 [2024-07-25 10:17:30.567757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.696 [2024-07-25 10:17:30.567800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.696 qpair failed and we were unable to recover it. 00:28:45.696 [2024-07-25 10:17:30.567965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.696 [2024-07-25 10:17:30.568008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.696 qpair failed and we were unable to recover it. 00:28:45.696 [2024-07-25 10:17:30.568200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.696 [2024-07-25 10:17:30.568242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.696 qpair failed and we were unable to recover it. 
00:28:45.696 [2024-07-25 10:17:30.568390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.696 [2024-07-25 10:17:30.568414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.696 qpair failed and we were unable to recover it. 00:28:45.696 [2024-07-25 10:17:30.568572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.696 [2024-07-25 10:17:30.568601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.696 qpair failed and we were unable to recover it. 00:28:45.696 [2024-07-25 10:17:30.568785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.696 [2024-07-25 10:17:30.568828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.696 qpair failed and we were unable to recover it. 00:28:45.696 [2024-07-25 10:17:30.569018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.696 [2024-07-25 10:17:30.569042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.696 qpair failed and we were unable to recover it. 00:28:45.696 [2024-07-25 10:17:30.569217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.696 [2024-07-25 10:17:30.569244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.696 qpair failed and we were unable to recover it. 00:28:45.696 [2024-07-25 10:17:30.569459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.696 [2024-07-25 10:17:30.569484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.696 qpair failed and we were unable to recover it. 00:28:45.696 [2024-07-25 10:17:30.569661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.696 [2024-07-25 10:17:30.569705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.696 qpair failed and we were unable to recover it. 00:28:45.696 [2024-07-25 10:17:30.569895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.696 [2024-07-25 10:17:30.569937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.696 qpair failed and we were unable to recover it. 00:28:45.696 [2024-07-25 10:17:30.570146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.696 [2024-07-25 10:17:30.570188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.696 qpair failed and we were unable to recover it. 00:28:45.696 [2024-07-25 10:17:30.570333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.696 [2024-07-25 10:17:30.570357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.696 qpair failed and we were unable to recover it. 
00:28:45.696 [2024-07-25 10:17:30.570543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.696 [2024-07-25 10:17:30.570573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.696 qpair failed and we were unable to recover it. 00:28:45.696 [2024-07-25 10:17:30.570773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.696 [2024-07-25 10:17:30.570815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.696 qpair failed and we were unable to recover it. 00:28:45.696 [2024-07-25 10:17:30.570999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.696 [2024-07-25 10:17:30.571043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.696 qpair failed and we were unable to recover it. 00:28:45.696 [2024-07-25 10:17:30.571195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.696 [2024-07-25 10:17:30.571219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.696 qpair failed and we were unable to recover it. 00:28:45.696 [2024-07-25 10:17:30.571423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.696 [2024-07-25 10:17:30.571452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.696 qpair failed and we were unable to recover it. 00:28:45.696 [2024-07-25 10:17:30.571619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.696 [2024-07-25 10:17:30.571672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.696 qpair failed and we were unable to recover it. 00:28:45.696 [2024-07-25 10:17:30.571853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.696 [2024-07-25 10:17:30.571895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.696 qpair failed and we were unable to recover it. 00:28:45.696 [2024-07-25 10:17:30.572084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.696 [2024-07-25 10:17:30.572126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.696 qpair failed and we were unable to recover it. 00:28:45.696 [2024-07-25 10:17:30.572352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.696 [2024-07-25 10:17:30.572376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.696 qpair failed and we were unable to recover it. 00:28:45.696 [2024-07-25 10:17:30.572599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.696 [2024-07-25 10:17:30.572642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.696 qpair failed and we were unable to recover it. 
00:28:45.696 [2024-07-25 10:17:30.572831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.696 [2024-07-25 10:17:30.572873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.696 qpair failed and we were unable to recover it. 00:28:45.696 [2024-07-25 10:17:30.573022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.696 [2024-07-25 10:17:30.573066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.696 qpair failed and we were unable to recover it. 00:28:45.696 [2024-07-25 10:17:30.573205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.696 [2024-07-25 10:17:30.573234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.696 qpair failed and we were unable to recover it. 00:28:45.696 [2024-07-25 10:17:30.573436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.696 [2024-07-25 10:17:30.573462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.696 qpair failed and we were unable to recover it. 00:28:45.696 [2024-07-25 10:17:30.573636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.696 [2024-07-25 10:17:30.573682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.696 qpair failed and we were unable to recover it. 00:28:45.696 [2024-07-25 10:17:30.573871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.696 [2024-07-25 10:17:30.573914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.696 qpair failed and we were unable to recover it. 00:28:45.696 [2024-07-25 10:17:30.574100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.696 [2024-07-25 10:17:30.574142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.696 qpair failed and we were unable to recover it. 00:28:45.696 [2024-07-25 10:17:30.574345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.696 [2024-07-25 10:17:30.574369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.696 qpair failed and we were unable to recover it. 00:28:45.696 [2024-07-25 10:17:30.574567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.696 [2024-07-25 10:17:30.574611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.696 qpair failed and we were unable to recover it. 00:28:45.696 [2024-07-25 10:17:30.574805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.696 [2024-07-25 10:17:30.574849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.696 qpair failed and we were unable to recover it. 
00:28:45.696 [2024-07-25 10:17:30.574998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.696 [2024-07-25 10:17:30.575041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.697 qpair failed and we were unable to recover it. 00:28:45.697 [2024-07-25 10:17:30.575275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.697 [2024-07-25 10:17:30.575319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.697 qpair failed and we were unable to recover it. 00:28:45.697 [2024-07-25 10:17:30.575539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.697 [2024-07-25 10:17:30.575570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.697 qpair failed and we were unable to recover it. 00:28:45.697 [2024-07-25 10:17:30.575824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.697 [2024-07-25 10:17:30.575866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.697 qpair failed and we were unable to recover it. 00:28:45.697 [2024-07-25 10:17:30.576011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.697 [2024-07-25 10:17:30.576054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.697 qpair failed and we were unable to recover it. 00:28:45.697 [2024-07-25 10:17:30.576226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.697 [2024-07-25 10:17:30.576250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.697 qpair failed and we were unable to recover it. 00:28:45.697 [2024-07-25 10:17:30.576457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.697 [2024-07-25 10:17:30.576490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.697 qpair failed and we were unable to recover it. 00:28:45.697 [2024-07-25 10:17:30.576639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.697 [2024-07-25 10:17:30.576682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.697 qpair failed and we were unable to recover it. 00:28:45.697 [2024-07-25 10:17:30.576895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.697 [2024-07-25 10:17:30.576938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.697 qpair failed and we were unable to recover it. 00:28:45.697 [2024-07-25 10:17:30.577139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.697 [2024-07-25 10:17:30.577182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.697 qpair failed and we were unable to recover it. 
00:28:45.697 [2024-07-25 10:17:30.577365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.697 [2024-07-25 10:17:30.577389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.697 qpair failed and we were unable to recover it. 00:28:45.697 [2024-07-25 10:17:30.577607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.697 [2024-07-25 10:17:30.577632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.697 qpair failed and we were unable to recover it. 00:28:45.697 [2024-07-25 10:17:30.577838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.697 [2024-07-25 10:17:30.577881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.697 qpair failed and we were unable to recover it. 00:28:45.697 [2024-07-25 10:17:30.578088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.697 [2024-07-25 10:17:30.578131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.697 qpair failed and we were unable to recover it. 00:28:45.697 [2024-07-25 10:17:30.578260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.697 [2024-07-25 10:17:30.578305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.697 qpair failed and we were unable to recover it. 00:28:45.697 [2024-07-25 10:17:30.578522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.697 [2024-07-25 10:17:30.578552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.697 qpair failed and we were unable to recover it. 00:28:45.697 [2024-07-25 10:17:30.578761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.697 [2024-07-25 10:17:30.578806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.697 qpair failed and we were unable to recover it. 00:28:45.697 [2024-07-25 10:17:30.578973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.697 [2024-07-25 10:17:30.579016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.697 qpair failed and we were unable to recover it. 00:28:45.697 [2024-07-25 10:17:30.579185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.697 [2024-07-25 10:17:30.579228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.697 qpair failed and we were unable to recover it. 00:28:45.697 [2024-07-25 10:17:30.579448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.697 [2024-07-25 10:17:30.579474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.697 qpair failed and we were unable to recover it. 
00:28:45.697 [2024-07-25 10:17:30.579660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.697 [2024-07-25 10:17:30.579693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.697 qpair failed and we were unable to recover it. 00:28:45.697 [2024-07-25 10:17:30.579859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.697 [2024-07-25 10:17:30.579903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.697 qpair failed and we were unable to recover it. 00:28:45.697 [2024-07-25 10:17:30.580095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.697 [2024-07-25 10:17:30.580138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.697 qpair failed and we were unable to recover it. 00:28:45.697 [2024-07-25 10:17:30.580316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.697 [2024-07-25 10:17:30.580340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.697 qpair failed and we were unable to recover it. 00:28:45.697 [2024-07-25 10:17:30.580527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.697 [2024-07-25 10:17:30.580557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.697 qpair failed and we were unable to recover it. 00:28:45.697 [2024-07-25 10:17:30.580812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.697 [2024-07-25 10:17:30.580855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.697 qpair failed and we were unable to recover it. 00:28:45.697 [2024-07-25 10:17:30.581020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.697 [2024-07-25 10:17:30.581064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.697 qpair failed and we were unable to recover it. 00:28:45.697 [2024-07-25 10:17:30.581234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.697 [2024-07-25 10:17:30.581257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.697 qpair failed and we were unable to recover it. 00:28:45.697 [2024-07-25 10:17:30.581476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.697 [2024-07-25 10:17:30.581520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.697 qpair failed and we were unable to recover it. 00:28:45.697 [2024-07-25 10:17:30.581662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.697 [2024-07-25 10:17:30.581706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.697 qpair failed and we were unable to recover it. 
00:28:45.697 [2024-07-25 10:17:30.581861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.697 [2024-07-25 10:17:30.581904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.697 qpair failed and we were unable to recover it. 00:28:45.697 [2024-07-25 10:17:30.582107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.697 [2024-07-25 10:17:30.582150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.697 qpair failed and we were unable to recover it. 00:28:45.697 [2024-07-25 10:17:30.582384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.697 [2024-07-25 10:17:30.582408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.697 qpair failed and we were unable to recover it. 00:28:45.697 [2024-07-25 10:17:30.582614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.697 [2024-07-25 10:17:30.582658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.697 qpair failed and we were unable to recover it. 00:28:45.697 [2024-07-25 10:17:30.582863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.697 [2024-07-25 10:17:30.582906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.697 qpair failed and we were unable to recover it. 00:28:45.697 [2024-07-25 10:17:30.583107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.698 [2024-07-25 10:17:30.583137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.698 qpair failed and we were unable to recover it. 00:28:45.698 [2024-07-25 10:17:30.583347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.698 [2024-07-25 10:17:30.583371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.698 qpair failed and we were unable to recover it. 00:28:45.698 [2024-07-25 10:17:30.583586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.698 [2024-07-25 10:17:30.583629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.698 qpair failed and we were unable to recover it. 00:28:45.698 [2024-07-25 10:17:30.583798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.698 [2024-07-25 10:17:30.583840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.698 qpair failed and we were unable to recover it. 00:28:45.698 [2024-07-25 10:17:30.584024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.698 [2024-07-25 10:17:30.584068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.698 qpair failed and we were unable to recover it. 
00:28:45.698 [2024-07-25 10:17:30.584220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.698 [2024-07-25 10:17:30.584244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.698 qpair failed and we were unable to recover it. 00:28:45.698 [2024-07-25 10:17:30.584440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.698 [2024-07-25 10:17:30.584467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.698 qpair failed and we were unable to recover it. 00:28:45.698 [2024-07-25 10:17:30.584663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.698 [2024-07-25 10:17:30.584707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.698 qpair failed and we were unable to recover it. 00:28:45.698 [2024-07-25 10:17:30.584894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.698 [2024-07-25 10:17:30.584936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.698 qpair failed and we were unable to recover it. 00:28:45.698 [2024-07-25 10:17:30.585081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.698 [2024-07-25 10:17:30.585123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.698 qpair failed and we were unable to recover it. 00:28:45.698 [2024-07-25 10:17:30.585287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.698 [2024-07-25 10:17:30.585326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.698 qpair failed and we were unable to recover it. 00:28:45.698 [2024-07-25 10:17:30.585542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.698 [2024-07-25 10:17:30.585586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.698 qpair failed and we were unable to recover it. 00:28:45.698 [2024-07-25 10:17:30.585816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.698 [2024-07-25 10:17:30.585858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.698 qpair failed and we were unable to recover it. 00:28:45.698 [2024-07-25 10:17:30.586045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.698 [2024-07-25 10:17:30.586088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.698 qpair failed and we were unable to recover it. 00:28:45.698 [2024-07-25 10:17:30.586248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.698 [2024-07-25 10:17:30.586271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.698 qpair failed and we were unable to recover it. 
00:28:45.704 [2024-07-25 10:17:30.635540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.704 [2024-07-25 10:17:30.635586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.704 qpair failed and we were unable to recover it. 00:28:45.704 [2024-07-25 10:17:30.635776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.704 [2024-07-25 10:17:30.635819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.704 qpair failed and we were unable to recover it. 00:28:45.704 [2024-07-25 10:17:30.636009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.704 [2024-07-25 10:17:30.636038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.704 qpair failed and we were unable to recover it. 00:28:45.704 [2024-07-25 10:17:30.636220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.704 [2024-07-25 10:17:30.636247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.704 qpair failed and we were unable to recover it. 00:28:45.704 [2024-07-25 10:17:30.636445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.704 [2024-07-25 10:17:30.636477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.704 qpair failed and we were unable to recover it. 00:28:45.704 [2024-07-25 10:17:30.636650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.704 [2024-07-25 10:17:30.636694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.704 qpair failed and we were unable to recover it. 00:28:45.704 [2024-07-25 10:17:30.636855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.704 [2024-07-25 10:17:30.636898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.704 qpair failed and we were unable to recover it. 00:28:45.704 [2024-07-25 10:17:30.637061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.704 [2024-07-25 10:17:30.637104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.704 qpair failed and we were unable to recover it. 00:28:45.704 [2024-07-25 10:17:30.637299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.704 [2024-07-25 10:17:30.637339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.704 qpair failed and we were unable to recover it. 00:28:45.704 [2024-07-25 10:17:30.637512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.704 [2024-07-25 10:17:30.637558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.704 qpair failed and we were unable to recover it. 
00:28:45.704 [2024-07-25 10:17:30.637702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.704 [2024-07-25 10:17:30.637745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.704 qpair failed and we were unable to recover it. 00:28:45.704 [2024-07-25 10:17:30.637946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.704 [2024-07-25 10:17:30.637988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.704 qpair failed and we were unable to recover it. 00:28:45.704 [2024-07-25 10:17:30.638183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.704 [2024-07-25 10:17:30.638243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.704 qpair failed and we were unable to recover it. 00:28:45.704 [2024-07-25 10:17:30.638458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.704 [2024-07-25 10:17:30.638486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.704 qpair failed and we were unable to recover it. 00:28:45.704 [2024-07-25 10:17:30.638621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.704 [2024-07-25 10:17:30.638666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.704 qpair failed and we were unable to recover it. 00:28:45.704 [2024-07-25 10:17:30.638809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.704 [2024-07-25 10:17:30.638853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.704 qpair failed and we were unable to recover it. 00:28:45.704 [2024-07-25 10:17:30.639034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.704 [2024-07-25 10:17:30.639076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.704 qpair failed and we were unable to recover it. 00:28:45.704 [2024-07-25 10:17:30.639268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.704 [2024-07-25 10:17:30.639293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.704 qpair failed and we were unable to recover it. 00:28:45.704 [2024-07-25 10:17:30.639501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.704 [2024-07-25 10:17:30.639532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.704 qpair failed and we were unable to recover it. 00:28:45.704 [2024-07-25 10:17:30.639680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.704 [2024-07-25 10:17:30.639709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.704 qpair failed and we were unable to recover it. 
00:28:45.704 [2024-07-25 10:17:30.639876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.704 [2024-07-25 10:17:30.639906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.704 qpair failed and we were unable to recover it. 00:28:45.704 [2024-07-25 10:17:30.640082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.704 [2024-07-25 10:17:30.640123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.704 qpair failed and we were unable to recover it. 00:28:45.704 [2024-07-25 10:17:30.640298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.704 [2024-07-25 10:17:30.640323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.704 qpair failed and we were unable to recover it. 00:28:45.704 [2024-07-25 10:17:30.640448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.704 [2024-07-25 10:17:30.640476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.704 qpair failed and we were unable to recover it. 00:28:45.704 [2024-07-25 10:17:30.640669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.704 [2024-07-25 10:17:30.640715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.704 qpair failed and we were unable to recover it. 00:28:45.704 [2024-07-25 10:17:30.640896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.705 [2024-07-25 10:17:30.640939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.705 qpair failed and we were unable to recover it. 00:28:45.705 [2024-07-25 10:17:30.641101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.705 [2024-07-25 10:17:30.641154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.705 qpair failed and we were unable to recover it. 00:28:45.705 [2024-07-25 10:17:30.641324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.705 [2024-07-25 10:17:30.641350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.705 qpair failed and we were unable to recover it. 00:28:45.705 [2024-07-25 10:17:30.641499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.705 [2024-07-25 10:17:30.641545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.705 qpair failed and we were unable to recover it. 00:28:45.705 [2024-07-25 10:17:30.641740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.705 [2024-07-25 10:17:30.641784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.705 qpair failed and we were unable to recover it. 
00:28:45.705 [2024-07-25 10:17:30.641944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.705 [2024-07-25 10:17:30.641993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.705 qpair failed and we were unable to recover it. 00:28:45.705 [2024-07-25 10:17:30.642134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.705 [2024-07-25 10:17:30.642161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.705 qpair failed and we were unable to recover it. 00:28:45.705 [2024-07-25 10:17:30.642319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.705 [2024-07-25 10:17:30.642344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.705 qpair failed and we were unable to recover it. 00:28:45.705 [2024-07-25 10:17:30.642509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.705 [2024-07-25 10:17:30.642539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.705 qpair failed and we were unable to recover it. 00:28:45.705 [2024-07-25 10:17:30.642722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.705 [2024-07-25 10:17:30.642766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.705 qpair failed and we were unable to recover it. 00:28:45.705 [2024-07-25 10:17:30.642975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.705 [2024-07-25 10:17:30.643000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.705 qpair failed and we were unable to recover it. 00:28:45.705 [2024-07-25 10:17:30.643180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.705 [2024-07-25 10:17:30.643205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.705 qpair failed and we were unable to recover it. 00:28:45.705 [2024-07-25 10:17:30.643421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.705 [2024-07-25 10:17:30.643457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.705 qpair failed and we were unable to recover it. 00:28:45.705 [2024-07-25 10:17:30.643678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.705 [2024-07-25 10:17:30.643708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.705 qpair failed and we were unable to recover it. 00:28:45.705 [2024-07-25 10:17:30.643933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.705 [2024-07-25 10:17:30.643975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.705 qpair failed and we were unable to recover it. 
00:28:45.705 [2024-07-25 10:17:30.644162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.705 [2024-07-25 10:17:30.644207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.705 qpair failed and we were unable to recover it. 00:28:45.705 [2024-07-25 10:17:30.644394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.705 [2024-07-25 10:17:30.644440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.705 qpair failed and we were unable to recover it. 00:28:45.705 [2024-07-25 10:17:30.644650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.705 [2024-07-25 10:17:30.644699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.705 qpair failed and we were unable to recover it. 00:28:45.705 [2024-07-25 10:17:30.644895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.705 [2024-07-25 10:17:30.644939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.705 qpair failed and we were unable to recover it. 00:28:45.705 [2024-07-25 10:17:30.645154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.705 [2024-07-25 10:17:30.645216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.705 qpair failed and we were unable to recover it. 00:28:45.705 [2024-07-25 10:17:30.645398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.705 [2024-07-25 10:17:30.645447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.705 qpair failed and we were unable to recover it. 00:28:45.705 [2024-07-25 10:17:30.645662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.705 [2024-07-25 10:17:30.645710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.705 qpair failed and we were unable to recover it. 00:28:45.705 [2024-07-25 10:17:30.645934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.705 [2024-07-25 10:17:30.645977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.705 qpair failed and we were unable to recover it. 00:28:45.705 [2024-07-25 10:17:30.646150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.705 [2024-07-25 10:17:30.646213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.705 qpair failed and we were unable to recover it. 00:28:45.705 [2024-07-25 10:17:30.646412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.705 [2024-07-25 10:17:30.646481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.705 qpair failed and we were unable to recover it. 
00:28:45.705 [2024-07-25 10:17:30.646691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.705 [2024-07-25 10:17:30.646740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.705 qpair failed and we were unable to recover it. 00:28:45.705 [2024-07-25 10:17:30.646962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.705 [2024-07-25 10:17:30.647004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.705 qpair failed and we were unable to recover it. 00:28:45.705 [2024-07-25 10:17:30.647203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.705 [2024-07-25 10:17:30.647260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.705 qpair failed and we were unable to recover it. 00:28:45.705 [2024-07-25 10:17:30.647466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.705 [2024-07-25 10:17:30.647512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.705 qpair failed and we were unable to recover it. 00:28:45.705 [2024-07-25 10:17:30.647706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.705 [2024-07-25 10:17:30.647749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.705 qpair failed and we were unable to recover it. 00:28:45.705 [2024-07-25 10:17:30.647971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.706 [2024-07-25 10:17:30.648014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.706 qpair failed and we were unable to recover it. 00:28:45.706 [2024-07-25 10:17:30.648223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.706 [2024-07-25 10:17:30.648272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.706 qpair failed and we were unable to recover it. 00:28:45.706 [2024-07-25 10:17:30.648469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.706 [2024-07-25 10:17:30.648497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.706 qpair failed and we were unable to recover it. 00:28:45.706 [2024-07-25 10:17:30.648673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.706 [2024-07-25 10:17:30.648700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.706 qpair failed and we were unable to recover it. 00:28:45.706 [2024-07-25 10:17:30.648858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.706 [2024-07-25 10:17:30.648902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.706 qpair failed and we were unable to recover it. 
00:28:45.706 [2024-07-25 10:17:30.649048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.706 [2024-07-25 10:17:30.649091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.706 qpair failed and we were unable to recover it. 00:28:45.706 [2024-07-25 10:17:30.649302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.706 [2024-07-25 10:17:30.649327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.706 qpair failed and we were unable to recover it. 00:28:45.706 [2024-07-25 10:17:30.649547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.706 [2024-07-25 10:17:30.649575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.706 qpair failed and we were unable to recover it. 00:28:45.706 [2024-07-25 10:17:30.649756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.706 [2024-07-25 10:17:30.649782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.706 qpair failed and we were unable to recover it. 00:28:45.706 [2024-07-25 10:17:30.649962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.706 [2024-07-25 10:17:30.649988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.706 qpair failed and we were unable to recover it. 00:28:45.706 [2024-07-25 10:17:30.650213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.706 [2024-07-25 10:17:30.650255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.706 qpair failed and we were unable to recover it. 00:28:45.706 [2024-07-25 10:17:30.650449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.706 [2024-07-25 10:17:30.650477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.706 qpair failed and we were unable to recover it. 00:28:45.706 [2024-07-25 10:17:30.650655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.706 [2024-07-25 10:17:30.650700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.706 qpair failed and we were unable to recover it. 00:28:45.706 [2024-07-25 10:17:30.650861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.706 [2024-07-25 10:17:30.650903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.706 qpair failed and we were unable to recover it. 00:28:45.706 [2024-07-25 10:17:30.651117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.706 [2024-07-25 10:17:30.651166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.706 qpair failed and we were unable to recover it. 
00:28:45.706 [2024-07-25 10:17:30.651373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.706 [2024-07-25 10:17:30.651398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.706 qpair failed and we were unable to recover it. 00:28:45.706 [2024-07-25 10:17:30.651627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.706 [2024-07-25 10:17:30.651656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.706 qpair failed and we were unable to recover it. 00:28:45.706 [2024-07-25 10:17:30.651868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.706 [2024-07-25 10:17:30.651911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.706 qpair failed and we were unable to recover it. 00:28:45.706 [2024-07-25 10:17:30.652092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.706 [2024-07-25 10:17:30.652148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.706 qpair failed and we were unable to recover it. 00:28:45.706 [2024-07-25 10:17:30.652332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.706 [2024-07-25 10:17:30.652358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.706 qpair failed and we were unable to recover it. 00:28:45.706 [2024-07-25 10:17:30.652548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.706 [2024-07-25 10:17:30.652576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.706 qpair failed and we were unable to recover it. 00:28:45.706 [2024-07-25 10:17:30.652729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.706 [2024-07-25 10:17:30.652772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.706 qpair failed and we were unable to recover it. 00:28:45.706 [2024-07-25 10:17:30.652947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.706 [2024-07-25 10:17:30.652990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.706 qpair failed and we were unable to recover it. 00:28:45.706 [2024-07-25 10:17:30.653156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.706 [2024-07-25 10:17:30.653206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.706 qpair failed and we were unable to recover it. 00:28:45.706 [2024-07-25 10:17:30.653384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.706 [2024-07-25 10:17:30.653410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.706 qpair failed and we were unable to recover it. 
00:28:45.706 [2024-07-25 10:17:30.653591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.706 [2024-07-25 10:17:30.653637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.706 qpair failed and we were unable to recover it. 00:28:45.706 [2024-07-25 10:17:30.653841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.706 [2024-07-25 10:17:30.653884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.706 qpair failed and we were unable to recover it. 00:28:45.706 [2024-07-25 10:17:30.654063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.706 [2024-07-25 10:17:30.654111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.706 qpair failed and we were unable to recover it. 00:28:45.706 [2024-07-25 10:17:30.654323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.706 [2024-07-25 10:17:30.654349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.706 qpair failed and we were unable to recover it. 00:28:45.706 [2024-07-25 10:17:30.654523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.706 [2024-07-25 10:17:30.654575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.706 qpair failed and we were unable to recover it. 00:28:45.706 [2024-07-25 10:17:30.654761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.706 [2024-07-25 10:17:30.654805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.706 qpair failed and we were unable to recover it. 00:28:45.706 [2024-07-25 10:17:30.654989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.706 [2024-07-25 10:17:30.655033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.706 qpair failed and we were unable to recover it. 00:28:45.706 [2024-07-25 10:17:30.655247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.706 [2024-07-25 10:17:30.655298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.706 qpair failed and we were unable to recover it. 00:28:45.706 [2024-07-25 10:17:30.655504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.706 [2024-07-25 10:17:30.655535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.706 qpair failed and we were unable to recover it. 00:28:45.706 [2024-07-25 10:17:30.655730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.706 [2024-07-25 10:17:30.655760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.706 qpair failed and we were unable to recover it. 
00:28:45.706 [2024-07-25 10:17:30.655957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.706 [2024-07-25 10:17:30.656000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.706 qpair failed and we were unable to recover it. 00:28:45.706 [2024-07-25 10:17:30.656215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.706 [2024-07-25 10:17:30.656267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.706 qpair failed and we were unable to recover it. 00:28:45.706 [2024-07-25 10:17:30.656465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.707 [2024-07-25 10:17:30.656493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.707 qpair failed and we were unable to recover it. 00:28:45.707 [2024-07-25 10:17:30.656664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.707 [2024-07-25 10:17:30.656713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.707 qpair failed and we were unable to recover it. 00:28:45.707 [2024-07-25 10:17:30.656874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.707 [2024-07-25 10:17:30.656917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.707 qpair failed and we were unable to recover it. 00:28:45.707 [2024-07-25 10:17:30.657605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.707 [2024-07-25 10:17:30.657636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.707 qpair failed and we were unable to recover it. 00:28:45.707 [2024-07-25 10:17:30.657789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.707 [2024-07-25 10:17:30.657831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.707 qpair failed and we were unable to recover it. 00:28:45.707 [2024-07-25 10:17:30.658032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.707 [2024-07-25 10:17:30.658076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.707 qpair failed and we were unable to recover it. 00:28:45.707 [2024-07-25 10:17:30.658228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.707 [2024-07-25 10:17:30.658253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.707 qpair failed and we were unable to recover it. 00:28:45.707 [2024-07-25 10:17:30.658443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.707 [2024-07-25 10:17:30.658471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.707 qpair failed and we were unable to recover it. 
00:28:45.707 [2024-07-25 10:17:30.658690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.707 [2024-07-25 10:17:30.658720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.707 qpair failed and we were unable to recover it. 00:28:45.707 [2024-07-25 10:17:30.658903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.707 [2024-07-25 10:17:30.658953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.707 qpair failed and we were unable to recover it. 00:28:45.707 [2024-07-25 10:17:30.659144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.707 [2024-07-25 10:17:30.659197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.707 qpair failed and we were unable to recover it. 00:28:45.707 [2024-07-25 10:17:30.659401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.707 [2024-07-25 10:17:30.659450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.707 qpair failed and we were unable to recover it. 00:28:45.707 [2024-07-25 10:17:30.659582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.707 [2024-07-25 10:17:30.659628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.707 qpair failed and we were unable to recover it. 00:28:45.707 [2024-07-25 10:17:30.659803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.707 [2024-07-25 10:17:30.659845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.707 qpair failed and we were unable to recover it. 00:28:45.707 [2024-07-25 10:17:30.660011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.707 [2024-07-25 10:17:30.660054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.707 qpair failed and we were unable to recover it. 00:28:45.707 [2024-07-25 10:17:30.660228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.707 [2024-07-25 10:17:30.660280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.707 qpair failed and we were unable to recover it. 00:28:45.707 [2024-07-25 10:17:30.660501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.707 [2024-07-25 10:17:30.660532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.707 qpair failed and we were unable to recover it. 00:28:45.707 [2024-07-25 10:17:30.660703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.707 [2024-07-25 10:17:30.660733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.707 qpair failed and we were unable to recover it. 
00:28:45.707 [2024-07-25 10:17:30.660934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.707 [2024-07-25 10:17:30.660977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.707 qpair failed and we were unable to recover it. 00:28:45.707 [2024-07-25 10:17:30.661193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.707 [2024-07-25 10:17:30.661237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.707 qpair failed and we were unable to recover it. 00:28:45.707 [2024-07-25 10:17:30.661409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.707 [2024-07-25 10:17:30.661467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.707 qpair failed and we were unable to recover it. 00:28:45.707 [2024-07-25 10:17:30.661628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.707 [2024-07-25 10:17:30.661675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.707 qpair failed and we were unable to recover it. 00:28:45.707 [2024-07-25 10:17:30.661833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.707 [2024-07-25 10:17:30.661874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.707 qpair failed and we were unable to recover it. 00:28:45.707 [2024-07-25 10:17:30.662052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.707 [2024-07-25 10:17:30.662107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.707 qpair failed and we were unable to recover it. 00:28:45.707 [2024-07-25 10:17:30.662270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.707 [2024-07-25 10:17:30.662294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.707 qpair failed and we were unable to recover it. 00:28:45.707 [2024-07-25 10:17:30.662506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.707 [2024-07-25 10:17:30.662564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.707 qpair failed and we were unable to recover it. 00:28:45.707 [2024-07-25 10:17:30.662725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.707 [2024-07-25 10:17:30.662767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.707 qpair failed and we were unable to recover it. 00:28:45.707 [2024-07-25 10:17:30.662943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.707 [2024-07-25 10:17:30.662985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.707 qpair failed and we were unable to recover it. 
00:28:45.707 [2024-07-25 10:17:30.663157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.707 [2024-07-25 10:17:30.663213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.707 qpair failed and we were unable to recover it. 00:28:45.707 [2024-07-25 10:17:30.663375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.707 [2024-07-25 10:17:30.663400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.707 qpair failed and we were unable to recover it. 00:28:45.707 [2024-07-25 10:17:30.663579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.707 [2024-07-25 10:17:30.663623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.707 qpair failed and we were unable to recover it. 00:28:45.707 [2024-07-25 10:17:30.663753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.707 [2024-07-25 10:17:30.663794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.707 qpair failed and we were unable to recover it. 00:28:45.707 [2024-07-25 10:17:30.663956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.707 [2024-07-25 10:17:30.663981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.707 qpair failed and we were unable to recover it. 00:28:45.707 [2024-07-25 10:17:30.664216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.707 [2024-07-25 10:17:30.664267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.707 qpair failed and we were unable to recover it. 00:28:45.707 [2024-07-25 10:17:30.664453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.707 [2024-07-25 10:17:30.664480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.707 qpair failed and we were unable to recover it. 00:28:45.707 [2024-07-25 10:17:30.664635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.707 [2024-07-25 10:17:30.664680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.707 qpair failed and we were unable to recover it. 00:28:45.707 [2024-07-25 10:17:30.664886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.707 [2024-07-25 10:17:30.664929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.707 qpair failed and we were unable to recover it. 00:28:45.707 [2024-07-25 10:17:30.665147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.707 [2024-07-25 10:17:30.665199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.708 qpair failed and we were unable to recover it. 
00:28:45.708 [2024-07-25 10:17:30.665371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.708 [2024-07-25 10:17:30.665395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.708 qpair failed and we were unable to recover it. 00:28:45.708 [2024-07-25 10:17:30.665597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.708 [2024-07-25 10:17:30.665624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.708 qpair failed and we were unable to recover it. 00:28:45.708 [2024-07-25 10:17:30.665793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.708 [2024-07-25 10:17:30.665837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.708 qpair failed and we were unable to recover it. 00:28:45.708 [2024-07-25 10:17:30.666001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.708 [2024-07-25 10:17:30.666045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.708 qpair failed and we were unable to recover it. 00:28:45.708 [2024-07-25 10:17:30.666230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.708 [2024-07-25 10:17:30.666273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.708 qpair failed and we were unable to recover it. 00:28:45.708 [2024-07-25 10:17:30.666471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.708 [2024-07-25 10:17:30.666499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.708 qpair failed and we were unable to recover it. 00:28:45.708 [2024-07-25 10:17:30.666712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.708 [2024-07-25 10:17:30.666755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.708 qpair failed and we were unable to recover it. 00:28:45.708 [2024-07-25 10:17:30.666908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.708 [2024-07-25 10:17:30.666952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.708 qpair failed and we were unable to recover it. 00:28:45.708 [2024-07-25 10:17:30.667169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.708 [2024-07-25 10:17:30.667223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.708 qpair failed and we were unable to recover it. 00:28:45.708 [2024-07-25 10:17:30.667382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.708 [2024-07-25 10:17:30.667407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.708 qpair failed and we were unable to recover it. 
00:28:45.713 [2024-07-25 10:17:30.713558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.713 [2024-07-25 10:17:30.713601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.713 qpair failed and we were unable to recover it. 00:28:45.713 [2024-07-25 10:17:30.713792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.713 [2024-07-25 10:17:30.713835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.713 qpair failed and we were unable to recover it. 00:28:45.713 [2024-07-25 10:17:30.714018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.713 [2024-07-25 10:17:30.714063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.713 qpair failed and we were unable to recover it. 00:28:45.713 [2024-07-25 10:17:30.714264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.713 [2024-07-25 10:17:30.714308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.713 qpair failed and we were unable to recover it. 00:28:45.713 [2024-07-25 10:17:30.714516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.713 [2024-07-25 10:17:30.714561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.713 qpair failed and we were unable to recover it. 00:28:45.713 [2024-07-25 10:17:30.714745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.713 [2024-07-25 10:17:30.714788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.713 qpair failed and we were unable to recover it. 00:28:45.713 [2024-07-25 10:17:30.714999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.713 [2024-07-25 10:17:30.715043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.713 qpair failed and we were unable to recover it. 00:28:45.713 [2024-07-25 10:17:30.715196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.713 [2024-07-25 10:17:30.715241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.713 qpair failed and we were unable to recover it. 00:28:45.713 [2024-07-25 10:17:30.715434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.713 [2024-07-25 10:17:30.715471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.713 qpair failed and we were unable to recover it. 00:28:45.713 [2024-07-25 10:17:30.715604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.713 [2024-07-25 10:17:30.715630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.713 qpair failed and we were unable to recover it. 
00:28:45.713 [2024-07-25 10:17:30.715796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.713 [2024-07-25 10:17:30.715844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.713 qpair failed and we were unable to recover it. 00:28:45.713 [2024-07-25 10:17:30.716017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.713 [2024-07-25 10:17:30.716066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.713 qpair failed and we were unable to recover it. 00:28:45.713 [2024-07-25 10:17:30.716264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.713 [2024-07-25 10:17:30.716315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.713 qpair failed and we were unable to recover it. 00:28:45.713 [2024-07-25 10:17:30.716485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.713 [2024-07-25 10:17:30.716516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.713 qpair failed and we were unable to recover it. 00:28:45.713 [2024-07-25 10:17:30.716692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.713 [2024-07-25 10:17:30.716721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.713 qpair failed and we were unable to recover it. 00:28:45.713 [2024-07-25 10:17:30.716908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.713 [2024-07-25 10:17:30.716951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.713 qpair failed and we were unable to recover it. 00:28:45.713 [2024-07-25 10:17:30.717132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.713 [2024-07-25 10:17:30.717176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.713 qpair failed and we were unable to recover it. 00:28:45.713 [2024-07-25 10:17:30.717345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.713 [2024-07-25 10:17:30.717371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.713 qpair failed and we were unable to recover it. 00:28:45.713 [2024-07-25 10:17:30.717529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.713 [2024-07-25 10:17:30.717560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.713 qpair failed and we were unable to recover it. 00:28:45.713 [2024-07-25 10:17:30.717756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.713 [2024-07-25 10:17:30.717800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.713 qpair failed and we were unable to recover it. 
00:28:45.713 [2024-07-25 10:17:30.718011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.713 [2024-07-25 10:17:30.718055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.713 qpair failed and we were unable to recover it. 00:28:45.713 [2024-07-25 10:17:30.718215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.713 [2024-07-25 10:17:30.718241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.713 qpair failed and we were unable to recover it. 00:28:45.713 [2024-07-25 10:17:30.718441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.713 [2024-07-25 10:17:30.718468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.713 qpair failed and we were unable to recover it. 00:28:45.713 [2024-07-25 10:17:30.718611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.713 [2024-07-25 10:17:30.718657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.713 qpair failed and we were unable to recover it. 00:28:45.713 [2024-07-25 10:17:30.718817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.713 [2024-07-25 10:17:30.718861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.713 qpair failed and we were unable to recover it. 00:28:45.713 [2024-07-25 10:17:30.719037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.713 [2024-07-25 10:17:30.719081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.713 qpair failed and we were unable to recover it. 00:28:45.713 [2024-07-25 10:17:30.719256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.713 [2024-07-25 10:17:30.719283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.713 qpair failed and we were unable to recover it. 00:28:45.713 [2024-07-25 10:17:30.719493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.713 [2024-07-25 10:17:30.719521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.713 qpair failed and we were unable to recover it. 00:28:45.713 [2024-07-25 10:17:30.719713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.713 [2024-07-25 10:17:30.719756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.713 qpair failed and we were unable to recover it. 00:28:45.713 [2024-07-25 10:17:30.719934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.713 [2024-07-25 10:17:30.719979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.713 qpair failed and we were unable to recover it. 
00:28:45.713 [2024-07-25 10:17:30.720178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.713 [2024-07-25 10:17:30.720207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.713 qpair failed and we were unable to recover it. 00:28:45.713 [2024-07-25 10:17:30.720386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.714 [2024-07-25 10:17:30.720411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.714 qpair failed and we were unable to recover it. 00:28:45.714 [2024-07-25 10:17:30.720572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.714 [2024-07-25 10:17:30.720616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.714 qpair failed and we were unable to recover it. 00:28:45.714 [2024-07-25 10:17:30.720824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.714 [2024-07-25 10:17:30.720867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.714 qpair failed and we were unable to recover it. 00:28:45.714 [2024-07-25 10:17:30.721020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.714 [2024-07-25 10:17:30.721062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.714 qpair failed and we were unable to recover it. 00:28:45.714 [2024-07-25 10:17:30.721225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.714 [2024-07-25 10:17:30.721252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.714 qpair failed and we were unable to recover it. 00:28:45.714 [2024-07-25 10:17:30.721455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.714 [2024-07-25 10:17:30.721485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.714 qpair failed and we were unable to recover it. 00:28:45.714 [2024-07-25 10:17:30.721647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.714 [2024-07-25 10:17:30.721693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.714 qpair failed and we were unable to recover it. 00:28:45.714 [2024-07-25 10:17:30.721876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.714 [2024-07-25 10:17:30.721920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.714 qpair failed and we were unable to recover it. 00:28:45.714 [2024-07-25 10:17:30.722097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.714 [2024-07-25 10:17:30.722126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.714 qpair failed and we were unable to recover it. 
00:28:45.714 [2024-07-25 10:17:30.722288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.714 [2024-07-25 10:17:30.722315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.714 qpair failed and we were unable to recover it. 00:28:45.714 [2024-07-25 10:17:30.722482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.714 [2024-07-25 10:17:30.722512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.714 qpair failed and we were unable to recover it. 00:28:45.714 [2024-07-25 10:17:30.722684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.714 [2024-07-25 10:17:30.722713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.714 qpair failed and we were unable to recover it. 00:28:45.714 [2024-07-25 10:17:30.722895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.714 [2024-07-25 10:17:30.722939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.714 qpair failed and we were unable to recover it. 00:28:45.714 [2024-07-25 10:17:30.723151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.714 [2024-07-25 10:17:30.723195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.714 qpair failed and we were unable to recover it. 00:28:45.714 [2024-07-25 10:17:30.723364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.714 [2024-07-25 10:17:30.723391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.714 qpair failed and we were unable to recover it. 00:28:45.714 [2024-07-25 10:17:30.723584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.714 [2024-07-25 10:17:30.723629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.714 qpair failed and we were unable to recover it. 00:28:45.714 [2024-07-25 10:17:30.723817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.714 [2024-07-25 10:17:30.723861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.714 qpair failed and we were unable to recover it. 00:28:45.714 [2024-07-25 10:17:30.724066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.714 [2024-07-25 10:17:30.724110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.714 qpair failed and we were unable to recover it. 00:28:45.714 [2024-07-25 10:17:30.724311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.714 [2024-07-25 10:17:30.724337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.714 qpair failed and we were unable to recover it. 
00:28:45.714 [2024-07-25 10:17:30.724542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.714 [2024-07-25 10:17:30.724587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.714 qpair failed and we were unable to recover it. 00:28:45.714 [2024-07-25 10:17:30.724760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.714 [2024-07-25 10:17:30.724806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.714 qpair failed and we were unable to recover it. 00:28:45.714 [2024-07-25 10:17:30.725012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.714 [2024-07-25 10:17:30.725057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.714 qpair failed and we were unable to recover it. 00:28:45.714 [2024-07-25 10:17:30.725247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.714 [2024-07-25 10:17:30.725274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.714 qpair failed and we were unable to recover it. 00:28:45.714 [2024-07-25 10:17:30.725442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.714 [2024-07-25 10:17:30.725469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.714 qpair failed and we were unable to recover it. 00:28:45.714 [2024-07-25 10:17:30.725639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.714 [2024-07-25 10:17:30.725667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.714 qpair failed and we were unable to recover it. 00:28:45.714 [2024-07-25 10:17:30.725807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.714 [2024-07-25 10:17:30.725852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.714 qpair failed and we were unable to recover it. 00:28:45.714 [2024-07-25 10:17:30.726061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.714 [2024-07-25 10:17:30.726103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.714 qpair failed and we were unable to recover it. 00:28:45.714 [2024-07-25 10:17:30.726322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.714 [2024-07-25 10:17:30.726349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.714 qpair failed and we were unable to recover it. 00:28:45.714 [2024-07-25 10:17:30.726568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.714 [2024-07-25 10:17:30.726595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.714 qpair failed and we were unable to recover it. 
00:28:45.714 [2024-07-25 10:17:30.726786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.714 [2024-07-25 10:17:30.726830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.714 qpair failed and we were unable to recover it. 00:28:45.714 [2024-07-25 10:17:30.727011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.714 [2024-07-25 10:17:30.727055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.714 qpair failed and we were unable to recover it. 00:28:45.714 [2024-07-25 10:17:30.727269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.714 [2024-07-25 10:17:30.727313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.714 qpair failed and we were unable to recover it. 00:28:45.714 [2024-07-25 10:17:30.727498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.714 [2024-07-25 10:17:30.727544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.714 qpair failed and we were unable to recover it. 00:28:45.714 [2024-07-25 10:17:30.727735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.714 [2024-07-25 10:17:30.727781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.714 qpair failed and we were unable to recover it. 00:28:45.714 [2024-07-25 10:17:30.727993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.714 [2024-07-25 10:17:30.728038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.714 qpair failed and we were unable to recover it. 00:28:45.714 [2024-07-25 10:17:30.728240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.714 [2024-07-25 10:17:30.728283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.714 qpair failed and we were unable to recover it. 00:28:45.714 [2024-07-25 10:17:30.728494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.714 [2024-07-25 10:17:30.728521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.714 qpair failed and we were unable to recover it. 00:28:45.714 [2024-07-25 10:17:30.728689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.714 [2024-07-25 10:17:30.728718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.714 qpair failed and we were unable to recover it. 00:28:45.714 [2024-07-25 10:17:30.728896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.714 [2024-07-25 10:17:30.728940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.714 qpair failed and we were unable to recover it. 
00:28:45.714 [2024-07-25 10:17:30.729088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.714 [2024-07-25 10:17:30.729131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.714 qpair failed and we were unable to recover it. 00:28:45.714 [2024-07-25 10:17:30.729331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.714 [2024-07-25 10:17:30.729358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.714 qpair failed and we were unable to recover it. 00:28:45.714 [2024-07-25 10:17:30.729532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.714 [2024-07-25 10:17:30.729578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.714 qpair failed and we were unable to recover it. 00:28:45.714 [2024-07-25 10:17:30.729760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.714 [2024-07-25 10:17:30.729804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.714 qpair failed and we were unable to recover it. 00:28:45.714 [2024-07-25 10:17:30.730002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.714 [2024-07-25 10:17:30.730046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.714 qpair failed and we were unable to recover it. 00:28:45.714 [2024-07-25 10:17:30.730212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.714 [2024-07-25 10:17:30.730239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.714 qpair failed and we were unable to recover it. 00:28:45.714 [2024-07-25 10:17:30.730439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.714 [2024-07-25 10:17:30.730466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.714 qpair failed and we were unable to recover it. 00:28:45.714 [2024-07-25 10:17:30.730643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.714 [2024-07-25 10:17:30.730688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.714 qpair failed and we were unable to recover it. 00:28:45.714 [2024-07-25 10:17:30.730892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.714 [2024-07-25 10:17:30.730922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.714 qpair failed and we were unable to recover it. 00:28:45.714 [2024-07-25 10:17:30.731102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.714 [2024-07-25 10:17:30.731146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.714 qpair failed and we were unable to recover it. 
00:28:45.714 [2024-07-25 10:17:30.731291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.714 [2024-07-25 10:17:30.731318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.714 qpair failed and we were unable to recover it. 00:28:45.714 [2024-07-25 10:17:30.731517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.714 [2024-07-25 10:17:30.731561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.714 qpair failed and we were unable to recover it. 00:28:45.714 [2024-07-25 10:17:30.731751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.714 [2024-07-25 10:17:30.731795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.714 qpair failed and we were unable to recover it. 00:28:45.714 [2024-07-25 10:17:30.731946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.714 [2024-07-25 10:17:30.731990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.714 qpair failed and we were unable to recover it. 00:28:45.714 [2024-07-25 10:17:30.732200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.714 [2024-07-25 10:17:30.732229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.714 qpair failed and we were unable to recover it. 00:28:45.714 [2024-07-25 10:17:30.732458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.714 [2024-07-25 10:17:30.732485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.714 qpair failed and we were unable to recover it. 00:28:45.714 [2024-07-25 10:17:30.732634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.714 [2024-07-25 10:17:30.732679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.714 qpair failed and we were unable to recover it. 00:28:45.714 [2024-07-25 10:17:30.732882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.714 [2024-07-25 10:17:30.732925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.714 qpair failed and we were unable to recover it. 00:28:45.714 [2024-07-25 10:17:30.733076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.714 [2024-07-25 10:17:30.733104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.714 qpair failed and we were unable to recover it. 00:28:45.714 [2024-07-25 10:17:30.733298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.714 [2024-07-25 10:17:30.733325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.714 qpair failed and we were unable to recover it. 
00:28:45.714 [2024-07-25 10:17:30.733505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.714 [2024-07-25 10:17:30.733550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.714 qpair failed and we were unable to recover it. 00:28:45.715 [2024-07-25 10:17:30.733759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.715 [2024-07-25 10:17:30.733806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.715 qpair failed and we were unable to recover it. 00:28:45.715 [2024-07-25 10:17:30.733990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.715 [2024-07-25 10:17:30.734033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.715 qpair failed and we were unable to recover it. 00:28:45.715 [2024-07-25 10:17:30.734244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.715 [2024-07-25 10:17:30.734273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.715 qpair failed and we were unable to recover it. 00:28:45.715 [2024-07-25 10:17:30.734501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.715 [2024-07-25 10:17:30.734545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.715 qpair failed and we were unable to recover it. 00:28:45.715 [2024-07-25 10:17:30.734740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.715 [2024-07-25 10:17:30.734783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.715 qpair failed and we were unable to recover it. 00:28:45.715 [2024-07-25 10:17:30.734933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.715 [2024-07-25 10:17:30.734976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.715 qpair failed and we were unable to recover it. 00:28:45.715 [2024-07-25 10:17:30.735179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.715 [2024-07-25 10:17:30.735222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.715 qpair failed and we were unable to recover it. 00:28:45.715 [2024-07-25 10:17:30.735418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.715 [2024-07-25 10:17:30.735450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.715 qpair failed and we were unable to recover it. 00:28:45.715 [2024-07-25 10:17:30.735626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.715 [2024-07-25 10:17:30.735672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.715 qpair failed and we were unable to recover it. 
00:28:45.715 [2024-07-25 10:17:30.735849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.715 [2024-07-25 10:17:30.735892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.715 qpair failed and we were unable to recover it. 00:28:45.715 [2024-07-25 10:17:30.736040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.715 [2024-07-25 10:17:30.736082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.715 qpair failed and we were unable to recover it. 00:28:45.715 [2024-07-25 10:17:30.736277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.715 [2024-07-25 10:17:30.736303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.715 qpair failed and we were unable to recover it. 00:28:45.715 [2024-07-25 10:17:30.736481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.715 [2024-07-25 10:17:30.736512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.715 qpair failed and we were unable to recover it. 00:28:45.715 [2024-07-25 10:17:30.736702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.715 [2024-07-25 10:17:30.736748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.715 qpair failed and we were unable to recover it. 00:28:45.715 [2024-07-25 10:17:30.736912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.715 [2024-07-25 10:17:30.736956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.715 qpair failed and we were unable to recover it. 00:28:45.715 [2024-07-25 10:17:30.737121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.715 [2024-07-25 10:17:30.737175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.715 qpair failed and we were unable to recover it. 00:28:45.715 [2024-07-25 10:17:30.737379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.715 [2024-07-25 10:17:30.737406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.715 qpair failed and we were unable to recover it. 00:28:45.715 [2024-07-25 10:17:30.737647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.715 [2024-07-25 10:17:30.737691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.715 qpair failed and we were unable to recover it. 00:28:45.715 [2024-07-25 10:17:30.737879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.715 [2024-07-25 10:17:30.737923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.715 qpair failed and we were unable to recover it. 
00:28:45.715 [2024-07-25 10:17:30.738122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.715 [2024-07-25 10:17:30.738164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.715 qpair failed and we were unable to recover it. 00:28:45.715 [2024-07-25 10:17:30.738346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.715 [2024-07-25 10:17:30.738373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.715 qpair failed and we were unable to recover it. 00:28:45.715 [2024-07-25 10:17:30.738536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.715 [2024-07-25 10:17:30.738579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.715 qpair failed and we were unable to recover it. 00:28:45.715 [2024-07-25 10:17:30.738790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.715 [2024-07-25 10:17:30.738833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.715 qpair failed and we were unable to recover it. 00:28:45.715 [2024-07-25 10:17:30.739063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.715 [2024-07-25 10:17:30.739107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.715 qpair failed and we were unable to recover it. 00:28:45.715 [2024-07-25 10:17:30.739247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.715 [2024-07-25 10:17:30.739272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.715 qpair failed and we were unable to recover it. 00:28:45.715 [2024-07-25 10:17:30.739444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.715 [2024-07-25 10:17:30.739471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.715 qpair failed and we were unable to recover it. 00:28:45.715 [2024-07-25 10:17:30.739651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.715 [2024-07-25 10:17:30.739695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.715 qpair failed and we were unable to recover it. 00:28:45.715 [2024-07-25 10:17:30.739881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.715 [2024-07-25 10:17:30.739925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.715 qpair failed and we were unable to recover it. 00:28:45.715 [2024-07-25 10:17:30.740139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.715 [2024-07-25 10:17:30.740182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.715 qpair failed and we were unable to recover it. 
00:28:45.715 [2024-07-25 10:17:30.740360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.715 [2024-07-25 10:17:30.740386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.715 qpair failed and we were unable to recover it. 00:28:45.715 [2024-07-25 10:17:30.740565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.715 [2024-07-25 10:17:30.740592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.715 qpair failed and we were unable to recover it. 00:28:45.715 [2024-07-25 10:17:30.740785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.715 [2024-07-25 10:17:30.740830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.715 qpair failed and we were unable to recover it. 00:28:45.715 [2024-07-25 10:17:30.741032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.715 [2024-07-25 10:17:30.741076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.715 qpair failed and we were unable to recover it. 00:28:45.715 [2024-07-25 10:17:30.741286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.715 [2024-07-25 10:17:30.741330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.715 qpair failed and we were unable to recover it. 00:28:45.715 [2024-07-25 10:17:30.741490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.715 [2024-07-25 10:17:30.741521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.715 qpair failed and we were unable to recover it. 00:28:45.715 [2024-07-25 10:17:30.741718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.715 [2024-07-25 10:17:30.741762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.715 qpair failed and we were unable to recover it. 00:28:45.715 [2024-07-25 10:17:30.741984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.715 [2024-07-25 10:17:30.742037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.715 qpair failed and we were unable to recover it. 00:28:45.715 [2024-07-25 10:17:30.742249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.715 [2024-07-25 10:17:30.742294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.715 qpair failed and we were unable to recover it. 00:28:45.715 [2024-07-25 10:17:30.742516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.715 [2024-07-25 10:17:30.742546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.715 qpair failed and we were unable to recover it. 
00:28:45.715 [2024-07-25 10:17:30.742731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.715 [2024-07-25 10:17:30.742778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420
00:28:45.715 qpair failed and we were unable to recover it.
[... the same three-line failure (posix_sock_create connect() errno = 111, then the nvme_tcp_qpair_connect_sock error for tqpair=0x7effa0000b90, addr=10.0.0.2, port=4420, then "qpair failed and we were unable to recover it.") repeats for every retry from 10:17:30.742949 through 10:17:30.789659 ...]
00:28:45.719 [2024-07-25 10:17:30.789840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.719 [2024-07-25 10:17:30.789882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420
00:28:45.719 qpair failed and we were unable to recover it.
00:28:45.719 [2024-07-25 10:17:30.790067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.719 [2024-07-25 10:17:30.790108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.719 qpair failed and we were unable to recover it. 00:28:45.719 [2024-07-25 10:17:30.790282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.719 [2024-07-25 10:17:30.790306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.719 qpair failed and we were unable to recover it. 00:28:45.719 [2024-07-25 10:17:30.790534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.719 [2024-07-25 10:17:30.790564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.719 qpair failed and we were unable to recover it. 00:28:45.719 [2024-07-25 10:17:30.790783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.719 [2024-07-25 10:17:30.790826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.719 qpair failed and we were unable to recover it. 00:28:45.719 [2024-07-25 10:17:30.791038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.719 [2024-07-25 10:17:30.791079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.719 qpair failed and we were unable to recover it. 00:28:45.719 [2024-07-25 10:17:30.791270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.719 [2024-07-25 10:17:30.791294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.719 qpair failed and we were unable to recover it. 00:28:45.719 [2024-07-25 10:17:30.791463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.719 [2024-07-25 10:17:30.791487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.719 qpair failed and we were unable to recover it. 00:28:45.719 [2024-07-25 10:17:30.791712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.719 [2024-07-25 10:17:30.791753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.719 qpair failed and we were unable to recover it. 00:28:45.719 [2024-07-25 10:17:30.791980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.719 [2024-07-25 10:17:30.792022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.719 qpair failed and we were unable to recover it. 00:28:45.719 [2024-07-25 10:17:30.792162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.719 [2024-07-25 10:17:30.792192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.719 qpair failed and we were unable to recover it. 
00:28:45.719 [2024-07-25 10:17:30.792394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.719 [2024-07-25 10:17:30.792418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.719 qpair failed and we were unable to recover it. 00:28:45.719 [2024-07-25 10:17:30.792619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.719 [2024-07-25 10:17:30.792671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.719 qpair failed and we were unable to recover it. 00:28:45.719 [2024-07-25 10:17:30.792842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.719 [2024-07-25 10:17:30.792886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.719 qpair failed and we were unable to recover it. 00:28:45.719 [2024-07-25 10:17:30.793033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.719 [2024-07-25 10:17:30.793076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.719 qpair failed and we were unable to recover it. 00:28:45.719 [2024-07-25 10:17:30.793256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.719 [2024-07-25 10:17:30.793280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.719 qpair failed and we were unable to recover it. 00:28:45.719 [2024-07-25 10:17:30.793470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.719 [2024-07-25 10:17:30.793494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.719 qpair failed and we were unable to recover it. 00:28:45.719 [2024-07-25 10:17:30.793703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.719 [2024-07-25 10:17:30.793744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.719 qpair failed and we were unable to recover it. 00:28:45.719 [2024-07-25 10:17:30.793967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.719 [2024-07-25 10:17:30.794010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.719 qpair failed and we were unable to recover it. 00:28:45.719 [2024-07-25 10:17:30.794181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.719 [2024-07-25 10:17:30.794227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.719 qpair failed and we were unable to recover it. 00:28:45.719 [2024-07-25 10:17:30.794438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.719 [2024-07-25 10:17:30.794477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.719 qpair failed and we were unable to recover it. 
00:28:45.719 [2024-07-25 10:17:30.794632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.719 [2024-07-25 10:17:30.794674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.719 qpair failed and we were unable to recover it. 00:28:45.719 [2024-07-25 10:17:30.794849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.719 [2024-07-25 10:17:30.794889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.719 qpair failed and we were unable to recover it. 00:28:45.719 [2024-07-25 10:17:30.795075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.719 [2024-07-25 10:17:30.795119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.719 qpair failed and we were unable to recover it. 00:28:45.719 [2024-07-25 10:17:30.795311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.719 [2024-07-25 10:17:30.795334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.719 qpair failed and we were unable to recover it. 00:28:45.719 [2024-07-25 10:17:30.795540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.719 [2024-07-25 10:17:30.795585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.719 qpair failed and we were unable to recover it. 00:28:45.719 [2024-07-25 10:17:30.795771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.719 [2024-07-25 10:17:30.795814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.719 qpair failed and we were unable to recover it. 00:28:45.719 [2024-07-25 10:17:30.796002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.719 [2024-07-25 10:17:30.796045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.719 qpair failed and we were unable to recover it. 00:28:45.719 [2024-07-25 10:17:30.796247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.719 [2024-07-25 10:17:30.796289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.719 qpair failed and we were unable to recover it. 00:28:45.719 [2024-07-25 10:17:30.796461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.719 [2024-07-25 10:17:30.796503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.719 qpair failed and we were unable to recover it. 00:28:45.719 [2024-07-25 10:17:30.796650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.719 [2024-07-25 10:17:30.796692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.719 qpair failed and we were unable to recover it. 
00:28:45.719 [2024-07-25 10:17:30.796888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.719 [2024-07-25 10:17:30.796932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.719 qpair failed and we were unable to recover it. 00:28:45.719 [2024-07-25 10:17:30.797128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.719 [2024-07-25 10:17:30.797157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.719 qpair failed and we were unable to recover it. 00:28:45.719 [2024-07-25 10:17:30.797369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.719 [2024-07-25 10:17:30.797393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.719 qpair failed and we were unable to recover it. 00:28:45.719 [2024-07-25 10:17:30.797598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.719 [2024-07-25 10:17:30.797640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.719 qpair failed and we were unable to recover it. 00:28:45.719 [2024-07-25 10:17:30.797808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.719 [2024-07-25 10:17:30.797850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.719 qpair failed and we were unable to recover it. 00:28:45.719 [2024-07-25 10:17:30.798051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.719 [2024-07-25 10:17:30.798094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.720 qpair failed and we were unable to recover it. 00:28:45.720 [2024-07-25 10:17:30.798251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.720 [2024-07-25 10:17:30.798274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.720 qpair failed and we were unable to recover it. 00:28:45.720 [2024-07-25 10:17:30.798460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.720 [2024-07-25 10:17:30.798504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.720 qpair failed and we were unable to recover it. 00:28:45.720 [2024-07-25 10:17:30.798661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.720 [2024-07-25 10:17:30.798704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.720 qpair failed and we were unable to recover it. 00:28:45.720 [2024-07-25 10:17:30.798895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.720 [2024-07-25 10:17:30.798939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.720 qpair failed and we were unable to recover it. 
00:28:45.720 [2024-07-25 10:17:30.799142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.720 [2024-07-25 10:17:30.799184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.720 qpair failed and we were unable to recover it. 00:28:45.720 [2024-07-25 10:17:30.799360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.720 [2024-07-25 10:17:30.799382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.720 qpair failed and we were unable to recover it. 00:28:45.720 [2024-07-25 10:17:30.799576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.720 [2024-07-25 10:17:30.799620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.720 qpair failed and we were unable to recover it. 00:28:45.720 [2024-07-25 10:17:30.799822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.720 [2024-07-25 10:17:30.799865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.720 qpair failed and we were unable to recover it. 00:28:45.720 [2024-07-25 10:17:30.800069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.720 [2024-07-25 10:17:30.800112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.720 qpair failed and we were unable to recover it. 00:28:45.720 [2024-07-25 10:17:30.800294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.720 [2024-07-25 10:17:30.800317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.720 qpair failed and we were unable to recover it. 00:28:45.720 [2024-07-25 10:17:30.800536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.720 [2024-07-25 10:17:30.800579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.720 qpair failed and we were unable to recover it. 00:28:45.720 [2024-07-25 10:17:30.800791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.720 [2024-07-25 10:17:30.800832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.720 qpair failed and we were unable to recover it. 00:28:45.720 [2024-07-25 10:17:30.801038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.720 [2024-07-25 10:17:30.801081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.720 qpair failed and we were unable to recover it. 00:28:45.720 [2024-07-25 10:17:30.801264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.720 [2024-07-25 10:17:30.801288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.720 qpair failed and we were unable to recover it. 
00:28:45.720 [2024-07-25 10:17:30.801520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.720 [2024-07-25 10:17:30.801545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.720 qpair failed and we were unable to recover it. 00:28:45.720 [2024-07-25 10:17:30.801746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.720 [2024-07-25 10:17:30.801788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.720 qpair failed and we were unable to recover it. 00:28:45.720 [2024-07-25 10:17:30.801992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.720 [2024-07-25 10:17:30.802034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.720 qpair failed and we were unable to recover it. 00:28:45.720 [2024-07-25 10:17:30.802216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.720 [2024-07-25 10:17:30.802240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.720 qpair failed and we were unable to recover it. 00:28:45.720 [2024-07-25 10:17:30.802471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.720 [2024-07-25 10:17:30.802496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.720 qpair failed and we were unable to recover it. 00:28:45.720 [2024-07-25 10:17:30.802641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.720 [2024-07-25 10:17:30.802683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.720 qpair failed and we were unable to recover it. 00:28:45.720 [2024-07-25 10:17:30.802866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.720 [2024-07-25 10:17:30.802909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.720 qpair failed and we were unable to recover it. 00:28:45.720 [2024-07-25 10:17:30.803088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.720 [2024-07-25 10:17:30.803131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.720 qpair failed and we were unable to recover it. 00:28:45.720 [2024-07-25 10:17:30.803348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.720 [2024-07-25 10:17:30.803375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.720 qpair failed and we were unable to recover it. 00:28:45.720 [2024-07-25 10:17:30.803596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.720 [2024-07-25 10:17:30.803639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.720 qpair failed and we were unable to recover it. 
00:28:45.720 [2024-07-25 10:17:30.803816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.720 [2024-07-25 10:17:30.803859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.720 qpair failed and we were unable to recover it. 00:28:45.720 [2024-07-25 10:17:30.804007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.720 [2024-07-25 10:17:30.804049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.720 qpair failed and we were unable to recover it. 00:28:45.720 [2024-07-25 10:17:30.804227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.720 [2024-07-25 10:17:30.804251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.720 qpair failed and we were unable to recover it. 00:28:45.720 [2024-07-25 10:17:30.804438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.720 [2024-07-25 10:17:30.804462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.720 qpair failed and we were unable to recover it. 00:28:45.720 [2024-07-25 10:17:30.804650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.720 [2024-07-25 10:17:30.804693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.720 qpair failed and we were unable to recover it. 00:28:45.720 [2024-07-25 10:17:30.804887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.720 [2024-07-25 10:17:30.804930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.720 qpair failed and we were unable to recover it. 00:28:45.720 [2024-07-25 10:17:30.805091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.720 [2024-07-25 10:17:30.805133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.720 qpair failed and we were unable to recover it. 00:28:45.720 [2024-07-25 10:17:30.805295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.720 [2024-07-25 10:17:30.805319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.720 qpair failed and we were unable to recover it. 00:28:45.720 [2024-07-25 10:17:30.805466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.720 [2024-07-25 10:17:30.805490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.720 qpair failed and we were unable to recover it. 00:28:45.720 [2024-07-25 10:17:30.805700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.720 [2024-07-25 10:17:30.805740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.720 qpair failed and we were unable to recover it. 
00:28:45.720 [2024-07-25 10:17:30.805978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.720 [2024-07-25 10:17:30.806020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.720 qpair failed and we were unable to recover it. 00:28:45.720 [2024-07-25 10:17:30.806206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.720 [2024-07-25 10:17:30.806249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.720 qpair failed and we were unable to recover it. 00:28:45.720 [2024-07-25 10:17:30.806457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.720 [2024-07-25 10:17:30.806482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.720 qpair failed and we were unable to recover it. 00:28:45.720 [2024-07-25 10:17:30.806649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.720 [2024-07-25 10:17:30.806691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.720 qpair failed and we were unable to recover it. 00:28:45.720 [2024-07-25 10:17:30.806872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.720 [2024-07-25 10:17:30.806915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.720 qpair failed and we were unable to recover it. 00:28:45.720 [2024-07-25 10:17:30.807099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.720 [2024-07-25 10:17:30.807142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.720 qpair failed and we were unable to recover it. 00:28:45.720 [2024-07-25 10:17:30.807346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.720 [2024-07-25 10:17:30.807370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.720 qpair failed and we were unable to recover it. 00:28:45.720 [2024-07-25 10:17:30.807595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.720 [2024-07-25 10:17:30.807620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.720 qpair failed and we were unable to recover it. 00:28:45.720 [2024-07-25 10:17:30.807793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.720 [2024-07-25 10:17:30.807834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.720 qpair failed and we were unable to recover it. 00:28:45.720 [2024-07-25 10:17:30.808021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.720 [2024-07-25 10:17:30.808064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.720 qpair failed and we were unable to recover it. 
00:28:45.720 [2024-07-25 10:17:30.808231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.720 [2024-07-25 10:17:30.808301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.720 qpair failed and we were unable to recover it. 00:28:45.720 [2024-07-25 10:17:30.808524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.720 [2024-07-25 10:17:30.808568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.720 qpair failed and we were unable to recover it. 00:28:45.720 [2024-07-25 10:17:30.808759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.720 [2024-07-25 10:17:30.808802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.720 qpair failed and we were unable to recover it. 00:28:45.720 [2024-07-25 10:17:30.808976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.720 [2024-07-25 10:17:30.809020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.720 qpair failed and we were unable to recover it. 00:28:45.720 [2024-07-25 10:17:30.809191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.720 [2024-07-25 10:17:30.809215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.720 qpair failed and we were unable to recover it. 00:28:45.720 [2024-07-25 10:17:30.809399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.720 [2024-07-25 10:17:30.809423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.720 qpair failed and we were unable to recover it. 00:28:45.720 [2024-07-25 10:17:30.809642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.720 [2024-07-25 10:17:30.809686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.720 qpair failed and we were unable to recover it. 00:28:45.720 [2024-07-25 10:17:30.809890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.720 [2024-07-25 10:17:30.809933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.720 qpair failed and we were unable to recover it. 00:28:45.720 [2024-07-25 10:17:30.810107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.720 [2024-07-25 10:17:30.810150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.720 qpair failed and we were unable to recover it. 00:28:45.720 [2024-07-25 10:17:30.810353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.720 [2024-07-25 10:17:30.810377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.720 qpair failed and we were unable to recover it. 
00:28:45.720 [2024-07-25 10:17:30.810610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.720 [2024-07-25 10:17:30.810639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.720 qpair failed and we were unable to recover it. 00:28:45.720 [2024-07-25 10:17:30.810856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.720 [2024-07-25 10:17:30.810899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.720 qpair failed and we were unable to recover it. 00:28:45.720 [2024-07-25 10:17:30.811113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.720 [2024-07-25 10:17:30.811155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.720 qpair failed and we were unable to recover it. 00:28:45.720 [2024-07-25 10:17:30.811348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.720 [2024-07-25 10:17:30.811370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.720 qpair failed and we were unable to recover it. 00:28:45.720 [2024-07-25 10:17:30.811541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.720 [2024-07-25 10:17:30.811585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.720 qpair failed and we were unable to recover it. 00:28:45.721 [2024-07-25 10:17:30.811770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.721 [2024-07-25 10:17:30.811813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.721 qpair failed and we were unable to recover it. 00:28:45.721 [2024-07-25 10:17:30.811990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.721 [2024-07-25 10:17:30.812034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.721 qpair failed and we were unable to recover it. 00:28:45.721 [2024-07-25 10:17:30.812235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.721 [2024-07-25 10:17:30.812277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.721 qpair failed and we were unable to recover it. 00:28:45.721 [2024-07-25 10:17:30.812472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.721 [2024-07-25 10:17:30.812519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.721 qpair failed and we were unable to recover it. 00:28:45.721 [2024-07-25 10:17:30.812695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.721 [2024-07-25 10:17:30.812738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.721 qpair failed and we were unable to recover it. 
00:28:45.721 [2024-07-25 10:17:30.812948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.721 [2024-07-25 10:17:30.812990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.721 qpair failed and we were unable to recover it. 00:28:45.721 [2024-07-25 10:17:30.813183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.721 [2024-07-25 10:17:30.813226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.721 qpair failed and we were unable to recover it. 00:28:45.721 [2024-07-25 10:17:30.813421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.721 [2024-07-25 10:17:30.813458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.721 qpair failed and we were unable to recover it. 00:28:45.721 [2024-07-25 10:17:30.813642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.721 [2024-07-25 10:17:30.813684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.721 qpair failed and we were unable to recover it. 00:28:45.721 [2024-07-25 10:17:30.813882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.721 [2024-07-25 10:17:30.813924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.721 qpair failed and we were unable to recover it. 00:28:45.721 [2024-07-25 10:17:30.814137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.721 [2024-07-25 10:17:30.814180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.721 qpair failed and we were unable to recover it. 00:28:45.721 [2024-07-25 10:17:30.814370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.721 [2024-07-25 10:17:30.814393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.721 qpair failed and we were unable to recover it. 00:28:45.721 [2024-07-25 10:17:30.814580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.721 [2024-07-25 10:17:30.814624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.721 qpair failed and we were unable to recover it. 00:28:45.721 [2024-07-25 10:17:30.814825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.721 [2024-07-25 10:17:30.814869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.721 qpair failed and we were unable to recover it. 00:28:45.721 [2024-07-25 10:17:30.815066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.721 [2024-07-25 10:17:30.815096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.721 qpair failed and we were unable to recover it. 
00:28:45.721 [2024-07-25 10:17:30.815311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.721 [2024-07-25 10:17:30.815335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.721 qpair failed and we were unable to recover it. 00:28:45.721 [2024-07-25 10:17:30.815551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.721 [2024-07-25 10:17:30.815576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.721 qpair failed and we were unable to recover it. 00:28:45.721 [2024-07-25 10:17:30.815752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.721 [2024-07-25 10:17:30.815795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.721 qpair failed and we were unable to recover it. 00:28:45.721 [2024-07-25 10:17:30.816012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.721 [2024-07-25 10:17:30.816054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.721 qpair failed and we were unable to recover it. 00:28:45.721 [2024-07-25 10:17:30.816242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.721 [2024-07-25 10:17:30.816285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.721 qpair failed and we were unable to recover it. 00:28:45.721 [2024-07-25 10:17:30.816451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.721 [2024-07-25 10:17:30.816496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.721 qpair failed and we were unable to recover it. 00:28:45.721 [2024-07-25 10:17:30.816673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.721 [2024-07-25 10:17:30.816716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.721 qpair failed and we were unable to recover it. 00:28:45.721 [2024-07-25 10:17:30.816890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.721 [2024-07-25 10:17:30.816934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.721 qpair failed and we were unable to recover it. 00:28:45.721 [2024-07-25 10:17:30.817079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.721 [2024-07-25 10:17:30.817109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.721 qpair failed and we were unable to recover it. 00:28:45.721 [2024-07-25 10:17:30.817275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.721 [2024-07-25 10:17:30.817314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.721 qpair failed and we were unable to recover it. 
00:28:45.721 [2024-07-25 10:17:30.817504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.721 [2024-07-25 10:17:30.817534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.721 qpair failed and we were unable to recover it. 00:28:45.721 [2024-07-25 10:17:30.817725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.721 [2024-07-25 10:17:30.817769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.721 qpair failed and we were unable to recover it. 00:28:45.721 [2024-07-25 10:17:30.817965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.721 [2024-07-25 10:17:30.818007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.721 qpair failed and we were unable to recover it. 00:28:45.721 [2024-07-25 10:17:30.818182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.721 [2024-07-25 10:17:30.818206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.721 qpair failed and we were unable to recover it. 00:28:45.721 [2024-07-25 10:17:30.818409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.721 [2024-07-25 10:17:30.818452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.721 qpair failed and we were unable to recover it. 00:28:45.721 [2024-07-25 10:17:30.818643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.721 [2024-07-25 10:17:30.818686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.721 qpair failed and we were unable to recover it. 00:28:45.721 [2024-07-25 10:17:30.818863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.721 [2024-07-25 10:17:30.818906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.721 qpair failed and we were unable to recover it. 00:28:45.721 [2024-07-25 10:17:30.819115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.721 [2024-07-25 10:17:30.819158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.721 qpair failed and we were unable to recover it. 00:28:45.721 [2024-07-25 10:17:30.819357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.721 [2024-07-25 10:17:30.819380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.721 qpair failed and we were unable to recover it. 00:28:45.721 [2024-07-25 10:17:30.819604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:45.721 [2024-07-25 10:17:30.819648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:45.721 qpair failed and we were unable to recover it. 
00:28:45.721 [2024-07-25 10:17:30.819847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:45.721 [2024-07-25 10:17:30.819889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420
00:28:45.721 qpair failed and we were unable to recover it.
00:28:46.006 [... the three-line error above repeats back-to-back roughly 210 times, with source timestamps running from 10:17:30.819847 to 10:17:30.866149 (console timestamps 00:28:45.721 to 00:28:46.006); every iteration is identical apart from the timestamps: connect() fails with errno = 111, nvme_tcp_qpair_connect_sock reports a sock connection error for tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420, and the qpair is not recovered ...]
00:28:46.006 [2024-07-25 10:17:30.866324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.006 [2024-07-25 10:17:30.866348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:46.006 qpair failed and we were unable to recover it. 00:28:46.006 [2024-07-25 10:17:30.866577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.006 [2024-07-25 10:17:30.866620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:46.006 qpair failed and we were unable to recover it. 00:28:46.006 [2024-07-25 10:17:30.866801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.006 [2024-07-25 10:17:30.866844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:46.006 qpair failed and we were unable to recover it. 00:28:46.006 [2024-07-25 10:17:30.867010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.006 [2024-07-25 10:17:30.867051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:46.006 qpair failed and we were unable to recover it. 00:28:46.006 [2024-07-25 10:17:30.867233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.006 [2024-07-25 10:17:30.867256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:46.006 qpair failed and we were unable to recover it. 00:28:46.006 [2024-07-25 10:17:30.867438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.006 [2024-07-25 10:17:30.867462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:46.006 qpair failed and we were unable to recover it. 00:28:46.006 [2024-07-25 10:17:30.867655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.006 [2024-07-25 10:17:30.867679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:46.006 qpair failed and we were unable to recover it. 00:28:46.006 [2024-07-25 10:17:30.867870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.006 [2024-07-25 10:17:30.867913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:46.006 qpair failed and we were unable to recover it. 00:28:46.006 [2024-07-25 10:17:30.868107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.006 [2024-07-25 10:17:30.868150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:46.006 qpair failed and we were unable to recover it. 00:28:46.006 [2024-07-25 10:17:30.868353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.006 [2024-07-25 10:17:30.868377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:46.006 qpair failed and we were unable to recover it. 
00:28:46.006 [2024-07-25 10:17:30.868548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.006 [2024-07-25 10:17:30.868578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:46.006 qpair failed and we were unable to recover it. 00:28:46.006 [2024-07-25 10:17:30.868775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.006 [2024-07-25 10:17:30.868817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:46.006 qpair failed and we were unable to recover it. 00:28:46.006 [2024-07-25 10:17:30.869004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.006 [2024-07-25 10:17:30.869047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:46.006 qpair failed and we were unable to recover it. 00:28:46.006 [2024-07-25 10:17:30.869206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.006 [2024-07-25 10:17:30.869250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:46.006 qpair failed and we were unable to recover it. 00:28:46.006 [2024-07-25 10:17:30.869382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.006 [2024-07-25 10:17:30.869421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:46.006 qpair failed and we were unable to recover it. 00:28:46.006 [2024-07-25 10:17:30.869606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.006 [2024-07-25 10:17:30.869649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:46.006 qpair failed and we were unable to recover it. 00:28:46.006 [2024-07-25 10:17:30.869816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.006 [2024-07-25 10:17:30.869859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:46.006 qpair failed and we were unable to recover it. 00:28:46.006 [2024-07-25 10:17:30.870022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.006 [2024-07-25 10:17:30.870066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:46.006 qpair failed and we were unable to recover it. 00:28:46.006 [2024-07-25 10:17:30.870255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.006 [2024-07-25 10:17:30.870278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:46.006 qpair failed and we were unable to recover it. 00:28:46.006 [2024-07-25 10:17:30.870459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.006 [2024-07-25 10:17:30.870502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:46.006 qpair failed and we were unable to recover it. 
00:28:46.006 [2024-07-25 10:17:30.870678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.006 [2024-07-25 10:17:30.870707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:46.006 qpair failed and we were unable to recover it. 00:28:46.006 [2024-07-25 10:17:30.870889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.006 [2024-07-25 10:17:30.870933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:46.006 qpair failed and we were unable to recover it. 00:28:46.007 [2024-07-25 10:17:30.871103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.007 [2024-07-25 10:17:30.871145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:46.007 qpair failed and we were unable to recover it. 00:28:46.007 [2024-07-25 10:17:30.871310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.007 [2024-07-25 10:17:30.871334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:46.007 qpair failed and we were unable to recover it. 00:28:46.007 [2024-07-25 10:17:30.871566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.007 [2024-07-25 10:17:30.871610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:46.007 qpair failed and we were unable to recover it. 00:28:46.007 [2024-07-25 10:17:30.871811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.007 [2024-07-25 10:17:30.871854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:46.007 qpair failed and we were unable to recover it. 00:28:46.007 [2024-07-25 10:17:30.872053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.007 [2024-07-25 10:17:30.872083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:46.007 qpair failed and we were unable to recover it. 00:28:46.007 [2024-07-25 10:17:30.872236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.007 [2024-07-25 10:17:30.872261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:46.007 qpair failed and we were unable to recover it. 00:28:46.007 [2024-07-25 10:17:30.872442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.007 [2024-07-25 10:17:30.872467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:46.007 qpair failed and we were unable to recover it. 00:28:46.007 [2024-07-25 10:17:30.872650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.007 [2024-07-25 10:17:30.872693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:46.007 qpair failed and we were unable to recover it. 
00:28:46.007 [2024-07-25 10:17:30.872865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.007 [2024-07-25 10:17:30.872907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:46.007 qpair failed and we were unable to recover it. 00:28:46.007 [2024-07-25 10:17:30.873094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.007 [2024-07-25 10:17:30.873136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:46.007 qpair failed and we were unable to recover it. 00:28:46.007 [2024-07-25 10:17:30.873305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.007 [2024-07-25 10:17:30.873329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:46.007 qpair failed and we were unable to recover it. 00:28:46.007 [2024-07-25 10:17:30.873533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.007 [2024-07-25 10:17:30.873576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:46.007 qpair failed and we were unable to recover it. 00:28:46.007 [2024-07-25 10:17:30.873755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.007 [2024-07-25 10:17:30.873807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:46.007 qpair failed and we were unable to recover it. 00:28:46.007 [2024-07-25 10:17:30.873960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.007 [2024-07-25 10:17:30.874003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:46.007 qpair failed and we were unable to recover it. 00:28:46.007 [2024-07-25 10:17:30.874172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.007 [2024-07-25 10:17:30.874215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:46.007 qpair failed and we were unable to recover it. 00:28:46.007 [2024-07-25 10:17:30.874372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.007 [2024-07-25 10:17:30.874396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:46.007 qpair failed and we were unable to recover it. 00:28:46.007 [2024-07-25 10:17:30.874625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.007 [2024-07-25 10:17:30.874672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:46.007 qpair failed and we were unable to recover it. 00:28:46.007 [2024-07-25 10:17:30.874884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.007 [2024-07-25 10:17:30.874925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:46.007 qpair failed and we were unable to recover it. 
00:28:46.007 [2024-07-25 10:17:30.875089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.007 [2024-07-25 10:17:30.875119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:46.007 qpair failed and we were unable to recover it. 00:28:46.007 [2024-07-25 10:17:30.875327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.007 [2024-07-25 10:17:30.875352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:46.007 qpair failed and we were unable to recover it. 00:28:46.007 [2024-07-25 10:17:30.875571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.007 [2024-07-25 10:17:30.875614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:46.007 qpair failed and we were unable to recover it. 00:28:46.007 [2024-07-25 10:17:30.875799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.007 [2024-07-25 10:17:30.875842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:46.007 qpair failed and we were unable to recover it. 00:28:46.007 [2024-07-25 10:17:30.876050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.007 [2024-07-25 10:17:30.876094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:46.007 qpair failed and we were unable to recover it. 00:28:46.007 [2024-07-25 10:17:30.876244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.007 [2024-07-25 10:17:30.876268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:46.007 qpair failed and we were unable to recover it. 00:28:46.007 [2024-07-25 10:17:30.876436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.007 [2024-07-25 10:17:30.876466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:46.007 qpair failed and we were unable to recover it. 00:28:46.007 [2024-07-25 10:17:30.876632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.007 [2024-07-25 10:17:30.876676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:46.007 qpair failed and we were unable to recover it. 00:28:46.007 [2024-07-25 10:17:30.876882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.007 [2024-07-25 10:17:30.876925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:46.007 qpair failed and we were unable to recover it. 00:28:46.007 [2024-07-25 10:17:30.877120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.007 [2024-07-25 10:17:30.877163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:46.007 qpair failed and we were unable to recover it. 
00:28:46.007 [2024-07-25 10:17:30.877377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.007 [2024-07-25 10:17:30.877401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:46.007 qpair failed and we were unable to recover it. 00:28:46.007 [2024-07-25 10:17:30.877656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.008 [2024-07-25 10:17:30.877704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:46.008 qpair failed and we were unable to recover it. 00:28:46.008 [2024-07-25 10:17:30.877892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.008 [2024-07-25 10:17:30.877935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:46.008 qpair failed and we were unable to recover it. 00:28:46.008 [2024-07-25 10:17:30.878123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.008 [2024-07-25 10:17:30.878165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:46.008 qpair failed and we were unable to recover it. 00:28:46.008 [2024-07-25 10:17:30.878315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.008 [2024-07-25 10:17:30.878339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:46.008 qpair failed and we were unable to recover it. 00:28:46.008 [2024-07-25 10:17:30.878533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.008 [2024-07-25 10:17:30.878563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:46.008 qpair failed and we were unable to recover it. 00:28:46.008 [2024-07-25 10:17:30.878749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.008 [2024-07-25 10:17:30.878793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:46.008 qpair failed and we were unable to recover it. 00:28:46.008 [2024-07-25 10:17:30.878979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.008 [2024-07-25 10:17:30.879021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:46.008 qpair failed and we were unable to recover it. 00:28:46.008 [2024-07-25 10:17:30.879186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.008 [2024-07-25 10:17:30.879211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:46.008 qpair failed and we were unable to recover it. 00:28:46.008 [2024-07-25 10:17:30.879409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.008 [2024-07-25 10:17:30.879451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:46.008 qpair failed and we were unable to recover it. 
00:28:46.008 [2024-07-25 10:17:30.879612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.008 [2024-07-25 10:17:30.879655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:46.008 qpair failed and we were unable to recover it. 00:28:46.008 [2024-07-25 10:17:30.879844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.008 [2024-07-25 10:17:30.879886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:46.008 qpair failed and we were unable to recover it. 00:28:46.008 [2024-07-25 10:17:30.880075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.008 [2024-07-25 10:17:30.880118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:46.008 qpair failed and we were unable to recover it. 00:28:46.008 [2024-07-25 10:17:30.880292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.008 [2024-07-25 10:17:30.880316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:46.008 qpair failed and we were unable to recover it. 00:28:46.008 [2024-07-25 10:17:30.880468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.008 [2024-07-25 10:17:30.880495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:46.008 qpair failed and we were unable to recover it. 00:28:46.008 [2024-07-25 10:17:30.880665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.008 [2024-07-25 10:17:30.880708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:46.008 qpair failed and we were unable to recover it. 00:28:46.008 [2024-07-25 10:17:30.880925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.008 [2024-07-25 10:17:30.880968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:46.008 qpair failed and we were unable to recover it. 00:28:46.008 [2024-07-25 10:17:30.881171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.008 [2024-07-25 10:17:30.881214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:46.008 qpair failed and we were unable to recover it. 00:28:46.008 [2024-07-25 10:17:30.881396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.008 [2024-07-25 10:17:30.881421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:46.008 qpair failed and we were unable to recover it. 00:28:46.008 [2024-07-25 10:17:30.881615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.008 [2024-07-25 10:17:30.881659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:46.008 qpair failed and we were unable to recover it. 
00:28:46.008 [2024-07-25 10:17:30.881864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.008 [2024-07-25 10:17:30.881906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:46.008 qpair failed and we were unable to recover it. 00:28:46.008 [2024-07-25 10:17:30.882061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.008 [2024-07-25 10:17:30.882104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:46.008 qpair failed and we were unable to recover it. 00:28:46.008 [2024-07-25 10:17:30.882307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.008 [2024-07-25 10:17:30.882331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:46.008 qpair failed and we were unable to recover it. 00:28:46.008 [2024-07-25 10:17:30.882535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.008 [2024-07-25 10:17:30.882566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:46.008 qpair failed and we were unable to recover it. 00:28:46.008 [2024-07-25 10:17:30.882728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.008 [2024-07-25 10:17:30.882776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:46.008 qpair failed and we were unable to recover it. 00:28:46.008 [2024-07-25 10:17:30.882961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.008 [2024-07-25 10:17:30.883005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:46.008 qpair failed and we were unable to recover it. 00:28:46.008 [2024-07-25 10:17:30.883188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.008 [2024-07-25 10:17:30.883231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:46.008 qpair failed and we were unable to recover it. 00:28:46.008 [2024-07-25 10:17:30.883406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.008 [2024-07-25 10:17:30.883451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:46.008 qpair failed and we were unable to recover it. 00:28:46.008 [2024-07-25 10:17:30.883674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.008 [2024-07-25 10:17:30.883722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:46.008 qpair failed and we were unable to recover it. 00:28:46.008 [2024-07-25 10:17:30.883869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.008 [2024-07-25 10:17:30.883898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:46.008 qpair failed and we were unable to recover it. 
00:28:46.008 [2024-07-25 10:17:30.884079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.008 [2024-07-25 10:17:30.884123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:46.008 qpair failed and we were unable to recover it. 00:28:46.008 [2024-07-25 10:17:30.884331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.008 [2024-07-25 10:17:30.884356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:46.008 qpair failed and we were unable to recover it. 00:28:46.008 [2024-07-25 10:17:30.884557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.008 [2024-07-25 10:17:30.884601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:46.008 qpair failed and we were unable to recover it. 00:28:46.008 [2024-07-25 10:17:30.884750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.008 [2024-07-25 10:17:30.884794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:46.008 qpair failed and we were unable to recover it. 00:28:46.008 [2024-07-25 10:17:30.884962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.008 [2024-07-25 10:17:30.885006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:46.008 qpair failed and we were unable to recover it. 00:28:46.008 [2024-07-25 10:17:30.885206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.008 [2024-07-25 10:17:30.885249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:46.008 qpair failed and we were unable to recover it. 00:28:46.008 [2024-07-25 10:17:30.885461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.008 [2024-07-25 10:17:30.885486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:46.008 qpair failed and we were unable to recover it. 00:28:46.008 [2024-07-25 10:17:30.885672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.008 [2024-07-25 10:17:30.885715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:46.008 qpair failed and we were unable to recover it. 00:28:46.009 [2024-07-25 10:17:30.885906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.009 [2024-07-25 10:17:30.885949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:46.009 qpair failed and we were unable to recover it. 00:28:46.009 [2024-07-25 10:17:30.886105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.009 [2024-07-25 10:17:30.886150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:46.009 qpair failed and we were unable to recover it. 
00:28:46.009 [2024-07-25 10:17:30.886344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.009 [2024-07-25 10:17:30.886368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:46.009 qpair failed and we were unable to recover it. 00:28:46.009 [2024-07-25 10:17:30.886560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.009 [2024-07-25 10:17:30.886605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:46.009 qpair failed and we were unable to recover it. 00:28:46.009 [2024-07-25 10:17:30.886762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.009 [2024-07-25 10:17:30.886805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:46.009 qpair failed and we were unable to recover it. 00:28:46.009 [2024-07-25 10:17:30.886996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.009 [2024-07-25 10:17:30.887037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:46.009 qpair failed and we were unable to recover it. 00:28:46.009 [2024-07-25 10:17:30.887222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.009 [2024-07-25 10:17:30.887246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:46.009 qpair failed and we were unable to recover it. 00:28:46.009 [2024-07-25 10:17:30.887403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.009 [2024-07-25 10:17:30.887433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:46.009 qpair failed and we were unable to recover it. 00:28:46.009 [2024-07-25 10:17:30.887616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.009 [2024-07-25 10:17:30.887658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:46.009 qpair failed and we were unable to recover it. 00:28:46.009 [2024-07-25 10:17:30.887856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.009 [2024-07-25 10:17:30.887897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:46.009 qpair failed and we were unable to recover it. 00:28:46.009 [2024-07-25 10:17:30.888088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.009 [2024-07-25 10:17:30.888132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:46.009 qpair failed and we were unable to recover it. 00:28:46.009 [2024-07-25 10:17:30.888324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.009 [2024-07-25 10:17:30.888349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:46.009 qpair failed and we were unable to recover it. 
00:28:46.009 [2024-07-25 10:17:30.888544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.009 [2024-07-25 10:17:30.888570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:46.009 qpair failed and we were unable to recover it. 00:28:46.009 [2024-07-25 10:17:30.888747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.009 [2024-07-25 10:17:30.888790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:46.009 qpair failed and we were unable to recover it. 00:28:46.009 [2024-07-25 10:17:30.889003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.009 [2024-07-25 10:17:30.889046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:46.009 qpair failed and we were unable to recover it. 00:28:46.009 [2024-07-25 10:17:30.889225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.009 [2024-07-25 10:17:30.889278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:46.009 qpair failed and we were unable to recover it. 00:28:46.009 [2024-07-25 10:17:30.889453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.009 [2024-07-25 10:17:30.889495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:46.009 qpair failed and we were unable to recover it. 00:28:46.009 [2024-07-25 10:17:30.889715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.009 [2024-07-25 10:17:30.889758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:46.009 qpair failed and we were unable to recover it. 00:28:46.009 [2024-07-25 10:17:30.889963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.009 [2024-07-25 10:17:30.890005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:46.009 qpair failed and we were unable to recover it. 00:28:46.009 [2024-07-25 10:17:30.890182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.009 [2024-07-25 10:17:30.890207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:46.009 qpair failed and we were unable to recover it. 00:28:46.009 [2024-07-25 10:17:30.890401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.009 [2024-07-25 10:17:30.890432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:46.009 qpair failed and we were unable to recover it. 00:28:46.009 [2024-07-25 10:17:30.890578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.009 [2024-07-25 10:17:30.890622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:46.009 qpair failed and we were unable to recover it. 
00:28:46.009 [2024-07-25 10:17:30.890820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.009 [2024-07-25 10:17:30.890848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:46.009 qpair failed and we were unable to recover it. 00:28:46.009 [2024-07-25 10:17:30.891018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.009 [2024-07-25 10:17:30.891061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:46.009 qpair failed and we were unable to recover it. 00:28:46.009 [2024-07-25 10:17:30.891203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.009 [2024-07-25 10:17:30.891230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:46.009 qpair failed and we were unable to recover it. 00:28:46.009 [2024-07-25 10:17:30.891402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.009 [2024-07-25 10:17:30.891448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:46.009 qpair failed and we were unable to recover it. 00:28:46.009 [2024-07-25 10:17:30.891654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.009 [2024-07-25 10:17:30.891698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:46.009 qpair failed and we were unable to recover it. 00:28:46.009 [2024-07-25 10:17:30.891917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.009 [2024-07-25 10:17:30.891946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:46.009 qpair failed and we were unable to recover it. 00:28:46.009 [2024-07-25 10:17:30.892159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.009 [2024-07-25 10:17:30.892201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:46.009 qpair failed and we were unable to recover it. 00:28:46.009 [2024-07-25 10:17:30.892417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.009 [2024-07-25 10:17:30.892462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:46.009 qpair failed and we were unable to recover it. 00:28:46.009 [2024-07-25 10:17:30.892688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.009 [2024-07-25 10:17:30.892731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:46.009 qpair failed and we were unable to recover it. 00:28:46.009 [2024-07-25 10:17:30.892898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.009 [2024-07-25 10:17:30.892934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:46.009 qpair failed and we were unable to recover it. 
00:28:46.009 [2024-07-25 10:17:30.893113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.009 [2024-07-25 10:17:30.893157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:46.009 qpair failed and we were unable to recover it. 00:28:46.009 [2024-07-25 10:17:30.893343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.009 [2024-07-25 10:17:30.893367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:46.009 qpair failed and we were unable to recover it. 00:28:46.009 [2024-07-25 10:17:30.893550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.009 [2024-07-25 10:17:30.893593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:46.009 qpair failed and we were unable to recover it. 00:28:46.009 [2024-07-25 10:17:30.893786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.009 [2024-07-25 10:17:30.893829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:46.009 qpair failed and we were unable to recover it. 00:28:46.009 [2024-07-25 10:17:30.894005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.009 [2024-07-25 10:17:30.894047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:46.009 qpair failed and we were unable to recover it. 00:28:46.009 [2024-07-25 10:17:30.894230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.010 [2024-07-25 10:17:30.894273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:46.010 qpair failed and we were unable to recover it. 00:28:46.010 [2024-07-25 10:17:30.894486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.010 [2024-07-25 10:17:30.894529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:46.010 qpair failed and we were unable to recover it. 00:28:46.010 [2024-07-25 10:17:30.894717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.010 [2024-07-25 10:17:30.894760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:46.010 qpair failed and we were unable to recover it. 00:28:46.010 [2024-07-25 10:17:30.894942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.010 [2024-07-25 10:17:30.894985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:46.010 qpair failed and we were unable to recover it. 00:28:46.010 [2024-07-25 10:17:30.895194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.010 [2024-07-25 10:17:30.895237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:46.010 qpair failed and we were unable to recover it. 
00:28:46.010 [2024-07-25 10:17:30.895443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.010 [2024-07-25 10:17:30.895469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420
00:28:46.010 qpair failed and we were unable to recover it.
[... the same three-line connect()/qpair-failure sequence repeats for every retry attempt from 10:17:30.895655 through 10:17:30.939970, all with errno = 111 against tqpair=0x7effa0000b90, addr=10.0.0.2, port=4420 ...]
00:28:46.016 [2024-07-25 10:17:30.940167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.016 [2024-07-25 10:17:30.940212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420
00:28:46.016 qpair failed and we were unable to recover it.
00:28:46.016 [2024-07-25 10:17:30.940366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.016 [2024-07-25 10:17:30.940393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:46.016 qpair failed and we were unable to recover it. 00:28:46.016 [2024-07-25 10:17:30.940537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.016 [2024-07-25 10:17:30.940580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:46.016 qpair failed and we were unable to recover it. 00:28:46.016 [2024-07-25 10:17:30.940743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.016 [2024-07-25 10:17:30.940785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:46.016 qpair failed and we were unable to recover it. 00:28:46.016 [2024-07-25 10:17:30.940945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.016 [2024-07-25 10:17:30.940988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:46.016 qpair failed and we were unable to recover it. 00:28:46.016 [2024-07-25 10:17:30.941139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.017 [2024-07-25 10:17:30.941182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:46.017 qpair failed and we were unable to recover it. 00:28:46.017 [2024-07-25 10:17:30.941338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.017 [2024-07-25 10:17:30.941369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:46.017 qpair failed and we were unable to recover it. 00:28:46.017 [2024-07-25 10:17:30.941534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.017 [2024-07-25 10:17:30.941579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:46.017 qpair failed and we were unable to recover it. 00:28:46.017 [2024-07-25 10:17:30.941751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.017 [2024-07-25 10:17:30.941795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:46.017 qpair failed and we were unable to recover it. 00:28:46.017 [2024-07-25 10:17:30.941960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.017 [2024-07-25 10:17:30.942003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:46.017 qpair failed and we were unable to recover it. 00:28:46.017 [2024-07-25 10:17:30.942166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.017 [2024-07-25 10:17:30.942193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:46.017 qpair failed and we were unable to recover it. 
00:28:46.017 [2024-07-25 10:17:30.942378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.017 [2024-07-25 10:17:30.942405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:46.017 qpair failed and we were unable to recover it. 00:28:46.017 [2024-07-25 10:17:30.942578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.017 [2024-07-25 10:17:30.942622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:46.017 qpair failed and we were unable to recover it. 00:28:46.017 [2024-07-25 10:17:30.942748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.017 [2024-07-25 10:17:30.942791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:46.017 qpair failed and we were unable to recover it. 00:28:46.017 [2024-07-25 10:17:30.942961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.017 [2024-07-25 10:17:30.943005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:46.017 qpair failed and we were unable to recover it. 00:28:46.017 [2024-07-25 10:17:30.943175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.017 [2024-07-25 10:17:30.943219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:46.017 qpair failed and we were unable to recover it. 00:28:46.017 [2024-07-25 10:17:30.943373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.017 [2024-07-25 10:17:30.943400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:46.017 qpair failed and we were unable to recover it. 00:28:46.017 [2024-07-25 10:17:30.943563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.017 [2024-07-25 10:17:30.943609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:46.017 qpair failed and we were unable to recover it. 00:28:46.017 [2024-07-25 10:17:30.943806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.017 [2024-07-25 10:17:30.943851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:46.017 qpair failed and we were unable to recover it. 00:28:46.017 [2024-07-25 10:17:30.943984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.017 [2024-07-25 10:17:30.944027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:46.017 qpair failed and we were unable to recover it. 00:28:46.017 [2024-07-25 10:17:30.944217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.017 [2024-07-25 10:17:30.944244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:46.017 qpair failed and we were unable to recover it. 
00:28:46.017 [2024-07-25 10:17:30.944439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.017 [2024-07-25 10:17:30.944466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:46.017 qpair failed and we were unable to recover it. 00:28:46.017 [2024-07-25 10:17:30.944625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.017 [2024-07-25 10:17:30.944670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:46.017 qpair failed and we were unable to recover it. 00:28:46.017 [2024-07-25 10:17:30.944837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.017 [2024-07-25 10:17:30.944881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:46.017 qpair failed and we were unable to recover it. 00:28:46.017 [2024-07-25 10:17:30.945072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.017 [2024-07-25 10:17:30.945115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:46.017 qpair failed and we were unable to recover it. 00:28:46.017 [2024-07-25 10:17:30.945295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.017 [2024-07-25 10:17:30.945322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:46.017 qpair failed and we were unable to recover it. 00:28:46.017 [2024-07-25 10:17:30.945481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.017 [2024-07-25 10:17:30.945511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:46.017 qpair failed and we were unable to recover it. 00:28:46.017 [2024-07-25 10:17:30.945661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.017 [2024-07-25 10:17:30.945708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:46.017 qpair failed and we were unable to recover it. 00:28:46.017 [2024-07-25 10:17:30.945895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.018 [2024-07-25 10:17:30.945938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:46.018 qpair failed and we were unable to recover it. 00:28:46.018 [2024-07-25 10:17:30.946078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.018 [2024-07-25 10:17:30.946122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:46.018 qpair failed and we were unable to recover it. 00:28:46.018 [2024-07-25 10:17:30.946295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.018 [2024-07-25 10:17:30.946322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:46.018 qpair failed and we were unable to recover it. 
00:28:46.018 [2024-07-25 10:17:30.946448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.018 [2024-07-25 10:17:30.946476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:46.018 qpair failed and we were unable to recover it. 00:28:46.018 [2024-07-25 10:17:30.946646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.018 [2024-07-25 10:17:30.946689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:46.018 qpair failed and we were unable to recover it. 00:28:46.018 [2024-07-25 10:17:30.946873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.018 [2024-07-25 10:17:30.946903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:46.018 qpair failed and we were unable to recover it. 00:28:46.018 [2024-07-25 10:17:30.947050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.018 [2024-07-25 10:17:30.947094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:46.018 qpair failed and we were unable to recover it. 00:28:46.018 [2024-07-25 10:17:30.947253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.018 [2024-07-25 10:17:30.947279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:46.018 qpair failed and we were unable to recover it. 00:28:46.018 [2024-07-25 10:17:30.947412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.018 [2024-07-25 10:17:30.947463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:46.018 qpair failed and we were unable to recover it. 00:28:46.018 [2024-07-25 10:17:30.947648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.018 [2024-07-25 10:17:30.947678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:46.018 qpair failed and we were unable to recover it. 00:28:46.018 [2024-07-25 10:17:30.947867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.018 [2024-07-25 10:17:30.947912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:46.018 qpair failed and we were unable to recover it. 00:28:46.018 [2024-07-25 10:17:30.948059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.018 [2024-07-25 10:17:30.948103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:46.018 qpair failed and we were unable to recover it. 00:28:46.018 [2024-07-25 10:17:30.948258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.018 [2024-07-25 10:17:30.948285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:46.018 qpair failed and we were unable to recover it. 
00:28:46.018 [2024-07-25 10:17:30.948481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.018 [2024-07-25 10:17:30.948525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:46.018 qpair failed and we were unable to recover it. 00:28:46.018 [2024-07-25 10:17:30.948676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.018 [2024-07-25 10:17:30.948705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:46.018 qpair failed and we were unable to recover it. 00:28:46.018 [2024-07-25 10:17:30.948856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.018 [2024-07-25 10:17:30.948899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:46.018 qpair failed and we were unable to recover it. 00:28:46.018 [2024-07-25 10:17:30.949057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.018 [2024-07-25 10:17:30.949084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:46.018 qpair failed and we were unable to recover it. 00:28:46.018 [2024-07-25 10:17:30.949252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.018 [2024-07-25 10:17:30.949279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:46.018 qpair failed and we were unable to recover it. 00:28:46.018 [2024-07-25 10:17:30.949441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.018 [2024-07-25 10:17:30.949475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:46.018 qpair failed and we were unable to recover it. 00:28:46.018 [2024-07-25 10:17:30.949680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.018 [2024-07-25 10:17:30.949709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:46.018 qpair failed and we were unable to recover it. 00:28:46.018 [2024-07-25 10:17:30.949912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.018 [2024-07-25 10:17:30.949956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:46.018 qpair failed and we were unable to recover it. 00:28:46.018 [2024-07-25 10:17:30.950114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.018 [2024-07-25 10:17:30.950141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:46.018 qpair failed and we were unable to recover it. 00:28:46.018 [2024-07-25 10:17:30.950300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.018 [2024-07-25 10:17:30.950328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:46.018 qpair failed and we were unable to recover it. 
00:28:46.018 [2024-07-25 10:17:30.950479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.018 [2024-07-25 10:17:30.950509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:46.018 qpair failed and we were unable to recover it. 00:28:46.019 [2024-07-25 10:17:30.950695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.019 [2024-07-25 10:17:30.950737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:46.019 qpair failed and we were unable to recover it. 00:28:46.019 [2024-07-25 10:17:30.951382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.019 [2024-07-25 10:17:30.951414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:46.019 qpair failed and we were unable to recover it. 00:28:46.019 [2024-07-25 10:17:30.951605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.019 [2024-07-25 10:17:30.951650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:46.019 qpair failed and we were unable to recover it. 00:28:46.019 [2024-07-25 10:17:30.951780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.019 [2024-07-25 10:17:30.951825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:46.019 qpair failed and we were unable to recover it. 00:28:46.019 [2024-07-25 10:17:30.951990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.019 [2024-07-25 10:17:30.952020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:46.019 qpair failed and we were unable to recover it. 00:28:46.019 [2024-07-25 10:17:30.952153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.019 [2024-07-25 10:17:30.952180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:46.019 qpair failed and we were unable to recover it. 00:28:46.019 [2024-07-25 10:17:30.952337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.019 [2024-07-25 10:17:30.952364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:46.019 qpair failed and we were unable to recover it. 00:28:46.019 [2024-07-25 10:17:30.952520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.019 [2024-07-25 10:17:30.952564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:46.019 qpair failed and we were unable to recover it. 00:28:46.019 [2024-07-25 10:17:30.952743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.019 [2024-07-25 10:17:30.952787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:46.019 qpair failed and we were unable to recover it. 
00:28:46.019 [2024-07-25 10:17:30.952953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.019 [2024-07-25 10:17:30.952997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:46.019 qpair failed and we were unable to recover it. 00:28:46.019 [2024-07-25 10:17:30.953160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.019 [2024-07-25 10:17:30.953187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:46.019 qpair failed and we were unable to recover it. 00:28:46.019 [2024-07-25 10:17:30.953313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.019 [2024-07-25 10:17:30.953341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:46.019 qpair failed and we were unable to recover it. 00:28:46.019 [2024-07-25 10:17:30.953499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.019 [2024-07-25 10:17:30.953544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:46.019 qpair failed and we were unable to recover it. 00:28:46.019 [2024-07-25 10:17:30.953700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.019 [2024-07-25 10:17:30.953743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:46.019 qpair failed and we were unable to recover it. 00:28:46.019 [2024-07-25 10:17:30.953903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.019 [2024-07-25 10:17:30.953947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:46.019 qpair failed and we were unable to recover it. 00:28:46.019 [2024-07-25 10:17:30.954078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.019 [2024-07-25 10:17:30.954105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:46.019 qpair failed and we were unable to recover it. 00:28:46.019 [2024-07-25 10:17:30.954306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.019 [2024-07-25 10:17:30.954333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:46.019 qpair failed and we were unable to recover it. 00:28:46.019 [2024-07-25 10:17:30.954540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.019 [2024-07-25 10:17:30.954568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:46.019 qpair failed and we were unable to recover it. 00:28:46.019 [2024-07-25 10:17:30.954737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.019 [2024-07-25 10:17:30.954781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:46.019 qpair failed and we were unable to recover it. 
00:28:46.019 [2024-07-25 10:17:30.954978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.019 [2024-07-25 10:17:30.955022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:46.019 qpair failed and we were unable to recover it. 00:28:46.019 [2024-07-25 10:17:30.955222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.019 [2024-07-25 10:17:30.955249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:46.019 qpair failed and we were unable to recover it. 00:28:46.019 [2024-07-25 10:17:30.955456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.019 [2024-07-25 10:17:30.955492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:46.019 qpair failed and we were unable to recover it. 00:28:46.019 [2024-07-25 10:17:30.955661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.019 [2024-07-25 10:17:30.955709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:46.019 qpair failed and we were unable to recover it. 00:28:46.019 [2024-07-25 10:17:30.955874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.019 [2024-07-25 10:17:30.955916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:46.019 qpair failed and we were unable to recover it. 00:28:46.019 [2024-07-25 10:17:30.956422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.019 [2024-07-25 10:17:30.956457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:46.019 qpair failed and we were unable to recover it. 00:28:46.019 [2024-07-25 10:17:30.956631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.019 [2024-07-25 10:17:30.956686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:46.019 qpair failed and we were unable to recover it. 00:28:46.019 [2024-07-25 10:17:30.956912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.020 [2024-07-25 10:17:30.956958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:46.020 qpair failed and we were unable to recover it. 00:28:46.020 [2024-07-25 10:17:30.957126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.020 [2024-07-25 10:17:30.957178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:46.020 qpair failed and we were unable to recover it. 00:28:46.020 [2024-07-25 10:17:30.957315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.020 [2024-07-25 10:17:30.957343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:46.020 qpair failed and we were unable to recover it. 
00:28:46.020 [2024-07-25 10:17:30.957541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.020 [2024-07-25 10:17:30.957586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:46.020 qpair failed and we were unable to recover it. 00:28:46.020 [2024-07-25 10:17:30.957789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.020 [2024-07-25 10:17:30.957831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:46.020 qpair failed and we were unable to recover it. 00:28:46.020 [2024-07-25 10:17:30.958033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.020 [2024-07-25 10:17:30.958077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:46.020 qpair failed and we were unable to recover it. 00:28:46.020 [2024-07-25 10:17:30.958240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.020 [2024-07-25 10:17:30.958266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:46.020 qpair failed and we were unable to recover it. 00:28:46.020 [2024-07-25 10:17:30.958474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.020 [2024-07-25 10:17:30.958502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:46.020 qpair failed and we were unable to recover it. 00:28:46.020 [2024-07-25 10:17:30.958648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.020 [2024-07-25 10:17:30.958696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:46.020 qpair failed and we were unable to recover it. 00:28:46.020 [2024-07-25 10:17:30.958842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.020 [2024-07-25 10:17:30.958885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:46.020 qpair failed and we were unable to recover it. 00:28:46.020 [2024-07-25 10:17:30.959085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.020 [2024-07-25 10:17:30.959130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:46.020 qpair failed and we were unable to recover it. 00:28:46.020 [2024-07-25 10:17:30.959324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.020 [2024-07-25 10:17:30.959351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:46.020 qpair failed and we were unable to recover it. 00:28:46.020 [2024-07-25 10:17:30.959540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.020 [2024-07-25 10:17:30.959584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:46.020 qpair failed and we were unable to recover it. 
00:28:46.020 [2024-07-25 10:17:30.959747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.020 [2024-07-25 10:17:30.959792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:46.020 qpair failed and we were unable to recover it. 00:28:46.020 [2024-07-25 10:17:30.959975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.020 [2024-07-25 10:17:30.960019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:46.020 qpair failed and we were unable to recover it. 00:28:46.020 [2024-07-25 10:17:30.960239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.020 [2024-07-25 10:17:30.960282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:46.020 qpair failed and we were unable to recover it. 00:28:46.020 [2024-07-25 10:17:30.960495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.020 [2024-07-25 10:17:30.960522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:46.020 qpair failed and we were unable to recover it. 00:28:46.020 [2024-07-25 10:17:30.960642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.020 [2024-07-25 10:17:30.960687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:46.020 qpair failed and we were unable to recover it. 00:28:46.020 [2024-07-25 10:17:30.960988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.020 [2024-07-25 10:17:30.961016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:46.020 qpair failed and we were unable to recover it. 00:28:46.020 [2024-07-25 10:17:30.961236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.020 [2024-07-25 10:17:30.961263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:46.020 qpair failed and we were unable to recover it. 00:28:46.020 [2024-07-25 10:17:30.961456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.020 [2024-07-25 10:17:30.961483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:46.020 qpair failed and we were unable to recover it. 00:28:46.020 [2024-07-25 10:17:30.961642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.020 [2024-07-25 10:17:30.961685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:46.020 qpair failed and we were unable to recover it. 00:28:46.020 [2024-07-25 10:17:30.961880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.020 [2024-07-25 10:17:30.961924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:46.020 qpair failed and we were unable to recover it. 
00:28:46.020 [2024-07-25 10:17:30.962121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.020 [2024-07-25 10:17:30.962164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:46.020 qpair failed and we were unable to recover it. 00:28:46.020 [2024-07-25 10:17:30.962344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.020 [2024-07-25 10:17:30.962370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:46.020 qpair failed and we were unable to recover it. 00:28:46.020 [2024-07-25 10:17:30.962523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.020 [2024-07-25 10:17:30.962567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:46.020 qpair failed and we were unable to recover it. 00:28:46.020 [2024-07-25 10:17:30.962760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.021 [2024-07-25 10:17:30.962803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:46.021 qpair failed and we were unable to recover it. 00:28:46.021 [2024-07-25 10:17:30.962961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.021 [2024-07-25 10:17:30.963005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:46.021 qpair failed and we were unable to recover it. 00:28:46.021 [2024-07-25 10:17:30.963213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.021 [2024-07-25 10:17:30.963257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:46.021 qpair failed and we were unable to recover it. 00:28:46.021 [2024-07-25 10:17:30.963448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.021 [2024-07-25 10:17:30.963486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:46.021 qpair failed and we were unable to recover it. 00:28:46.021 [2024-07-25 10:17:30.963622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.021 [2024-07-25 10:17:30.963649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:46.021 qpair failed and we were unable to recover it. 00:28:46.021 [2024-07-25 10:17:30.963834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.021 [2024-07-25 10:17:30.963878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:46.021 qpair failed and we were unable to recover it. 00:28:46.021 [2024-07-25 10:17:30.964087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.021 [2024-07-25 10:17:30.964117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:46.021 qpair failed and we were unable to recover it. 
00:28:46.021 [2024-07-25 10:17:30.964337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.021 [2024-07-25 10:17:30.964364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420
00:28:46.021 qpair failed and we were unable to recover it.
00:28:46.021 [2024-07-25 10:17:30.964762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.021 [2024-07-25 10:17:30.964808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420
00:28:46.021 qpair failed and we were unable to recover it.
[from this point the same failure triplet repeats roughly 55 more times through 10:17:30.977, now almost entirely against tqpair=0x18a3ea0 (a handful of attempts around 10:17:30.969-10:17:30.970 hit tqpair=0x7effa0000b90 again); every attempt targets addr=10.0.0.2, port=4420 and ends with "qpair failed and we were unable to recover it."]
00:28:46.023 [2024-07-25 10:17:30.977540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.023 [2024-07-25 10:17:30.977566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.023 qpair failed and we were unable to recover it. 00:28:46.023 [2024-07-25 10:17:30.977707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.023 [2024-07-25 10:17:30.977748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.023 qpair failed and we were unable to recover it. 00:28:46.023 [2024-07-25 10:17:30.977967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.023 [2024-07-25 10:17:30.977993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.023 qpair failed and we were unable to recover it. 00:28:46.023 [2024-07-25 10:17:30.978184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.023 [2024-07-25 10:17:30.978212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.023 qpair failed and we were unable to recover it. 00:28:46.023 [2024-07-25 10:17:30.978384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.023 [2024-07-25 10:17:30.978412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.023 qpair failed and we were unable to recover it. 00:28:46.023 [2024-07-25 10:17:30.978607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.023 [2024-07-25 10:17:30.978633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.023 qpair failed and we were unable to recover it. 00:28:46.023 [2024-07-25 10:17:30.978821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.023 [2024-07-25 10:17:30.978865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.023 qpair failed and we were unable to recover it. 00:28:46.023 [2024-07-25 10:17:30.979060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.023 [2024-07-25 10:17:30.979098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.023 qpair failed and we were unable to recover it. 00:28:46.023 [2024-07-25 10:17:30.979267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.023 [2024-07-25 10:17:30.979294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.023 qpair failed and we were unable to recover it. 00:28:46.023 [2024-07-25 10:17:30.979496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.023 [2024-07-25 10:17:30.979523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.023 qpair failed and we were unable to recover it. 
00:28:46.023 [2024-07-25 10:17:30.979660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.023 [2024-07-25 10:17:30.979684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.023 qpair failed and we were unable to recover it. 00:28:46.023 [2024-07-25 10:17:30.979897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.023 [2024-07-25 10:17:30.979923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.023 qpair failed and we were unable to recover it. 00:28:46.024 [2024-07-25 10:17:30.980108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.024 [2024-07-25 10:17:30.980137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.024 qpair failed and we were unable to recover it. 00:28:46.024 [2024-07-25 10:17:30.980319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.024 [2024-07-25 10:17:30.980348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.024 qpair failed and we were unable to recover it. 00:28:46.024 [2024-07-25 10:17:30.980552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.024 [2024-07-25 10:17:30.980578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.024 qpair failed and we were unable to recover it. 00:28:46.024 [2024-07-25 10:17:30.980730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.024 [2024-07-25 10:17:30.980758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.024 qpair failed and we were unable to recover it. 00:28:46.024 [2024-07-25 10:17:30.980973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.024 [2024-07-25 10:17:30.981002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.024 qpair failed and we were unable to recover it. 00:28:46.024 [2024-07-25 10:17:30.981195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.024 [2024-07-25 10:17:30.981256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.024 qpair failed and we were unable to recover it. 00:28:46.024 [2024-07-25 10:17:30.981485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.024 [2024-07-25 10:17:30.981511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.024 qpair failed and we were unable to recover it. 00:28:46.024 [2024-07-25 10:17:30.981637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.024 [2024-07-25 10:17:30.981663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.024 qpair failed and we were unable to recover it. 
00:28:46.024 [2024-07-25 10:17:30.981792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.024 [2024-07-25 10:17:30.981817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.024 qpair failed and we were unable to recover it. 00:28:46.024 [2024-07-25 10:17:30.981958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.024 [2024-07-25 10:17:30.981999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.024 qpair failed and we were unable to recover it. 00:28:46.024 [2024-07-25 10:17:30.982201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.024 [2024-07-25 10:17:30.982231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.024 qpair failed and we were unable to recover it. 00:28:46.024 [2024-07-25 10:17:30.982441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.024 [2024-07-25 10:17:30.982508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.024 qpair failed and we were unable to recover it. 00:28:46.024 [2024-07-25 10:17:30.982673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.024 [2024-07-25 10:17:30.982700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.024 qpair failed and we were unable to recover it. 00:28:46.024 [2024-07-25 10:17:30.982903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.024 [2024-07-25 10:17:30.982932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.024 qpair failed and we were unable to recover it. 00:28:46.024 [2024-07-25 10:17:30.983101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.024 [2024-07-25 10:17:30.983168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.024 qpair failed and we were unable to recover it. 00:28:46.024 [2024-07-25 10:17:30.983378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.024 [2024-07-25 10:17:30.983407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.024 qpair failed and we were unable to recover it. 00:28:46.024 [2024-07-25 10:17:30.983604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.024 [2024-07-25 10:17:30.983630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.024 qpair failed and we were unable to recover it. 00:28:46.024 [2024-07-25 10:17:30.983805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.024 [2024-07-25 10:17:30.983831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.024 qpair failed and we were unable to recover it. 
00:28:46.024 [2024-07-25 10:17:30.984029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.024 [2024-07-25 10:17:30.984058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.024 qpair failed and we were unable to recover it. 00:28:46.024 [2024-07-25 10:17:30.984185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.024 [2024-07-25 10:17:30.984212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.024 qpair failed and we were unable to recover it. 00:28:46.025 [2024-07-25 10:17:30.984381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.025 [2024-07-25 10:17:30.984409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.025 qpair failed and we were unable to recover it. 00:28:46.025 [2024-07-25 10:17:30.984626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.025 [2024-07-25 10:17:30.984652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.025 qpair failed and we were unable to recover it. 00:28:46.025 [2024-07-25 10:17:30.984888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.025 [2024-07-25 10:17:30.984916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.025 qpair failed and we were unable to recover it. 00:28:46.025 [2024-07-25 10:17:30.985137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.025 [2024-07-25 10:17:30.985202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.025 qpair failed and we were unable to recover it. 00:28:46.025 [2024-07-25 10:17:30.985387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.025 [2024-07-25 10:17:30.985416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.025 qpair failed and we were unable to recover it. 00:28:46.025 [2024-07-25 10:17:30.985603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.025 [2024-07-25 10:17:30.985629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.025 qpair failed and we were unable to recover it. 00:28:46.025 [2024-07-25 10:17:30.985832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.025 [2024-07-25 10:17:30.985857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.025 qpair failed and we were unable to recover it. 00:28:46.025 [2024-07-25 10:17:30.985999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.025 [2024-07-25 10:17:30.986028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.025 qpair failed and we were unable to recover it. 
00:28:46.025 [2024-07-25 10:17:30.986217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.025 [2024-07-25 10:17:30.986256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.025 qpair failed and we were unable to recover it. 00:28:46.025 [2024-07-25 10:17:30.986479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.025 [2024-07-25 10:17:30.986506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.025 qpair failed and we were unable to recover it. 00:28:46.025 [2024-07-25 10:17:30.986628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.025 [2024-07-25 10:17:30.986655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.025 qpair failed and we were unable to recover it. 00:28:46.025 [2024-07-25 10:17:30.986821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.025 [2024-07-25 10:17:30.986850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.025 qpair failed and we were unable to recover it. 00:28:46.025 [2024-07-25 10:17:30.987018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.025 [2024-07-25 10:17:30.987043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.025 qpair failed and we were unable to recover it. 00:28:46.025 [2024-07-25 10:17:30.987249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.025 [2024-07-25 10:17:30.987289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.025 qpair failed and we were unable to recover it. 00:28:46.025 [2024-07-25 10:17:30.987498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.025 [2024-07-25 10:17:30.987527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.025 qpair failed and we were unable to recover it. 00:28:46.025 [2024-07-25 10:17:30.987696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.025 [2024-07-25 10:17:30.987721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.025 qpair failed and we were unable to recover it. 00:28:46.025 [2024-07-25 10:17:30.987876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.025 [2024-07-25 10:17:30.987905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.025 qpair failed and we were unable to recover it. 00:28:46.025 [2024-07-25 10:17:30.988073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.025 [2024-07-25 10:17:30.988100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.025 qpair failed and we were unable to recover it. 
00:28:46.025 [2024-07-25 10:17:30.988294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.025 [2024-07-25 10:17:30.988320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.025 qpair failed and we were unable to recover it. 00:28:46.025 [2024-07-25 10:17:30.988515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.025 [2024-07-25 10:17:30.988544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.025 qpair failed and we were unable to recover it. 00:28:46.025 [2024-07-25 10:17:30.988709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.025 [2024-07-25 10:17:30.988737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.025 qpair failed and we were unable to recover it. 00:28:46.025 [2024-07-25 10:17:30.988886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.025 [2024-07-25 10:17:30.988912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.025 qpair failed and we were unable to recover it. 00:28:46.025 [2024-07-25 10:17:30.989094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.025 [2024-07-25 10:17:30.989122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.025 qpair failed and we were unable to recover it. 00:28:46.025 [2024-07-25 10:17:30.989328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.025 [2024-07-25 10:17:30.989356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.025 qpair failed and we were unable to recover it. 00:28:46.025 [2024-07-25 10:17:30.989547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.025 [2024-07-25 10:17:30.989573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.025 qpair failed and we were unable to recover it. 00:28:46.025 [2024-07-25 10:17:30.989747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.025 [2024-07-25 10:17:30.989775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.025 qpair failed and we were unable to recover it. 00:28:46.025 [2024-07-25 10:17:30.989950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.025 [2024-07-25 10:17:30.989978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.025 qpair failed and we were unable to recover it. 00:28:46.026 [2024-07-25 10:17:30.990188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.026 [2024-07-25 10:17:30.990225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.026 qpair failed and we were unable to recover it. 
00:28:46.026 [2024-07-25 10:17:30.990377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.026 [2024-07-25 10:17:30.990405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.026 qpair failed and we were unable to recover it. 00:28:46.026 [2024-07-25 10:17:30.990556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.026 [2024-07-25 10:17:30.990585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.026 qpair failed and we were unable to recover it. 00:28:46.026 [2024-07-25 10:17:30.990771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.026 [2024-07-25 10:17:30.990797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.026 qpair failed and we were unable to recover it. 00:28:46.026 [2024-07-25 10:17:30.991013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.026 [2024-07-25 10:17:30.991042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.026 qpair failed and we were unable to recover it. 00:28:46.026 [2024-07-25 10:17:30.991247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.026 [2024-07-25 10:17:30.991278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.026 qpair failed and we were unable to recover it. 00:28:46.026 [2024-07-25 10:17:30.991496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.026 [2024-07-25 10:17:30.991523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.026 qpair failed and we were unable to recover it. 00:28:46.026 [2024-07-25 10:17:30.991698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.026 [2024-07-25 10:17:30.991741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.026 qpair failed and we were unable to recover it. 00:28:46.026 [2024-07-25 10:17:30.991946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.026 [2024-07-25 10:17:30.991975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.026 qpair failed and we were unable to recover it. 00:28:46.026 [2024-07-25 10:17:30.992172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.026 [2024-07-25 10:17:30.992199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.026 qpair failed and we were unable to recover it. 00:28:46.026 [2024-07-25 10:17:30.992379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.026 [2024-07-25 10:17:30.992408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.026 qpair failed and we were unable to recover it. 
00:28:46.026 [2024-07-25 10:17:30.992581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.026 [2024-07-25 10:17:30.992606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.026 qpair failed and we were unable to recover it. 00:28:46.026 [2024-07-25 10:17:30.992801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.026 [2024-07-25 10:17:30.992827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.026 qpair failed and we were unable to recover it. 00:28:46.026 [2024-07-25 10:17:30.993031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.026 [2024-07-25 10:17:30.993059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.026 qpair failed and we were unable to recover it. 00:28:46.026 [2024-07-25 10:17:30.993229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.026 [2024-07-25 10:17:30.993258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.026 qpair failed and we were unable to recover it. 00:28:46.026 [2024-07-25 10:17:30.993435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.026 [2024-07-25 10:17:30.993472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.026 qpair failed and we were unable to recover it. 00:28:46.026 [2024-07-25 10:17:30.993610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.026 [2024-07-25 10:17:30.993639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.026 qpair failed and we were unable to recover it. 00:28:46.026 [2024-07-25 10:17:30.993781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.026 [2024-07-25 10:17:30.993808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.026 qpair failed and we were unable to recover it. 00:28:46.026 [2024-07-25 10:17:30.994021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.026 [2024-07-25 10:17:30.994048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.026 qpair failed and we were unable to recover it. 00:28:46.026 [2024-07-25 10:17:30.994259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.026 [2024-07-25 10:17:30.994288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.026 qpair failed and we were unable to recover it. 00:28:46.026 [2024-07-25 10:17:30.994479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.026 [2024-07-25 10:17:30.994508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.026 qpair failed and we were unable to recover it. 
00:28:46.026 [2024-07-25 10:17:30.994680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.026 [2024-07-25 10:17:30.994706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.026 qpair failed and we were unable to recover it. 00:28:46.026 [2024-07-25 10:17:30.994926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.026 [2024-07-25 10:17:30.994955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.026 qpair failed and we were unable to recover it. 00:28:46.026 [2024-07-25 10:17:30.995127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.026 [2024-07-25 10:17:30.995155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.026 qpair failed and we were unable to recover it. 00:28:46.026 [2024-07-25 10:17:30.995335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.026 [2024-07-25 10:17:30.995361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.026 qpair failed and we were unable to recover it. 00:28:46.026 [2024-07-25 10:17:30.995559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.026 [2024-07-25 10:17:30.995589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.026 qpair failed and we were unable to recover it. 00:28:46.026 [2024-07-25 10:17:30.995776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.027 [2024-07-25 10:17:30.995805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.027 qpair failed and we were unable to recover it. 00:28:46.027 [2024-07-25 10:17:30.996008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.027 [2024-07-25 10:17:30.996035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.027 qpair failed and we were unable to recover it. 00:28:46.027 [2024-07-25 10:17:30.996251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.027 [2024-07-25 10:17:30.996280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.027 qpair failed and we were unable to recover it. 00:28:46.027 [2024-07-25 10:17:30.996485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.027 [2024-07-25 10:17:30.996515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.027 qpair failed and we were unable to recover it. 00:28:46.027 [2024-07-25 10:17:30.996719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.027 [2024-07-25 10:17:30.996745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.027 qpair failed and we were unable to recover it. 
00:28:46.027 [2024-07-25 10:17:30.996919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.027 [2024-07-25 10:17:30.996948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.027 qpair failed and we were unable to recover it. 00:28:46.027 [2024-07-25 10:17:30.997130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.027 [2024-07-25 10:17:30.997162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.027 qpair failed and we were unable to recover it. 00:28:46.027 [2024-07-25 10:17:30.997365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.027 [2024-07-25 10:17:30.997391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.027 qpair failed and we were unable to recover it. 00:28:46.027 [2024-07-25 10:17:30.997539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.027 [2024-07-25 10:17:30.997564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.027 qpair failed and we were unable to recover it. 00:28:46.027 [2024-07-25 10:17:30.997761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.027 [2024-07-25 10:17:30.997789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.027 qpair failed and we were unable to recover it. 00:28:46.027 [2024-07-25 10:17:30.997982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.027 [2024-07-25 10:17:30.998008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.027 qpair failed and we were unable to recover it. 00:28:46.027 [2024-07-25 10:17:30.998175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.027 [2024-07-25 10:17:30.998204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.027 qpair failed and we were unable to recover it. 00:28:46.027 [2024-07-25 10:17:30.998391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.027 [2024-07-25 10:17:30.998418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.027 qpair failed and we were unable to recover it. 00:28:46.027 [2024-07-25 10:17:30.998619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.027 [2024-07-25 10:17:30.998645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.027 qpair failed and we were unable to recover it. 00:28:46.027 [2024-07-25 10:17:30.998799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.027 [2024-07-25 10:17:30.998828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.027 qpair failed and we were unable to recover it. 
00:28:46.027 [2024-07-25 10:17:30.998974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.027 [2024-07-25 10:17:30.999001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.027 qpair failed and we were unable to recover it. 00:28:46.027 [2024-07-25 10:17:30.999205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.027 [2024-07-25 10:17:30.999231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.027 qpair failed and we were unable to recover it. 00:28:46.027 [2024-07-25 10:17:30.999404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.027 [2024-07-25 10:17:30.999440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.027 qpair failed and we were unable to recover it. 00:28:46.027 [2024-07-25 10:17:30.999586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.027 [2024-07-25 10:17:30.999615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.027 qpair failed and we were unable to recover it. 00:28:46.027 [2024-07-25 10:17:30.999826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.027 [2024-07-25 10:17:30.999851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.027 qpair failed and we were unable to recover it. 00:28:46.027 [2024-07-25 10:17:31.000057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.027 [2024-07-25 10:17:31.000085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.027 qpair failed and we were unable to recover it. 00:28:46.027 [2024-07-25 10:17:31.000296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.027 [2024-07-25 10:17:31.000325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.027 qpair failed and we were unable to recover it. 00:28:46.027 [2024-07-25 10:17:31.000463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.027 [2024-07-25 10:17:31.000488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.027 qpair failed and we were unable to recover it. 00:28:46.027 [2024-07-25 10:17:31.000678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.027 [2024-07-25 10:17:31.000706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.027 qpair failed and we were unable to recover it. 00:28:46.027 [2024-07-25 10:17:31.000851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.027 [2024-07-25 10:17:31.000880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.027 qpair failed and we were unable to recover it. 
00:28:46.027 [2024-07-25 10:17:31.001078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.027 [2024-07-25 10:17:31.001103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.027 qpair failed and we were unable to recover it. 00:28:46.027 [2024-07-25 10:17:31.001297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.027 [2024-07-25 10:17:31.001326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.028 qpair failed and we were unable to recover it. 00:28:46.028 [2024-07-25 10:17:31.001472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.028 [2024-07-25 10:17:31.001501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.028 qpair failed and we were unable to recover it. 00:28:46.028 [2024-07-25 10:17:31.001680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.028 [2024-07-25 10:17:31.001705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.028 qpair failed and we were unable to recover it. 00:28:46.028 [2024-07-25 10:17:31.001866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.028 [2024-07-25 10:17:31.001895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.028 qpair failed and we were unable to recover it. 00:28:46.028 [2024-07-25 10:17:31.002091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.028 [2024-07-25 10:17:31.002118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.028 qpair failed and we were unable to recover it. 00:28:46.028 [2024-07-25 10:17:31.002333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.028 [2024-07-25 10:17:31.002361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.028 qpair failed and we were unable to recover it. 00:28:46.028 [2024-07-25 10:17:31.002540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.028 [2024-07-25 10:17:31.002566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.028 qpair failed and we were unable to recover it. 00:28:46.028 [2024-07-25 10:17:31.002785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.028 [2024-07-25 10:17:31.002817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.028 qpair failed and we were unable to recover it. 00:28:46.028 [2024-07-25 10:17:31.002979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.028 [2024-07-25 10:17:31.003004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.028 qpair failed and we were unable to recover it. 
00:28:46.028 [2024-07-25 10:17:31.003172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.028 [2024-07-25 10:17:31.003201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.028 qpair failed and we were unable to recover it. 00:28:46.028 [2024-07-25 10:17:31.003388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.028 [2024-07-25 10:17:31.003415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.028 qpair failed and we were unable to recover it. 00:28:46.028 [2024-07-25 10:17:31.003605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.028 [2024-07-25 10:17:31.003630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.028 qpair failed and we were unable to recover it. 00:28:46.028 [2024-07-25 10:17:31.003821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.028 [2024-07-25 10:17:31.003850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.028 qpair failed and we were unable to recover it. 00:28:46.028 [2024-07-25 10:17:31.004026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.028 [2024-07-25 10:17:31.004055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.028 qpair failed and we were unable to recover it. 00:28:46.028 [2024-07-25 10:17:31.004216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.028 [2024-07-25 10:17:31.004241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.028 qpair failed and we were unable to recover it. 00:28:46.028 [2024-07-25 10:17:31.004441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.028 [2024-07-25 10:17:31.004470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.028 qpair failed and we were unable to recover it. 00:28:46.028 [2024-07-25 10:17:31.004667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.028 [2024-07-25 10:17:31.004696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.028 qpair failed and we were unable to recover it. 00:28:46.028 [2024-07-25 10:17:31.004887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.028 [2024-07-25 10:17:31.004913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.028 qpair failed and we were unable to recover it. 00:28:46.028 [2024-07-25 10:17:31.005119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.028 [2024-07-25 10:17:31.005147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.028 qpair failed and we were unable to recover it. 
00:28:46.028 [2024-07-25 10:17:31.005277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.028 [2024-07-25 10:17:31.005306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420
00:28:46.028 qpair failed and we were unable to recover it.
[... the same three-line sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats continuously from 10:17:31.005483 through 10:17:31.050946 ...]
00:28:46.036 [2024-07-25 10:17:31.051161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.036 [2024-07-25 10:17:31.051190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420
00:28:46.036 qpair failed and we were unable to recover it.
00:28:46.036 [2024-07-25 10:17:31.051390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.036 [2024-07-25 10:17:31.051419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.036 qpair failed and we were unable to recover it. 00:28:46.036 [2024-07-25 10:17:31.051630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.036 [2024-07-25 10:17:31.051654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.036 qpair failed and we were unable to recover it. 00:28:46.036 [2024-07-25 10:17:31.051816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.036 [2024-07-25 10:17:31.051845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.036 qpair failed and we were unable to recover it. 00:28:46.036 [2024-07-25 10:17:31.052047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.036 [2024-07-25 10:17:31.052076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.036 qpair failed and we were unable to recover it. 00:28:46.037 [2024-07-25 10:17:31.052306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.037 [2024-07-25 10:17:31.052329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.037 qpair failed and we were unable to recover it. 00:28:46.037 [2024-07-25 10:17:31.052512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.037 [2024-07-25 10:17:31.052541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.037 qpair failed and we were unable to recover it. 00:28:46.037 [2024-07-25 10:17:31.052726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.037 [2024-07-25 10:17:31.052755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.037 qpair failed and we were unable to recover it. 00:28:46.037 [2024-07-25 10:17:31.052939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.037 [2024-07-25 10:17:31.052962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.037 qpair failed and we were unable to recover it. 00:28:46.037 [2024-07-25 10:17:31.053140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.037 [2024-07-25 10:17:31.053169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.037 qpair failed and we were unable to recover it. 00:28:46.037 [2024-07-25 10:17:31.053352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.037 [2024-07-25 10:17:31.053381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.037 qpair failed and we were unable to recover it. 
00:28:46.037 [2024-07-25 10:17:31.053533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.037 [2024-07-25 10:17:31.053571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.037 qpair failed and we were unable to recover it. 00:28:46.037 [2024-07-25 10:17:31.053698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.037 [2024-07-25 10:17:31.053738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.037 qpair failed and we were unable to recover it. 00:28:46.037 [2024-07-25 10:17:31.053940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.037 [2024-07-25 10:17:31.053969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.037 qpair failed and we were unable to recover it. 00:28:46.037 [2024-07-25 10:17:31.054118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.037 [2024-07-25 10:17:31.054155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.037 qpair failed and we were unable to recover it. 00:28:46.037 [2024-07-25 10:17:31.054371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.037 [2024-07-25 10:17:31.054400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.037 qpair failed and we were unable to recover it. 00:28:46.037 [2024-07-25 10:17:31.054634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.037 [2024-07-25 10:17:31.054659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.037 qpair failed and we were unable to recover it. 00:28:46.037 [2024-07-25 10:17:31.054795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.037 [2024-07-25 10:17:31.054818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.037 qpair failed and we were unable to recover it. 00:28:46.037 [2024-07-25 10:17:31.054993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.037 [2024-07-25 10:17:31.055033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.037 qpair failed and we were unable to recover it. 00:28:46.037 [2024-07-25 10:17:31.055233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.037 [2024-07-25 10:17:31.055262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.037 qpair failed and we were unable to recover it. 00:28:46.037 [2024-07-25 10:17:31.055415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.037 [2024-07-25 10:17:31.055445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.037 qpair failed and we were unable to recover it. 
00:28:46.037 [2024-07-25 10:17:31.055627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.037 [2024-07-25 10:17:31.055656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.037 qpair failed and we were unable to recover it. 00:28:46.037 [2024-07-25 10:17:31.055836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.037 [2024-07-25 10:17:31.055864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.037 qpair failed and we were unable to recover it. 00:28:46.037 [2024-07-25 10:17:31.056069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.037 [2024-07-25 10:17:31.056095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.037 qpair failed and we were unable to recover it. 00:28:46.037 [2024-07-25 10:17:31.056320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.037 [2024-07-25 10:17:31.056350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.037 qpair failed and we were unable to recover it. 00:28:46.037 [2024-07-25 10:17:31.056557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.037 [2024-07-25 10:17:31.056586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.037 qpair failed and we were unable to recover it. 00:28:46.037 [2024-07-25 10:17:31.056756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.037 [2024-07-25 10:17:31.056779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.037 qpair failed and we were unable to recover it. 00:28:46.037 [2024-07-25 10:17:31.057001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.037 [2024-07-25 10:17:31.057031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.037 qpair failed and we were unable to recover it. 00:28:46.037 [2024-07-25 10:17:31.057203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.037 [2024-07-25 10:17:31.057232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.037 qpair failed and we were unable to recover it. 00:28:46.037 [2024-07-25 10:17:31.057460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.037 [2024-07-25 10:17:31.057484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.037 qpair failed and we were unable to recover it. 00:28:46.038 [2024-07-25 10:17:31.057668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.038 [2024-07-25 10:17:31.057696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.038 qpair failed and we were unable to recover it. 
00:28:46.038 [2024-07-25 10:17:31.057900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.038 [2024-07-25 10:17:31.057929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.038 qpair failed and we were unable to recover it. 00:28:46.038 [2024-07-25 10:17:31.058127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.038 [2024-07-25 10:17:31.058149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.038 qpair failed and we were unable to recover it. 00:28:46.038 [2024-07-25 10:17:31.058330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.038 [2024-07-25 10:17:31.058359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.038 qpair failed and we were unable to recover it. 00:28:46.038 [2024-07-25 10:17:31.058544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.038 [2024-07-25 10:17:31.058573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.038 qpair failed and we were unable to recover it. 00:28:46.038 [2024-07-25 10:17:31.058760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.038 [2024-07-25 10:17:31.058784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.038 qpair failed and we were unable to recover it. 00:28:46.038 [2024-07-25 10:17:31.058996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.038 [2024-07-25 10:17:31.059025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.038 qpair failed and we were unable to recover it. 00:28:46.038 [2024-07-25 10:17:31.059237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.038 [2024-07-25 10:17:31.059265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.038 qpair failed and we were unable to recover it. 00:28:46.038 [2024-07-25 10:17:31.059443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.038 [2024-07-25 10:17:31.059480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.038 qpair failed and we were unable to recover it. 00:28:46.038 [2024-07-25 10:17:31.059651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.038 [2024-07-25 10:17:31.059680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.038 qpair failed and we were unable to recover it. 00:28:46.038 [2024-07-25 10:17:31.059893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.038 [2024-07-25 10:17:31.059922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.038 qpair failed and we were unable to recover it. 
00:28:46.038 [2024-07-25 10:17:31.060094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.038 [2024-07-25 10:17:31.060117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.038 qpair failed and we were unable to recover it. 00:28:46.038 [2024-07-25 10:17:31.060306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.038 [2024-07-25 10:17:31.060335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.038 qpair failed and we were unable to recover it. 00:28:46.038 [2024-07-25 10:17:31.060508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.038 [2024-07-25 10:17:31.060532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.038 qpair failed and we were unable to recover it. 00:28:46.038 [2024-07-25 10:17:31.060724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.038 [2024-07-25 10:17:31.060748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.038 qpair failed and we were unable to recover it. 00:28:46.038 [2024-07-25 10:17:31.060969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.038 [2024-07-25 10:17:31.060998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.038 qpair failed and we were unable to recover it. 00:28:46.038 [2024-07-25 10:17:31.061210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.038 [2024-07-25 10:17:31.061239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.038 qpair failed and we were unable to recover it. 00:28:46.038 [2024-07-25 10:17:31.061425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.038 [2024-07-25 10:17:31.061469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.038 qpair failed and we were unable to recover it. 00:28:46.038 [2024-07-25 10:17:31.061666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.038 [2024-07-25 10:17:31.061694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.038 qpair failed and we were unable to recover it. 00:28:46.038 [2024-07-25 10:17:31.061895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.038 [2024-07-25 10:17:31.061924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.038 qpair failed and we were unable to recover it. 00:28:46.038 [2024-07-25 10:17:31.062115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.038 [2024-07-25 10:17:31.062138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.038 qpair failed and we were unable to recover it. 
00:28:46.038 [2024-07-25 10:17:31.062330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.038 [2024-07-25 10:17:31.062361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.039 qpair failed and we were unable to recover it. 00:28:46.039 [2024-07-25 10:17:31.062546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.039 [2024-07-25 10:17:31.062575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.039 qpair failed and we were unable to recover it. 00:28:46.039 [2024-07-25 10:17:31.062755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.039 [2024-07-25 10:17:31.062779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.039 qpair failed and we were unable to recover it. 00:28:46.039 [2024-07-25 10:17:31.062983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.039 [2024-07-25 10:17:31.063012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.039 qpair failed and we were unable to recover it. 00:28:46.039 [2024-07-25 10:17:31.063212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.039 [2024-07-25 10:17:31.063240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.039 qpair failed and we were unable to recover it. 00:28:46.039 [2024-07-25 10:17:31.063438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.039 [2024-07-25 10:17:31.063476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.039 qpair failed and we were unable to recover it. 00:28:46.039 [2024-07-25 10:17:31.063662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.039 [2024-07-25 10:17:31.063692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.039 qpair failed and we were unable to recover it. 00:28:46.039 [2024-07-25 10:17:31.063866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.039 [2024-07-25 10:17:31.063895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.039 qpair failed and we were unable to recover it. 00:28:46.039 [2024-07-25 10:17:31.064075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.039 [2024-07-25 10:17:31.064098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.039 qpair failed and we were unable to recover it. 00:28:46.039 [2024-07-25 10:17:31.064319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.039 [2024-07-25 10:17:31.064348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.039 qpair failed and we were unable to recover it. 
00:28:46.039 [2024-07-25 10:17:31.064563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.039 [2024-07-25 10:17:31.064593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.039 qpair failed and we were unable to recover it. 00:28:46.039 [2024-07-25 10:17:31.064765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.039 [2024-07-25 10:17:31.064813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.039 qpair failed and we were unable to recover it. 00:28:46.039 [2024-07-25 10:17:31.064998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.039 [2024-07-25 10:17:31.065027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.039 qpair failed and we were unable to recover it. 00:28:46.039 [2024-07-25 10:17:31.065237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.039 [2024-07-25 10:17:31.065266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.039 qpair failed and we were unable to recover it. 00:28:46.039 [2024-07-25 10:17:31.065497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.039 [2024-07-25 10:17:31.065522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.039 qpair failed and we were unable to recover it. 00:28:46.039 [2024-07-25 10:17:31.065697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.039 [2024-07-25 10:17:31.065726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.039 qpair failed and we were unable to recover it. 00:28:46.039 [2024-07-25 10:17:31.065898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.039 [2024-07-25 10:17:31.065927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.039 qpair failed and we were unable to recover it. 00:28:46.039 [2024-07-25 10:17:31.066108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.039 [2024-07-25 10:17:31.066131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.039 qpair failed and we were unable to recover it. 00:28:46.039 [2024-07-25 10:17:31.066288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.039 [2024-07-25 10:17:31.066317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.039 qpair failed and we were unable to recover it. 00:28:46.039 [2024-07-25 10:17:31.066497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.039 [2024-07-25 10:17:31.066522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.039 qpair failed and we were unable to recover it. 
00:28:46.039 [2024-07-25 10:17:31.066641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.039 [2024-07-25 10:17:31.066680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.039 qpair failed and we were unable to recover it. 00:28:46.039 [2024-07-25 10:17:31.066845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.039 [2024-07-25 10:17:31.066885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.039 qpair failed and we were unable to recover it. 00:28:46.039 [2024-07-25 10:17:31.067099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.039 [2024-07-25 10:17:31.067128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.039 qpair failed and we were unable to recover it. 00:28:46.039 [2024-07-25 10:17:31.067285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.039 [2024-07-25 10:17:31.067308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.039 qpair failed and we were unable to recover it. 00:28:46.039 [2024-07-25 10:17:31.067483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.039 [2024-07-25 10:17:31.067507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.039 qpair failed and we were unable to recover it. 00:28:46.039 [2024-07-25 10:17:31.067724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.039 [2024-07-25 10:17:31.067753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.039 qpair failed and we were unable to recover it. 00:28:46.039 [2024-07-25 10:17:31.067926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.039 [2024-07-25 10:17:31.067949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.039 qpair failed and we were unable to recover it. 00:28:46.040 [2024-07-25 10:17:31.068120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.040 [2024-07-25 10:17:31.068149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.040 qpair failed and we were unable to recover it. 00:28:46.040 [2024-07-25 10:17:31.068352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.040 [2024-07-25 10:17:31.068380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.040 qpair failed and we were unable to recover it. 00:28:46.040 [2024-07-25 10:17:31.068603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.040 [2024-07-25 10:17:31.068628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.040 qpair failed and we were unable to recover it. 
00:28:46.040 [2024-07-25 10:17:31.068799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.040 [2024-07-25 10:17:31.068827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.040 qpair failed and we were unable to recover it. 00:28:46.040 [2024-07-25 10:17:31.068987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.040 [2024-07-25 10:17:31.069016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.040 qpair failed and we were unable to recover it. 00:28:46.040 [2024-07-25 10:17:31.069197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.040 [2024-07-25 10:17:31.069220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.040 qpair failed and we were unable to recover it. 00:28:46.040 [2024-07-25 10:17:31.069401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.040 [2024-07-25 10:17:31.069438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.040 qpair failed and we were unable to recover it. 00:28:46.040 [2024-07-25 10:17:31.069602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.040 [2024-07-25 10:17:31.069629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.040 qpair failed and we were unable to recover it. 00:28:46.040 [2024-07-25 10:17:31.069839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.040 [2024-07-25 10:17:31.069862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.040 qpair failed and we were unable to recover it. 00:28:46.040 [2024-07-25 10:17:31.070074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.040 [2024-07-25 10:17:31.070101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.040 qpair failed and we were unable to recover it. 00:28:46.040 [2024-07-25 10:17:31.070303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.040 [2024-07-25 10:17:31.070331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.040 qpair failed and we were unable to recover it. 00:28:46.040 [2024-07-25 10:17:31.070507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.040 [2024-07-25 10:17:31.070531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.040 qpair failed and we were unable to recover it. 00:28:46.040 [2024-07-25 10:17:31.070697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.040 [2024-07-25 10:17:31.070726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.040 qpair failed and we were unable to recover it. 
00:28:46.040 [2024-07-25 10:17:31.070874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.040 [2024-07-25 10:17:31.070909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.040 qpair failed and we were unable to recover it. 00:28:46.040 [2024-07-25 10:17:31.071108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.040 [2024-07-25 10:17:31.071131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.040 qpair failed and we were unable to recover it. 00:28:46.040 [2024-07-25 10:17:31.071345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.040 [2024-07-25 10:17:31.071374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.040 qpair failed and we were unable to recover it. 00:28:46.040 [2024-07-25 10:17:31.071560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.040 [2024-07-25 10:17:31.071589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.040 qpair failed and we were unable to recover it. 00:28:46.040 [2024-07-25 10:17:31.071773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.040 [2024-07-25 10:17:31.071796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.040 qpair failed and we were unable to recover it. 00:28:46.040 [2024-07-25 10:17:31.072003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.040 [2024-07-25 10:17:31.072031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.040 qpair failed and we were unable to recover it. 00:28:46.040 [2024-07-25 10:17:31.072232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.040 [2024-07-25 10:17:31.072261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.040 qpair failed and we were unable to recover it. 00:28:46.040 [2024-07-25 10:17:31.072461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.040 [2024-07-25 10:17:31.072501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.040 qpair failed and we were unable to recover it. 00:28:46.040 [2024-07-25 10:17:31.072690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.040 [2024-07-25 10:17:31.072731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.040 qpair failed and we were unable to recover it. 00:28:46.040 [2024-07-25 10:17:31.072909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.040 [2024-07-25 10:17:31.072938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.040 qpair failed and we were unable to recover it. 
00:28:46.040 [2024-07-25 10:17:31.073153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.040 [2024-07-25 10:17:31.073176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.040 qpair failed and we were unable to recover it. 00:28:46.040 [2024-07-25 10:17:31.073352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.040 [2024-07-25 10:17:31.073381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.040 qpair failed and we were unable to recover it. 00:28:46.040 [2024-07-25 10:17:31.073586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.040 [2024-07-25 10:17:31.073609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.040 qpair failed and we were unable to recover it. 00:28:46.040 [2024-07-25 10:17:31.073812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.041 [2024-07-25 10:17:31.073836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.041 qpair failed and we were unable to recover it. 00:28:46.041 [2024-07-25 10:17:31.074027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.041 [2024-07-25 10:17:31.074056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.041 qpair failed and we were unable to recover it. 00:28:46.041 [2024-07-25 10:17:31.074236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.041 [2024-07-25 10:17:31.074265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.041 qpair failed and we were unable to recover it. 00:28:46.041 [2024-07-25 10:17:31.074484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.041 [2024-07-25 10:17:31.074509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.041 qpair failed and we were unable to recover it. 00:28:46.041 [2024-07-25 10:17:31.074681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.041 [2024-07-25 10:17:31.074709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.041 qpair failed and we were unable to recover it. 00:28:46.041 [2024-07-25 10:17:31.074896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.041 [2024-07-25 10:17:31.074925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.041 qpair failed and we were unable to recover it. 00:28:46.041 [2024-07-25 10:17:31.075088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.041 [2024-07-25 10:17:31.075112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.041 qpair failed and we were unable to recover it. 
00:28:46.041 [2024-07-25 10:17:31.075331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.041 [2024-07-25 10:17:31.075359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.041 qpair failed and we were unable to recover it. 00:28:46.041 [2024-07-25 10:17:31.075571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.041 [2024-07-25 10:17:31.075601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.041 qpair failed and we were unable to recover it. 00:28:46.041 [2024-07-25 10:17:31.075778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.041 [2024-07-25 10:17:31.075802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.041 qpair failed and we were unable to recover it. 00:28:46.041 [2024-07-25 10:17:31.076019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.041 [2024-07-25 10:17:31.076049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.041 qpair failed and we were unable to recover it. 00:28:46.041 [2024-07-25 10:17:31.076259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.041 [2024-07-25 10:17:31.076288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.041 qpair failed and we were unable to recover it. 00:28:46.041 [2024-07-25 10:17:31.076485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.041 [2024-07-25 10:17:31.076509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.041 qpair failed and we were unable to recover it. 00:28:46.041 [2024-07-25 10:17:31.076690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.041 [2024-07-25 10:17:31.076719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.041 qpair failed and we were unable to recover it. 00:28:46.041 [2024-07-25 10:17:31.076936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.041 [2024-07-25 10:17:31.076969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.041 qpair failed and we were unable to recover it. 00:28:46.041 [2024-07-25 10:17:31.077180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.041 [2024-07-25 10:17:31.077203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.041 qpair failed and we were unable to recover it. 00:28:46.041 [2024-07-25 10:17:31.077406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.041 [2024-07-25 10:17:31.077440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.041 qpair failed and we were unable to recover it. 
00:28:46.041 [2024-07-25 10:17:31.077659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.041 [2024-07-25 10:17:31.077687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.041 qpair failed and we were unable to recover it. 00:28:46.041 [2024-07-25 10:17:31.077856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.041 [2024-07-25 10:17:31.077879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.041 qpair failed and we were unable to recover it. 00:28:46.041 [2024-07-25 10:17:31.078095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.041 [2024-07-25 10:17:31.078124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.041 qpair failed and we were unable to recover it. 00:28:46.041 [2024-07-25 10:17:31.078315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.041 [2024-07-25 10:17:31.078344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.041 qpair failed and we were unable to recover it. 00:28:46.041 [2024-07-25 10:17:31.078515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.041 [2024-07-25 10:17:31.078540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.041 qpair failed and we were unable to recover it. 00:28:46.041 [2024-07-25 10:17:31.078744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.041 [2024-07-25 10:17:31.078772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.041 qpair failed and we were unable to recover it. 00:28:46.041 [2024-07-25 10:17:31.078926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.041 [2024-07-25 10:17:31.078955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.041 qpair failed and we were unable to recover it. 00:28:46.041 [2024-07-25 10:17:31.079132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.041 [2024-07-25 10:17:31.079155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.041 qpair failed and we were unable to recover it. 00:28:46.041 [2024-07-25 10:17:31.079282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.041 [2024-07-25 10:17:31.079319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.041 qpair failed and we were unable to recover it. 00:28:46.041 [2024-07-25 10:17:31.079494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.041 [2024-07-25 10:17:31.079519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.041 qpair failed and we were unable to recover it. 
00:28:46.041 [2024-07-25 10:17:31.079633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.041 [2024-07-25 10:17:31.079657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420
00:28:46.041 qpair failed and we were unable to recover it.
00:28:46.041 [... the same three-message sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats continuously, with only the timestamps advancing, through [2024-07-25 10:17:31.125122] at elapsed 00:28:46.048 ...]
00:28:46.048 [2024-07-25 10:17:31.125334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.049 [2024-07-25 10:17:31.125357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.049 qpair failed and we were unable to recover it. 00:28:46.049 [2024-07-25 10:17:31.125522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.049 [2024-07-25 10:17:31.125552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.049 qpair failed and we were unable to recover it. 00:28:46.049 [2024-07-25 10:17:31.125755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.049 [2024-07-25 10:17:31.125784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.049 qpair failed and we were unable to recover it. 00:28:46.049 [2024-07-25 10:17:31.126008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.049 [2024-07-25 10:17:31.126032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.049 qpair failed and we were unable to recover it. 00:28:46.049 [2024-07-25 10:17:31.126220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.049 [2024-07-25 10:17:31.126249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.049 qpair failed and we were unable to recover it. 00:28:46.049 [2024-07-25 10:17:31.126461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.049 [2024-07-25 10:17:31.126490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.049 qpair failed and we were unable to recover it. 00:28:46.049 [2024-07-25 10:17:31.126652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.049 [2024-07-25 10:17:31.126676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.049 qpair failed and we were unable to recover it. 00:28:46.049 [2024-07-25 10:17:31.126855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.049 [2024-07-25 10:17:31.126884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.049 qpair failed and we were unable to recover it. 00:28:46.049 [2024-07-25 10:17:31.127065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.049 [2024-07-25 10:17:31.127094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.049 qpair failed and we were unable to recover it. 00:28:46.049 [2024-07-25 10:17:31.127266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.049 [2024-07-25 10:17:31.127289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.049 qpair failed and we were unable to recover it. 
00:28:46.049 [2024-07-25 10:17:31.127466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.049 [2024-07-25 10:17:31.127504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.049 qpair failed and we were unable to recover it. 00:28:46.049 [2024-07-25 10:17:31.127676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.049 [2024-07-25 10:17:31.127705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.049 qpair failed and we were unable to recover it. 00:28:46.049 [2024-07-25 10:17:31.127869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.049 [2024-07-25 10:17:31.127892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.049 qpair failed and we were unable to recover it. 00:28:46.049 [2024-07-25 10:17:31.128104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.049 [2024-07-25 10:17:31.128133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.049 qpair failed and we were unable to recover it. 00:28:46.049 [2024-07-25 10:17:31.128256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.049 [2024-07-25 10:17:31.128285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.049 qpair failed and we were unable to recover it. 00:28:46.049 [2024-07-25 10:17:31.128493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.049 [2024-07-25 10:17:31.128532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.049 qpair failed and we were unable to recover it. 00:28:46.049 [2024-07-25 10:17:31.128712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.049 [2024-07-25 10:17:31.128741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.049 qpair failed and we were unable to recover it. 00:28:46.049 [2024-07-25 10:17:31.128940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.049 [2024-07-25 10:17:31.128968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.049 qpair failed and we were unable to recover it. 00:28:46.049 [2024-07-25 10:17:31.129133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.049 [2024-07-25 10:17:31.129156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.049 qpair failed and we were unable to recover it. 00:28:46.049 [2024-07-25 10:17:31.129371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.049 [2024-07-25 10:17:31.129400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.049 qpair failed and we were unable to recover it. 
00:28:46.049 [2024-07-25 10:17:31.129599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.049 [2024-07-25 10:17:31.129624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.049 qpair failed and we were unable to recover it. 00:28:46.049 [2024-07-25 10:17:31.129829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.049 [2024-07-25 10:17:31.129852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.049 qpair failed and we were unable to recover it. 00:28:46.049 [2024-07-25 10:17:31.130049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.049 [2024-07-25 10:17:31.130078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.049 qpair failed and we were unable to recover it. 00:28:46.049 [2024-07-25 10:17:31.130253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.049 [2024-07-25 10:17:31.130282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.049 qpair failed and we were unable to recover it. 00:28:46.049 [2024-07-25 10:17:31.130513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.049 [2024-07-25 10:17:31.130538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.049 qpair failed and we were unable to recover it. 00:28:46.049 [2024-07-25 10:17:31.130713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.049 [2024-07-25 10:17:31.130743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.049 qpair failed and we were unable to recover it. 00:28:46.049 [2024-07-25 10:17:31.130905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.049 [2024-07-25 10:17:31.130934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.049 qpair failed and we were unable to recover it. 00:28:46.049 [2024-07-25 10:17:31.131105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.049 [2024-07-25 10:17:31.131129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.049 qpair failed and we were unable to recover it. 00:28:46.049 [2024-07-25 10:17:31.131345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.049 [2024-07-25 10:17:31.131374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.049 qpair failed and we were unable to recover it. 00:28:46.049 [2024-07-25 10:17:31.131597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.049 [2024-07-25 10:17:31.131626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.049 qpair failed and we were unable to recover it. 
00:28:46.049 [2024-07-25 10:17:31.131839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.049 [2024-07-25 10:17:31.131863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.049 qpair failed and we were unable to recover it. 00:28:46.050 [2024-07-25 10:17:31.132037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.050 [2024-07-25 10:17:31.132066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.050 qpair failed and we were unable to recover it. 00:28:46.050 [2024-07-25 10:17:31.132238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.050 [2024-07-25 10:17:31.132267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.050 qpair failed and we were unable to recover it. 00:28:46.050 [2024-07-25 10:17:31.132488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.050 [2024-07-25 10:17:31.132512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.050 qpair failed and we were unable to recover it. 00:28:46.050 [2024-07-25 10:17:31.132683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.050 [2024-07-25 10:17:31.132712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.050 qpair failed and we were unable to recover it. 00:28:46.050 [2024-07-25 10:17:31.132917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.050 [2024-07-25 10:17:31.132947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.050 qpair failed and we were unable to recover it. 00:28:46.050 [2024-07-25 10:17:31.133129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.050 [2024-07-25 10:17:31.133152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.050 qpair failed and we were unable to recover it. 00:28:46.050 [2024-07-25 10:17:31.133322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.050 [2024-07-25 10:17:31.133351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.050 qpair failed and we were unable to recover it. 00:28:46.050 [2024-07-25 10:17:31.133570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.050 [2024-07-25 10:17:31.133600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.050 qpair failed and we were unable to recover it. 00:28:46.050 [2024-07-25 10:17:31.133790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.050 [2024-07-25 10:17:31.133813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.050 qpair failed and we were unable to recover it. 
00:28:46.050 [2024-07-25 10:17:31.134020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.050 [2024-07-25 10:17:31.134049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.050 qpair failed and we were unable to recover it. 00:28:46.050 [2024-07-25 10:17:31.134245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.050 [2024-07-25 10:17:31.134273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.050 qpair failed and we were unable to recover it. 00:28:46.050 [2024-07-25 10:17:31.134453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.050 [2024-07-25 10:17:31.134492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.050 qpair failed and we were unable to recover it. 00:28:46.050 [2024-07-25 10:17:31.134693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.050 [2024-07-25 10:17:31.134722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.050 qpair failed and we were unable to recover it. 00:28:46.050 [2024-07-25 10:17:31.134931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.050 [2024-07-25 10:17:31.134960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.050 qpair failed and we were unable to recover it. 00:28:46.050 [2024-07-25 10:17:31.135132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.050 [2024-07-25 10:17:31.135155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.050 qpair failed and we were unable to recover it. 00:28:46.050 [2024-07-25 10:17:31.135287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.050 [2024-07-25 10:17:31.135332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.050 qpair failed and we were unable to recover it. 00:28:46.050 [2024-07-25 10:17:31.135544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.050 [2024-07-25 10:17:31.135569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.050 qpair failed and we were unable to recover it. 00:28:46.050 [2024-07-25 10:17:31.135751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.050 [2024-07-25 10:17:31.135774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.050 qpair failed and we were unable to recover it. 00:28:46.050 [2024-07-25 10:17:31.135948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.050 [2024-07-25 10:17:31.135977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.050 qpair failed and we were unable to recover it. 
00:28:46.050 [2024-07-25 10:17:31.136162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.050 [2024-07-25 10:17:31.136191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.050 qpair failed and we were unable to recover it. 00:28:46.050 [2024-07-25 10:17:31.136391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.050 [2024-07-25 10:17:31.136414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.050 qpair failed and we were unable to recover it. 00:28:46.050 [2024-07-25 10:17:31.136589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.050 [2024-07-25 10:17:31.136618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.050 qpair failed and we were unable to recover it. 00:28:46.050 [2024-07-25 10:17:31.136801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.050 [2024-07-25 10:17:31.136830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.050 qpair failed and we were unable to recover it. 00:28:46.050 [2024-07-25 10:17:31.137011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.050 [2024-07-25 10:17:31.137034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.050 qpair failed and we were unable to recover it. 00:28:46.050 [2024-07-25 10:17:31.137226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.050 [2024-07-25 10:17:31.137254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.050 qpair failed and we were unable to recover it. 00:28:46.050 [2024-07-25 10:17:31.137459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.050 [2024-07-25 10:17:31.137489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.050 qpair failed and we were unable to recover it. 00:28:46.050 [2024-07-25 10:17:31.137633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.050 [2024-07-25 10:17:31.137670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.050 qpair failed and we were unable to recover it. 00:28:46.050 [2024-07-25 10:17:31.137832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.050 [2024-07-25 10:17:31.137861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.050 qpair failed and we were unable to recover it. 00:28:46.050 [2024-07-25 10:17:31.138065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.050 [2024-07-25 10:17:31.138093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.050 qpair failed and we were unable to recover it. 
00:28:46.051 [2024-07-25 10:17:31.138273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.051 [2024-07-25 10:17:31.138296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.051 qpair failed and we were unable to recover it. 00:28:46.051 [2024-07-25 10:17:31.138509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.051 [2024-07-25 10:17:31.138539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.051 qpair failed and we were unable to recover it. 00:28:46.051 [2024-07-25 10:17:31.138733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.051 [2024-07-25 10:17:31.138766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.051 qpair failed and we were unable to recover it. 00:28:46.051 [2024-07-25 10:17:31.138981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.051 [2024-07-25 10:17:31.139005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.051 qpair failed and we were unable to recover it. 00:28:46.051 [2024-07-25 10:17:31.139213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.051 [2024-07-25 10:17:31.139242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.051 qpair failed and we were unable to recover it. 00:28:46.051 [2024-07-25 10:17:31.139419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.051 [2024-07-25 10:17:31.139457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.051 qpair failed and we were unable to recover it. 00:28:46.051 [2024-07-25 10:17:31.139636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.051 [2024-07-25 10:17:31.139660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.051 qpair failed and we were unable to recover it. 00:28:46.051 [2024-07-25 10:17:31.139824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.051 [2024-07-25 10:17:31.139853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.051 qpair failed and we were unable to recover it. 00:28:46.051 [2024-07-25 10:17:31.139988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.051 [2024-07-25 10:17:31.140017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.051 qpair failed and we were unable to recover it. 00:28:46.051 [2024-07-25 10:17:31.140208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.051 [2024-07-25 10:17:31.140231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.051 qpair failed and we were unable to recover it. 
00:28:46.051 [2024-07-25 10:17:31.140419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.051 [2024-07-25 10:17:31.140455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.051 qpair failed and we were unable to recover it. 00:28:46.051 [2024-07-25 10:17:31.140629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.051 [2024-07-25 10:17:31.140658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.051 qpair failed and we were unable to recover it. 00:28:46.051 [2024-07-25 10:17:31.140827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.051 [2024-07-25 10:17:31.140851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.051 qpair failed and we were unable to recover it. 00:28:46.051 [2024-07-25 10:17:31.141034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.051 [2024-07-25 10:17:31.141063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.051 qpair failed and we were unable to recover it. 00:28:46.051 [2024-07-25 10:17:31.141229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.051 [2024-07-25 10:17:31.141259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.051 qpair failed and we were unable to recover it. 00:28:46.051 [2024-07-25 10:17:31.141472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.051 [2024-07-25 10:17:31.141512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.051 qpair failed and we were unable to recover it. 00:28:46.051 [2024-07-25 10:17:31.141680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.051 [2024-07-25 10:17:31.141721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.051 qpair failed and we were unable to recover it. 00:28:46.051 [2024-07-25 10:17:31.141909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.051 [2024-07-25 10:17:31.141938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.051 qpair failed and we were unable to recover it. 00:28:46.051 [2024-07-25 10:17:31.142165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.051 [2024-07-25 10:17:31.142188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.051 qpair failed and we were unable to recover it. 00:28:46.051 [2024-07-25 10:17:31.142356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.051 [2024-07-25 10:17:31.142385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.051 qpair failed and we were unable to recover it. 
00:28:46.051 [2024-07-25 10:17:31.142567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.051 [2024-07-25 10:17:31.142592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.051 qpair failed and we were unable to recover it. 00:28:46.051 [2024-07-25 10:17:31.142753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.051 [2024-07-25 10:17:31.142794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.051 qpair failed and we were unable to recover it. 00:28:46.051 [2024-07-25 10:17:31.142948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.051 [2024-07-25 10:17:31.142976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.051 qpair failed and we were unable to recover it. 00:28:46.051 [2024-07-25 10:17:31.143164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.051 [2024-07-25 10:17:31.143193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.051 qpair failed and we were unable to recover it. 00:28:46.051 [2024-07-25 10:17:31.143360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.051 [2024-07-25 10:17:31.143399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.051 qpair failed and we were unable to recover it. 00:28:46.051 [2024-07-25 10:17:31.143588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.051 [2024-07-25 10:17:31.143628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.051 qpair failed and we were unable to recover it. 00:28:46.051 [2024-07-25 10:17:31.143850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.051 [2024-07-25 10:17:31.143878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.051 qpair failed and we were unable to recover it. 00:28:46.051 [2024-07-25 10:17:31.144101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.051 [2024-07-25 10:17:31.144124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.051 qpair failed and we were unable to recover it. 00:28:46.051 [2024-07-25 10:17:31.144291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.051 [2024-07-25 10:17:31.144320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.051 qpair failed and we were unable to recover it. 00:28:46.051 [2024-07-25 10:17:31.144492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.051 [2024-07-25 10:17:31.144526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.051 qpair failed and we were unable to recover it. 
00:28:46.051 [2024-07-25 10:17:31.144720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.051 [2024-07-25 10:17:31.144745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.051 qpair failed and we were unable to recover it. 00:28:46.051 [2024-07-25 10:17:31.144969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.051 [2024-07-25 10:17:31.144998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.051 qpair failed and we were unable to recover it. 00:28:46.051 [2024-07-25 10:17:31.145178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.051 [2024-07-25 10:17:31.145207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.051 qpair failed and we were unable to recover it. 00:28:46.051 [2024-07-25 10:17:31.145382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.051 [2024-07-25 10:17:31.145407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.051 qpair failed and we were unable to recover it. 00:28:46.051 [2024-07-25 10:17:31.145596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.052 [2024-07-25 10:17:31.145622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.052 qpair failed and we were unable to recover it. 00:28:46.052 [2024-07-25 10:17:31.145808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.052 [2024-07-25 10:17:31.145837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.052 qpair failed and we were unable to recover it. 00:28:46.052 [2024-07-25 10:17:31.145988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.052 [2024-07-25 10:17:31.146025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.052 qpair failed and we were unable to recover it. 00:28:46.052 [2024-07-25 10:17:31.146199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.052 [2024-07-25 10:17:31.146228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.052 qpair failed and we were unable to recover it. 00:28:46.052 [2024-07-25 10:17:31.146395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.052 [2024-07-25 10:17:31.146424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.052 qpair failed and we were unable to recover it. 00:28:46.052 [2024-07-25 10:17:31.146611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.052 [2024-07-25 10:17:31.146637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.052 qpair failed and we were unable to recover it. 
00:28:46.052 [2024-07-25 10:17:31.146804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.052 [2024-07-25 10:17:31.146833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.052 qpair failed and we were unable to recover it. 00:28:46.332 [2024-07-25 10:17:31.147002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.332 [2024-07-25 10:17:31.147031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.332 qpair failed and we were unable to recover it. 00:28:46.332 [2024-07-25 10:17:31.147217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.332 [2024-07-25 10:17:31.147243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.332 qpair failed and we were unable to recover it. 00:28:46.332 [2024-07-25 10:17:31.147402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.332 [2024-07-25 10:17:31.147439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.332 qpair failed and we were unable to recover it. 00:28:46.332 [2024-07-25 10:17:31.147576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.332 [2024-07-25 10:17:31.147610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.332 qpair failed and we were unable to recover it. 00:28:46.332 [2024-07-25 10:17:31.147824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.332 [2024-07-25 10:17:31.147856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.332 qpair failed and we were unable to recover it. 00:28:46.332 [2024-07-25 10:17:31.148012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.332 [2024-07-25 10:17:31.148039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.332 qpair failed and we were unable to recover it. 00:28:46.332 [2024-07-25 10:17:31.148173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.332 [2024-07-25 10:17:31.148204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.332 qpair failed and we were unable to recover it. 00:28:46.332 [2024-07-25 10:17:31.148394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.332 [2024-07-25 10:17:31.148421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.332 qpair failed and we were unable to recover it. 00:28:46.332 [2024-07-25 10:17:31.148631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.332 [2024-07-25 10:17:31.148657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.332 qpair failed and we were unable to recover it. 
00:28:46.332 [2024-07-25 10:17:31.148859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.332 [2024-07-25 10:17:31.148888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.332 qpair failed and we were unable to recover it. 00:28:46.332 [2024-07-25 10:17:31.149061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.332 [2024-07-25 10:17:31.149087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.332 qpair failed and we were unable to recover it. 00:28:46.332 [2024-07-25 10:17:31.149297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.332 [2024-07-25 10:17:31.149326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.332 qpair failed and we were unable to recover it. 00:28:46.332 [2024-07-25 10:17:31.149501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.332 [2024-07-25 10:17:31.149531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.332 qpair failed and we were unable to recover it. 00:28:46.332 [2024-07-25 10:17:31.149740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.332 [2024-07-25 10:17:31.149766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.332 qpair failed and we were unable to recover it. 00:28:46.332 [2024-07-25 10:17:31.149902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.332 [2024-07-25 10:17:31.149931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.332 qpair failed and we were unable to recover it. 00:28:46.332 [2024-07-25 10:17:31.150104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.332 [2024-07-25 10:17:31.150137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.332 qpair failed and we were unable to recover it. 00:28:46.332 [2024-07-25 10:17:31.150347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.332 [2024-07-25 10:17:31.150373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.332 qpair failed and we were unable to recover it. 00:28:46.332 [2024-07-25 10:17:31.150542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.332 [2024-07-25 10:17:31.150569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.332 qpair failed and we were unable to recover it. 00:28:46.332 [2024-07-25 10:17:31.150736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.332 [2024-07-25 10:17:31.150764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.332 qpair failed and we were unable to recover it. 
00:28:46.332 [2024-07-25 10:17:31.150958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.332 [2024-07-25 10:17:31.150996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.332 qpair failed and we were unable to recover it. 00:28:46.332 [2024-07-25 10:17:31.151167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.332 [2024-07-25 10:17:31.151196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.332 qpair failed and we were unable to recover it. 00:28:46.332 [2024-07-25 10:17:31.151344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.332 [2024-07-25 10:17:31.151373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.332 qpair failed and we were unable to recover it. 00:28:46.332 [2024-07-25 10:17:31.151539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.332 [2024-07-25 10:17:31.151565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.332 qpair failed and we were unable to recover it. 00:28:46.332 [2024-07-25 10:17:31.151695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.332 [2024-07-25 10:17:31.151736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.332 qpair failed and we were unable to recover it. 00:28:46.332 [2024-07-25 10:17:31.151926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.332 [2024-07-25 10:17:31.151955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.332 qpair failed and we were unable to recover it. 00:28:46.332 [2024-07-25 10:17:31.152149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.332 [2024-07-25 10:17:31.152175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.332 qpair failed and we were unable to recover it. 00:28:46.332 [2024-07-25 10:17:31.152315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.332 [2024-07-25 10:17:31.152344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.332 qpair failed and we were unable to recover it. 00:28:46.332 [2024-07-25 10:17:31.152505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.332 [2024-07-25 10:17:31.152534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.332 qpair failed and we were unable to recover it. 00:28:46.332 [2024-07-25 10:17:31.152682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.332 [2024-07-25 10:17:31.152709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.332 qpair failed and we were unable to recover it. 
00:28:46.332 [2024-07-25 10:17:31.152862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.332 [2024-07-25 10:17:31.152906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420
00:28:46.332 qpair failed and we were unable to recover it.
[... the same three-record error group repeats for every subsequent reconnect attempt, with only the microsecond timestamps advancing from 10:17:31.153066 through 10:17:31.198854 (elapsed 00:28:46.332-00:28:46.338); every attempt fails identically with connect() errno = 111 against tqpair=0x18a3ea0, addr=10.0.0.2, port=4420, each followed by "qpair failed and we were unable to recover it." ...]
00:28:46.338 [2024-07-25 10:17:31.199069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.338 [2024-07-25 10:17:31.199097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.338 qpair failed and we were unable to recover it. 00:28:46.338 [2024-07-25 10:17:31.199298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.338 [2024-07-25 10:17:31.199326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.338 qpair failed and we were unable to recover it. 00:28:46.338 [2024-07-25 10:17:31.199501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.338 [2024-07-25 10:17:31.199528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.338 qpair failed and we were unable to recover it. 00:28:46.338 [2024-07-25 10:17:31.199689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.338 [2024-07-25 10:17:31.199730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.338 qpair failed and we were unable to recover it. 00:28:46.338 [2024-07-25 10:17:31.199929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.338 [2024-07-25 10:17:31.199958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.338 qpair failed and we were unable to recover it. 00:28:46.338 [2024-07-25 10:17:31.200178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.338 [2024-07-25 10:17:31.200204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.338 qpair failed and we were unable to recover it. 00:28:46.338 [2024-07-25 10:17:31.200369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.338 [2024-07-25 10:17:31.200397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.338 qpair failed and we were unable to recover it. 00:28:46.338 [2024-07-25 10:17:31.200667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.338 [2024-07-25 10:17:31.200694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.338 qpair failed and we were unable to recover it. 00:28:46.338 [2024-07-25 10:17:31.200912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.338 [2024-07-25 10:17:31.200939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.338 qpair failed and we were unable to recover it. 00:28:46.338 [2024-07-25 10:17:31.201156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.338 [2024-07-25 10:17:31.201185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.338 qpair failed and we were unable to recover it. 
00:28:46.338 [2024-07-25 10:17:31.201399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.338 [2024-07-25 10:17:31.201436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.338 qpair failed and we were unable to recover it. 00:28:46.338 [2024-07-25 10:17:31.201616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.338 [2024-07-25 10:17:31.201647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.338 qpair failed and we were unable to recover it. 00:28:46.338 [2024-07-25 10:17:31.201834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.338 [2024-07-25 10:17:31.201863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.338 qpair failed and we were unable to recover it. 00:28:46.339 [2024-07-25 10:17:31.202040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.339 [2024-07-25 10:17:31.202069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.339 qpair failed and we were unable to recover it. 00:28:46.339 [2024-07-25 10:17:31.202283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.339 [2024-07-25 10:17:31.202309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.339 qpair failed and we were unable to recover it. 00:28:46.339 [2024-07-25 10:17:31.202503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.339 [2024-07-25 10:17:31.202542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.339 qpair failed and we were unable to recover it. 00:28:46.339 [2024-07-25 10:17:31.202694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.339 [2024-07-25 10:17:31.202723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.339 qpair failed and we were unable to recover it. 00:28:46.339 [2024-07-25 10:17:31.202910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.339 [2024-07-25 10:17:31.202936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.339 qpair failed and we were unable to recover it. 00:28:46.339 [2024-07-25 10:17:31.203152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.339 [2024-07-25 10:17:31.203181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.339 qpair failed and we were unable to recover it. 00:28:46.339 [2024-07-25 10:17:31.203388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.339 [2024-07-25 10:17:31.203416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.339 qpair failed and we were unable to recover it. 
00:28:46.339 [2024-07-25 10:17:31.203624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.339 [2024-07-25 10:17:31.203651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.339 qpair failed and we were unable to recover it. 00:28:46.339 [2024-07-25 10:17:31.203832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.339 [2024-07-25 10:17:31.203871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.339 qpair failed and we were unable to recover it. 00:28:46.339 [2024-07-25 10:17:31.204075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.339 [2024-07-25 10:17:31.204103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.339 qpair failed and we were unable to recover it. 00:28:46.339 [2024-07-25 10:17:31.204326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.339 [2024-07-25 10:17:31.204352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.339 qpair failed and we were unable to recover it. 00:28:46.339 [2024-07-25 10:17:31.204518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.339 [2024-07-25 10:17:31.204548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.339 qpair failed and we were unable to recover it. 00:28:46.339 [2024-07-25 10:17:31.204790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.339 [2024-07-25 10:17:31.204819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.339 qpair failed and we were unable to recover it. 00:28:46.339 [2024-07-25 10:17:31.205040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.339 [2024-07-25 10:17:31.205067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.339 qpair failed and we were unable to recover it. 00:28:46.339 [2024-07-25 10:17:31.205259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.339 [2024-07-25 10:17:31.205288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.339 qpair failed and we were unable to recover it. 00:28:46.339 [2024-07-25 10:17:31.205489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.339 [2024-07-25 10:17:31.205519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.339 qpair failed and we were unable to recover it. 00:28:46.339 [2024-07-25 10:17:31.205776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.339 [2024-07-25 10:17:31.205801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.339 qpair failed and we were unable to recover it. 
00:28:46.339 [2024-07-25 10:17:31.206023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.339 [2024-07-25 10:17:31.206052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.339 qpair failed and we were unable to recover it. 00:28:46.339 [2024-07-25 10:17:31.206296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.339 [2024-07-25 10:17:31.206325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.339 qpair failed and we were unable to recover it. 00:28:46.339 [2024-07-25 10:17:31.206548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.339 [2024-07-25 10:17:31.206575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.339 qpair failed and we were unable to recover it. 00:28:46.339 [2024-07-25 10:17:31.206741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.339 [2024-07-25 10:17:31.206770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.339 qpair failed and we were unable to recover it. 00:28:46.339 [2024-07-25 10:17:31.206967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.339 [2024-07-25 10:17:31.206996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.339 qpair failed and we were unable to recover it. 00:28:46.339 [2024-07-25 10:17:31.207190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.339 [2024-07-25 10:17:31.207231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.339 qpair failed and we were unable to recover it. 00:28:46.339 [2024-07-25 10:17:31.207441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.339 [2024-07-25 10:17:31.207485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.339 qpair failed and we were unable to recover it. 00:28:46.339 [2024-07-25 10:17:31.207733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.339 [2024-07-25 10:17:31.207762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.339 qpair failed and we were unable to recover it. 00:28:46.339 [2024-07-25 10:17:31.207950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.339 [2024-07-25 10:17:31.207975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.339 qpair failed and we were unable to recover it. 00:28:46.339 [2024-07-25 10:17:31.208208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.339 [2024-07-25 10:17:31.208237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.339 qpair failed and we were unable to recover it. 
00:28:46.339 [2024-07-25 10:17:31.208448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.339 [2024-07-25 10:17:31.208478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.339 qpair failed and we were unable to recover it. 00:28:46.339 [2024-07-25 10:17:31.208702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.339 [2024-07-25 10:17:31.208744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.339 qpair failed and we were unable to recover it. 00:28:46.339 [2024-07-25 10:17:31.208956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.339 [2024-07-25 10:17:31.208985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.339 qpair failed and we were unable to recover it. 00:28:46.339 [2024-07-25 10:17:31.209219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.339 [2024-07-25 10:17:31.209247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.339 qpair failed and we were unable to recover it. 00:28:46.339 [2024-07-25 10:17:31.209459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.339 [2024-07-25 10:17:31.209485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.339 qpair failed and we were unable to recover it. 00:28:46.339 [2024-07-25 10:17:31.209647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.339 [2024-07-25 10:17:31.209676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.339 qpair failed and we were unable to recover it. 00:28:46.339 [2024-07-25 10:17:31.209879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.339 [2024-07-25 10:17:31.209908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.339 qpair failed and we were unable to recover it. 00:28:46.339 [2024-07-25 10:17:31.210120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.339 [2024-07-25 10:17:31.210146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.339 qpair failed and we were unable to recover it. 00:28:46.339 [2024-07-25 10:17:31.210390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.339 [2024-07-25 10:17:31.210419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.339 qpair failed and we were unable to recover it. 00:28:46.339 [2024-07-25 10:17:31.210608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.339 [2024-07-25 10:17:31.210638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.339 qpair failed and we were unable to recover it. 
00:28:46.339 [2024-07-25 10:17:31.210797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.340 [2024-07-25 10:17:31.210822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.340 qpair failed and we were unable to recover it. 00:28:46.340 [2024-07-25 10:17:31.211004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.340 [2024-07-25 10:17:31.211032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.340 qpair failed and we were unable to recover it. 00:28:46.340 [2024-07-25 10:17:31.211238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.340 [2024-07-25 10:17:31.211267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.340 qpair failed and we were unable to recover it. 00:28:46.340 [2024-07-25 10:17:31.211438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.340 [2024-07-25 10:17:31.211479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.340 qpair failed and we were unable to recover it. 00:28:46.340 [2024-07-25 10:17:31.211654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.340 [2024-07-25 10:17:31.211682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.340 qpair failed and we were unable to recover it. 00:28:46.340 [2024-07-25 10:17:31.211892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.340 [2024-07-25 10:17:31.211920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.340 qpair failed and we were unable to recover it. 00:28:46.340 [2024-07-25 10:17:31.212121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.340 [2024-07-25 10:17:31.212146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.340 qpair failed and we were unable to recover it. 00:28:46.340 [2024-07-25 10:17:31.212383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.340 [2024-07-25 10:17:31.212412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.340 qpair failed and we were unable to recover it. 00:28:46.340 [2024-07-25 10:17:31.212600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.340 [2024-07-25 10:17:31.212632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.340 qpair failed and we were unable to recover it. 00:28:46.340 [2024-07-25 10:17:31.212853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.340 [2024-07-25 10:17:31.212878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.340 qpair failed and we were unable to recover it. 
00:28:46.340 [2024-07-25 10:17:31.213116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.340 [2024-07-25 10:17:31.213145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.340 qpair failed and we were unable to recover it. 00:28:46.340 [2024-07-25 10:17:31.213324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.340 [2024-07-25 10:17:31.213358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.340 qpair failed and we were unable to recover it. 00:28:46.340 [2024-07-25 10:17:31.213580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.340 [2024-07-25 10:17:31.213607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.340 qpair failed and we were unable to recover it. 00:28:46.340 [2024-07-25 10:17:31.213853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.340 [2024-07-25 10:17:31.213882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.340 qpair failed and we were unable to recover it. 00:28:46.340 [2024-07-25 10:17:31.214105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.340 [2024-07-25 10:17:31.214134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.340 qpair failed and we were unable to recover it. 00:28:46.340 [2024-07-25 10:17:31.214304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.340 [2024-07-25 10:17:31.214333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.340 qpair failed and we were unable to recover it. 00:28:46.340 [2024-07-25 10:17:31.214531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.340 [2024-07-25 10:17:31.214558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.340 qpair failed and we were unable to recover it. 00:28:46.340 [2024-07-25 10:17:31.214752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.340 [2024-07-25 10:17:31.214781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.340 qpair failed and we were unable to recover it. 00:28:46.340 [2024-07-25 10:17:31.215007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.340 [2024-07-25 10:17:31.215033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.340 qpair failed and we were unable to recover it. 00:28:46.340 [2024-07-25 10:17:31.215226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.340 [2024-07-25 10:17:31.215255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.340 qpair failed and we were unable to recover it. 
00:28:46.340 [2024-07-25 10:17:31.215462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.340 [2024-07-25 10:17:31.215493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.340 qpair failed and we were unable to recover it. 00:28:46.340 [2024-07-25 10:17:31.215738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.340 [2024-07-25 10:17:31.215764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.340 qpair failed and we were unable to recover it. 00:28:46.340 [2024-07-25 10:17:31.215976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.340 [2024-07-25 10:17:31.216005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.340 qpair failed and we were unable to recover it. 00:28:46.340 [2024-07-25 10:17:31.216152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.340 [2024-07-25 10:17:31.216181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.340 qpair failed and we were unable to recover it. 00:28:46.340 [2024-07-25 10:17:31.216367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.340 [2024-07-25 10:17:31.216393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.340 qpair failed and we were unable to recover it. 00:28:46.340 [2024-07-25 10:17:31.216635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.340 [2024-07-25 10:17:31.216661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.340 qpair failed and we were unable to recover it. 00:28:46.340 [2024-07-25 10:17:31.216845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.340 [2024-07-25 10:17:31.216883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.340 qpair failed and we were unable to recover it. 00:28:46.340 [2024-07-25 10:17:31.217127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.340 [2024-07-25 10:17:31.217152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.340 qpair failed and we were unable to recover it. 00:28:46.340 [2024-07-25 10:17:31.217333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.340 [2024-07-25 10:17:31.217362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.340 qpair failed and we were unable to recover it. 00:28:46.340 [2024-07-25 10:17:31.217600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.340 [2024-07-25 10:17:31.217630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.340 qpair failed and we were unable to recover it. 
00:28:46.340 [2024-07-25 10:17:31.217849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.340 [2024-07-25 10:17:31.217874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.340 qpair failed and we were unable to recover it. 00:28:46.340 [2024-07-25 10:17:31.218050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.340 [2024-07-25 10:17:31.218078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.340 qpair failed and we were unable to recover it. 00:28:46.340 [2024-07-25 10:17:31.218281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.340 [2024-07-25 10:17:31.218311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.340 qpair failed and we were unable to recover it. 00:28:46.340 [2024-07-25 10:17:31.218517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.340 [2024-07-25 10:17:31.218543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.340 qpair failed and we were unable to recover it. 00:28:46.340 [2024-07-25 10:17:31.218749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.340 [2024-07-25 10:17:31.218777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.340 qpair failed and we were unable to recover it. 00:28:46.340 [2024-07-25 10:17:31.218928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.340 [2024-07-25 10:17:31.218957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.340 qpair failed and we were unable to recover it. 00:28:46.340 [2024-07-25 10:17:31.219177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.340 [2024-07-25 10:17:31.219202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.340 qpair failed and we were unable to recover it. 00:28:46.340 [2024-07-25 10:17:31.219440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.340 [2024-07-25 10:17:31.219469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.340 qpair failed and we were unable to recover it. 00:28:46.340 [2024-07-25 10:17:31.219694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.341 [2024-07-25 10:17:31.219728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.341 qpair failed and we were unable to recover it. 00:28:46.341 [2024-07-25 10:17:31.219978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.341 [2024-07-25 10:17:31.220003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.341 qpair failed and we were unable to recover it. 
00:28:46.341 [2024-07-25 10:17:31.220255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.341 [2024-07-25 10:17:31.220284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.341 qpair failed and we were unable to recover it. 00:28:46.341 [2024-07-25 10:17:31.220552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.341 [2024-07-25 10:17:31.220582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.341 qpair failed and we were unable to recover it. 00:28:46.341 [2024-07-25 10:17:31.220816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.341 [2024-07-25 10:17:31.220842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.341 qpair failed and we were unable to recover it. 00:28:46.341 [2024-07-25 10:17:31.221045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.341 [2024-07-25 10:17:31.221074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.341 qpair failed and we were unable to recover it. 00:28:46.341 [2024-07-25 10:17:31.221275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.341 [2024-07-25 10:17:31.221304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.341 qpair failed and we were unable to recover it. 00:28:46.341 [2024-07-25 10:17:31.221512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.341 [2024-07-25 10:17:31.221539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.341 qpair failed and we were unable to recover it. 00:28:46.341 [2024-07-25 10:17:31.221734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.341 [2024-07-25 10:17:31.221763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.341 qpair failed and we were unable to recover it. 00:28:46.341 [2024-07-25 10:17:31.221939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.341 [2024-07-25 10:17:31.221967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.341 qpair failed and we were unable to recover it. 00:28:46.341 [2024-07-25 10:17:31.222142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.341 [2024-07-25 10:17:31.222181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.341 qpair failed and we were unable to recover it. 00:28:46.341 [2024-07-25 10:17:31.222373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.341 [2024-07-25 10:17:31.222413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.341 qpair failed and we were unable to recover it. 
00:28:46.341 [2024-07-25 10:17:31.222630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.341 [2024-07-25 10:17:31.222656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.341 qpair failed and we were unable to recover it. 00:28:46.341 [2024-07-25 10:17:31.222856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.341 [2024-07-25 10:17:31.222882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.341 qpair failed and we were unable to recover it. 00:28:46.341 [2024-07-25 10:17:31.223082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.341 [2024-07-25 10:17:31.223111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.341 qpair failed and we were unable to recover it. 00:28:46.341 [2024-07-25 10:17:31.223320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.341 [2024-07-25 10:17:31.223348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.341 qpair failed and we were unable to recover it. 00:28:46.341 [2024-07-25 10:17:31.223529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.341 [2024-07-25 10:17:31.223556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.341 qpair failed and we were unable to recover it. 00:28:46.341 [2024-07-25 10:17:31.223730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.341 [2024-07-25 10:17:31.223760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.341 qpair failed and we were unable to recover it. 00:28:46.341 [2024-07-25 10:17:31.223895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.341 [2024-07-25 10:17:31.223924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.341 qpair failed and we were unable to recover it. 00:28:46.341 [2024-07-25 10:17:31.224067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.341 [2024-07-25 10:17:31.224107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.341 qpair failed and we were unable to recover it. 00:28:46.341 [2024-07-25 10:17:31.224280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.341 [2024-07-25 10:17:31.224315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.341 qpair failed and we were unable to recover it. 00:28:46.341 [2024-07-25 10:17:31.224548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.341 [2024-07-25 10:17:31.224577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.341 qpair failed and we were unable to recover it. 
00:28:46.341 [2024-07-25 10:17:31.224756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.341 [2024-07-25 10:17:31.224792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.341 qpair failed and we were unable to recover it. 00:28:46.341 [2024-07-25 10:17:31.225028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.341 [2024-07-25 10:17:31.225057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.341 qpair failed and we were unable to recover it. 00:28:46.341 [2024-07-25 10:17:31.225253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.341 [2024-07-25 10:17:31.225282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.341 qpair failed and we were unable to recover it. 00:28:46.342 [2024-07-25 10:17:31.225520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.342 [2024-07-25 10:17:31.225547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.342 qpair failed and we were unable to recover it. 00:28:46.342 [2024-07-25 10:17:31.225750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.342 [2024-07-25 10:17:31.225779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.342 qpair failed and we were unable to recover it. 00:28:46.342 [2024-07-25 10:17:31.225988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.342 [2024-07-25 10:17:31.226017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.342 qpair failed and we were unable to recover it. 00:28:46.342 [2024-07-25 10:17:31.226262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.342 [2024-07-25 10:17:31.226302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.342 qpair failed and we were unable to recover it. 00:28:46.342 [2024-07-25 10:17:31.226495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.342 [2024-07-25 10:17:31.226524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.342 qpair failed and we were unable to recover it. 00:28:46.342 [2024-07-25 10:17:31.226698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.342 [2024-07-25 10:17:31.226727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.342 qpair failed and we were unable to recover it. 00:28:46.342 [2024-07-25 10:17:31.226989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.342 [2024-07-25 10:17:31.227015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.342 qpair failed and we were unable to recover it. 
00:28:46.342 [2024-07-25 10:17:31.227244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.342 [2024-07-25 10:17:31.227273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.342 qpair failed and we were unable to recover it. 00:28:46.342 [2024-07-25 10:17:31.227517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.342 [2024-07-25 10:17:31.227547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.342 qpair failed and we were unable to recover it. 00:28:46.342 [2024-07-25 10:17:31.227786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.342 [2024-07-25 10:17:31.227812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.342 qpair failed and we were unable to recover it. 00:28:46.342 [2024-07-25 10:17:31.228027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.342 [2024-07-25 10:17:31.228056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.342 qpair failed and we were unable to recover it. 00:28:46.342 [2024-07-25 10:17:31.228291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.342 [2024-07-25 10:17:31.228320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.342 qpair failed and we were unable to recover it. 00:28:46.342 [2024-07-25 10:17:31.228527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.342 [2024-07-25 10:17:31.228555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.342 qpair failed and we were unable to recover it. 00:28:46.342 [2024-07-25 10:17:31.228752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.342 [2024-07-25 10:17:31.228781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.342 qpair failed and we were unable to recover it. 00:28:46.342 [2024-07-25 10:17:31.228991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.342 [2024-07-25 10:17:31.229020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.342 qpair failed and we were unable to recover it. 00:28:46.342 [2024-07-25 10:17:31.229228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.342 [2024-07-25 10:17:31.229254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.342 qpair failed and we were unable to recover it. 00:28:46.342 [2024-07-25 10:17:31.229440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.342 [2024-07-25 10:17:31.229494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.342 qpair failed and we were unable to recover it. 
00:28:46.342 [2024-07-25 10:17:31.229708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.342 [2024-07-25 10:17:31.229749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420
00:28:46.342 qpair failed and we were unable to recover it.
00:28:46.342 [... the same three-line record — connect() failed, errno = 111 / sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 / "qpair failed and we were unable to recover it." — repeats for every reconnect attempt, with only the microsecond timestamps advancing, through 10:17:31.280068 ...]
00:28:46.348 [2024-07-25 10:17:31.280039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.348 [2024-07-25 10:17:31.280068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420
00:28:46.348 qpair failed and we were unable to recover it.
00:28:46.348 [2024-07-25 10:17:31.280269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.348 [2024-07-25 10:17:31.280298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.348 qpair failed and we were unable to recover it. 00:28:46.348 [2024-07-25 10:17:31.280518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.348 [2024-07-25 10:17:31.280542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.348 qpair failed and we were unable to recover it. 00:28:46.348 [2024-07-25 10:17:31.280759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.348 [2024-07-25 10:17:31.280788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.348 qpair failed and we were unable to recover it. 00:28:46.348 [2024-07-25 10:17:31.280997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.348 [2024-07-25 10:17:31.281026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.348 qpair failed and we were unable to recover it. 00:28:46.348 [2024-07-25 10:17:31.281152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.348 [2024-07-25 10:17:31.281189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.348 qpair failed and we were unable to recover it. 00:28:46.348 [2024-07-25 10:17:31.281369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.348 [2024-07-25 10:17:31.281398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.348 qpair failed and we were unable to recover it. 00:28:46.348 [2024-07-25 10:17:31.281659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.348 [2024-07-25 10:17:31.281689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.348 qpair failed and we were unable to recover it. 00:28:46.348 [2024-07-25 10:17:31.281871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.348 [2024-07-25 10:17:31.281893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.348 qpair failed and we were unable to recover it. 00:28:46.348 [2024-07-25 10:17:31.282123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.348 [2024-07-25 10:17:31.282152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.348 qpair failed and we were unable to recover it. 00:28:46.348 [2024-07-25 10:17:31.282398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.348 [2024-07-25 10:17:31.282434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.348 qpair failed and we were unable to recover it. 
00:28:46.348 [2024-07-25 10:17:31.282620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.348 [2024-07-25 10:17:31.282656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.348 qpair failed and we were unable to recover it. 00:28:46.348 [2024-07-25 10:17:31.282874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.348 [2024-07-25 10:17:31.282902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.348 qpair failed and we were unable to recover it. 00:28:46.348 [2024-07-25 10:17:31.283081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.348 [2024-07-25 10:17:31.283110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.348 qpair failed and we were unable to recover it. 00:28:46.348 [2024-07-25 10:17:31.283277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.348 [2024-07-25 10:17:31.283306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.348 qpair failed and we were unable to recover it. 00:28:46.348 [2024-07-25 10:17:31.283493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.348 [2024-07-25 10:17:31.283529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.348 qpair failed and we were unable to recover it. 00:28:46.348 [2024-07-25 10:17:31.283744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.348 [2024-07-25 10:17:31.283773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.348 qpair failed and we were unable to recover it. 00:28:46.348 [2024-07-25 10:17:31.283948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.348 [2024-07-25 10:17:31.283971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.348 qpair failed and we were unable to recover it. 00:28:46.348 [2024-07-25 10:17:31.284154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.348 [2024-07-25 10:17:31.284184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.348 qpair failed and we were unable to recover it. 00:28:46.348 [2024-07-25 10:17:31.284400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.349 [2024-07-25 10:17:31.284435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.349 qpair failed and we were unable to recover it. 00:28:46.349 [2024-07-25 10:17:31.284610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.349 [2024-07-25 10:17:31.284634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.349 qpair failed and we were unable to recover it. 
00:28:46.349 [2024-07-25 10:17:31.284828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.349 [2024-07-25 10:17:31.284857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.349 qpair failed and we were unable to recover it. 00:28:46.349 [2024-07-25 10:17:31.285008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.349 [2024-07-25 10:17:31.285037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.349 qpair failed and we were unable to recover it. 00:28:46.349 [2024-07-25 10:17:31.285266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.349 [2024-07-25 10:17:31.285289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.349 qpair failed and we were unable to recover it. 00:28:46.349 [2024-07-25 10:17:31.285472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.349 [2024-07-25 10:17:31.285501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.349 qpair failed and we were unable to recover it. 00:28:46.349 [2024-07-25 10:17:31.285674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.349 [2024-07-25 10:17:31.285712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.349 qpair failed and we were unable to recover it. 00:28:46.349 [2024-07-25 10:17:31.285895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.349 [2024-07-25 10:17:31.285918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.349 qpair failed and we were unable to recover it. 00:28:46.349 [2024-07-25 10:17:31.286117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.349 [2024-07-25 10:17:31.286146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.349 qpair failed and we were unable to recover it. 00:28:46.349 [2024-07-25 10:17:31.286390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.349 [2024-07-25 10:17:31.286418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.349 qpair failed and we were unable to recover it. 00:28:46.349 [2024-07-25 10:17:31.286654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.349 [2024-07-25 10:17:31.286678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.349 qpair failed and we were unable to recover it. 00:28:46.349 [2024-07-25 10:17:31.286892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.349 [2024-07-25 10:17:31.286921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.349 qpair failed and we were unable to recover it. 
00:28:46.349 [2024-07-25 10:17:31.287167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.349 [2024-07-25 10:17:31.287195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.349 qpair failed and we were unable to recover it. 00:28:46.349 [2024-07-25 10:17:31.287396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.349 [2024-07-25 10:17:31.287426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.349 qpair failed and we were unable to recover it. 00:28:46.349 [2024-07-25 10:17:31.287701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.349 [2024-07-25 10:17:31.287731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.349 qpair failed and we were unable to recover it. 00:28:46.349 [2024-07-25 10:17:31.287979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.349 [2024-07-25 10:17:31.288012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.349 qpair failed and we were unable to recover it. 00:28:46.349 [2024-07-25 10:17:31.288241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.349 [2024-07-25 10:17:31.288264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.349 qpair failed and we were unable to recover it. 00:28:46.349 [2024-07-25 10:17:31.288462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.349 [2024-07-25 10:17:31.288492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.349 qpair failed and we were unable to recover it. 00:28:46.349 [2024-07-25 10:17:31.288699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.349 [2024-07-25 10:17:31.288728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.349 qpair failed and we were unable to recover it. 00:28:46.349 [2024-07-25 10:17:31.288975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.349 [2024-07-25 10:17:31.288998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.349 qpair failed and we were unable to recover it. 00:28:46.349 [2024-07-25 10:17:31.289195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.349 [2024-07-25 10:17:31.289223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.349 qpair failed and we were unable to recover it. 00:28:46.349 [2024-07-25 10:17:31.289403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.349 [2024-07-25 10:17:31.289438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.349 qpair failed and we were unable to recover it. 
00:28:46.349 [2024-07-25 10:17:31.289678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.349 [2024-07-25 10:17:31.289718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.349 qpair failed and we were unable to recover it. 00:28:46.349 [2024-07-25 10:17:31.289910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.349 [2024-07-25 10:17:31.289939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.349 qpair failed and we were unable to recover it. 00:28:46.349 [2024-07-25 10:17:31.290150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.349 [2024-07-25 10:17:31.290179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.349 qpair failed and we were unable to recover it. 00:28:46.349 [2024-07-25 10:17:31.290373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.349 [2024-07-25 10:17:31.290402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.349 qpair failed and we were unable to recover it. 00:28:46.349 [2024-07-25 10:17:31.290677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.349 [2024-07-25 10:17:31.290717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.349 qpair failed and we were unable to recover it. 00:28:46.349 [2024-07-25 10:17:31.290973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.349 [2024-07-25 10:17:31.291002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.349 qpair failed and we were unable to recover it. 00:28:46.349 [2024-07-25 10:17:31.291254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.349 [2024-07-25 10:17:31.291277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.349 qpair failed and we were unable to recover it. 00:28:46.349 [2024-07-25 10:17:31.291495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.349 [2024-07-25 10:17:31.291521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.349 qpair failed and we were unable to recover it. 00:28:46.349 [2024-07-25 10:17:31.291734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.349 [2024-07-25 10:17:31.291763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.349 qpair failed and we were unable to recover it. 00:28:46.349 [2024-07-25 10:17:31.291981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.349 [2024-07-25 10:17:31.292004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.349 qpair failed and we were unable to recover it. 
00:28:46.349 [2024-07-25 10:17:31.292267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.349 [2024-07-25 10:17:31.292296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.349 qpair failed and we were unable to recover it. 00:28:46.349 [2024-07-25 10:17:31.292539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.350 [2024-07-25 10:17:31.292569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.350 qpair failed and we were unable to recover it. 00:28:46.350 [2024-07-25 10:17:31.292747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.350 [2024-07-25 10:17:31.292770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.350 qpair failed and we were unable to recover it. 00:28:46.350 [2024-07-25 10:17:31.292959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.350 [2024-07-25 10:17:31.292987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.350 qpair failed and we were unable to recover it. 00:28:46.350 [2024-07-25 10:17:31.293170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.350 [2024-07-25 10:17:31.293198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.350 qpair failed and we were unable to recover it. 00:28:46.350 [2024-07-25 10:17:31.293412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.350 [2024-07-25 10:17:31.293442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.350 qpair failed and we were unable to recover it. 00:28:46.350 [2024-07-25 10:17:31.293602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.350 [2024-07-25 10:17:31.293631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.350 qpair failed and we were unable to recover it. 00:28:46.350 [2024-07-25 10:17:31.293850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.350 [2024-07-25 10:17:31.293879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.350 qpair failed and we were unable to recover it. 00:28:46.350 [2024-07-25 10:17:31.294163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.350 [2024-07-25 10:17:31.294187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.350 qpair failed and we were unable to recover it. 00:28:46.350 [2024-07-25 10:17:31.294404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.350 [2024-07-25 10:17:31.294449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.350 qpair failed and we were unable to recover it. 
00:28:46.350 [2024-07-25 10:17:31.294696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.350 [2024-07-25 10:17:31.294729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.350 qpair failed and we were unable to recover it. 00:28:46.350 [2024-07-25 10:17:31.294944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.350 [2024-07-25 10:17:31.294967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.350 qpair failed and we were unable to recover it. 00:28:46.350 [2024-07-25 10:17:31.295202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.350 [2024-07-25 10:17:31.295230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.350 qpair failed and we were unable to recover it. 00:28:46.350 [2024-07-25 10:17:31.295426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.350 [2024-07-25 10:17:31.295463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.350 qpair failed and we were unable to recover it. 00:28:46.350 [2024-07-25 10:17:31.295677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.350 [2024-07-25 10:17:31.295700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.350 qpair failed and we were unable to recover it. 00:28:46.350 [2024-07-25 10:17:31.295977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.350 [2024-07-25 10:17:31.296006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.350 qpair failed and we were unable to recover it. 00:28:46.350 [2024-07-25 10:17:31.296187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.350 [2024-07-25 10:17:31.296217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.350 qpair failed and we were unable to recover it. 00:28:46.350 [2024-07-25 10:17:31.296453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.350 [2024-07-25 10:17:31.296489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.350 qpair failed and we were unable to recover it. 00:28:46.350 [2024-07-25 10:17:31.296735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.350 [2024-07-25 10:17:31.296764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.350 qpair failed and we were unable to recover it. 00:28:46.350 [2024-07-25 10:17:31.297023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.350 [2024-07-25 10:17:31.297052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.350 qpair failed and we were unable to recover it. 
00:28:46.350 [2024-07-25 10:17:31.297266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.350 [2024-07-25 10:17:31.297289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.350 qpair failed and we were unable to recover it. 00:28:46.350 [2024-07-25 10:17:31.297510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.350 [2024-07-25 10:17:31.297540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.350 qpair failed and we were unable to recover it. 00:28:46.350 [2024-07-25 10:17:31.297768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.350 [2024-07-25 10:17:31.297797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.350 qpair failed and we were unable to recover it. 00:28:46.350 [2024-07-25 10:17:31.297955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.350 [2024-07-25 10:17:31.297978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.350 qpair failed and we were unable to recover it. 00:28:46.350 [2024-07-25 10:17:31.298168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.350 [2024-07-25 10:17:31.298197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.350 qpair failed and we were unable to recover it. 00:28:46.350 [2024-07-25 10:17:31.298380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.350 [2024-07-25 10:17:31.298418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.350 qpair failed and we were unable to recover it. 00:28:46.350 [2024-07-25 10:17:31.298700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.350 [2024-07-25 10:17:31.298724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.350 qpair failed and we were unable to recover it. 00:28:46.350 [2024-07-25 10:17:31.298950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.350 [2024-07-25 10:17:31.298980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.350 qpair failed and we were unable to recover it. 00:28:46.350 [2024-07-25 10:17:31.299267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.350 [2024-07-25 10:17:31.299296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.350 qpair failed and we were unable to recover it. 00:28:46.350 [2024-07-25 10:17:31.299532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.350 [2024-07-25 10:17:31.299555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.350 qpair failed and we were unable to recover it. 
00:28:46.350 [2024-07-25 10:17:31.299749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.350 [2024-07-25 10:17:31.299778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.350 qpair failed and we were unable to recover it. 00:28:46.350 [2024-07-25 10:17:31.299987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.350 [2024-07-25 10:17:31.300016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.350 qpair failed and we were unable to recover it. 00:28:46.350 [2024-07-25 10:17:31.300201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.350 [2024-07-25 10:17:31.300224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.350 qpair failed and we were unable to recover it. 00:28:46.350 [2024-07-25 10:17:31.300450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.350 [2024-07-25 10:17:31.300480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.350 qpair failed and we were unable to recover it. 00:28:46.350 [2024-07-25 10:17:31.300682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.350 [2024-07-25 10:17:31.300711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.350 qpair failed and we were unable to recover it. 00:28:46.350 [2024-07-25 10:17:31.300943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.350 [2024-07-25 10:17:31.300967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.350 qpair failed and we were unable to recover it. 00:28:46.350 [2024-07-25 10:17:31.301203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.350 [2024-07-25 10:17:31.301232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.350 qpair failed and we were unable to recover it. 00:28:46.350 [2024-07-25 10:17:31.301416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.350 [2024-07-25 10:17:31.301474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.350 qpair failed and we were unable to recover it. 00:28:46.350 [2024-07-25 10:17:31.301746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.351 [2024-07-25 10:17:31.301770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.351 qpair failed and we were unable to recover it. 00:28:46.351 [2024-07-25 10:17:31.302036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.351 [2024-07-25 10:17:31.302065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.351 qpair failed and we were unable to recover it. 
00:28:46.351 [2024-07-25 10:17:31.302281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.351 [2024-07-25 10:17:31.302310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.351 qpair failed and we were unable to recover it. 00:28:46.351 [2024-07-25 10:17:31.302543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.351 [2024-07-25 10:17:31.302568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.351 qpair failed and we were unable to recover it. 00:28:46.351 [2024-07-25 10:17:31.302773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.351 [2024-07-25 10:17:31.302802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.351 qpair failed and we were unable to recover it. 00:28:46.351 [2024-07-25 10:17:31.303013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.351 [2024-07-25 10:17:31.303042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.351 qpair failed and we were unable to recover it. 00:28:46.351 [2024-07-25 10:17:31.303294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.351 [2024-07-25 10:17:31.303317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.351 qpair failed and we were unable to recover it. 00:28:46.351 [2024-07-25 10:17:31.303511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.351 [2024-07-25 10:17:31.303549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.351 qpair failed and we were unable to recover it. 00:28:46.351 [2024-07-25 10:17:31.303751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.351 [2024-07-25 10:17:31.303780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.351 qpair failed and we were unable to recover it. 00:28:46.351 [2024-07-25 10:17:31.303997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.351 [2024-07-25 10:17:31.304020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.351 qpair failed and we were unable to recover it. 00:28:46.351 [2024-07-25 10:17:31.304245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.351 [2024-07-25 10:17:31.304274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.351 qpair failed and we were unable to recover it. 00:28:46.351 [2024-07-25 10:17:31.304511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.351 [2024-07-25 10:17:31.304540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.351 qpair failed and we were unable to recover it. 
00:28:46.351 [2024-07-25 10:17:31.304780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.351 [2024-07-25 10:17:31.304803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.351 qpair failed and we were unable to recover it. 00:28:46.351 [2024-07-25 10:17:31.305066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.351 [2024-07-25 10:17:31.305094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.351 qpair failed and we were unable to recover it. 00:28:46.351 [2024-07-25 10:17:31.305281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.351 [2024-07-25 10:17:31.305310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.351 qpair failed and we were unable to recover it. 00:28:46.351 [2024-07-25 10:17:31.305466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.351 [2024-07-25 10:17:31.305491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.351 qpair failed and we were unable to recover it. 00:28:46.351 [2024-07-25 10:17:31.305647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.351 [2024-07-25 10:17:31.305672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.351 qpair failed and we were unable to recover it. 00:28:46.351 [2024-07-25 10:17:31.305829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.351 [2024-07-25 10:17:31.305857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.351 qpair failed and we were unable to recover it. 00:28:46.351 [2024-07-25 10:17:31.306062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.351 [2024-07-25 10:17:31.306085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.351 qpair failed and we were unable to recover it. 00:28:46.351 [2024-07-25 10:17:31.306280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.351 [2024-07-25 10:17:31.306315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.351 qpair failed and we were unable to recover it. 00:28:46.351 [2024-07-25 10:17:31.306497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.351 [2024-07-25 10:17:31.306521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.351 qpair failed and we were unable to recover it. 00:28:46.351 [2024-07-25 10:17:31.306677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.351 [2024-07-25 10:17:31.306700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.351 qpair failed and we were unable to recover it. 
00:28:46.351 [2024-07-25 10:17:31.306971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.351 [2024-07-25 10:17:31.307000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.351 qpair failed and we were unable to recover it. 00:28:46.351 [2024-07-25 10:17:31.307235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.351 [2024-07-25 10:17:31.307264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.351 qpair failed and we were unable to recover it. 00:28:46.351 [2024-07-25 10:17:31.307481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.351 [2024-07-25 10:17:31.307505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.351 qpair failed and we were unable to recover it. 00:28:46.351 [2024-07-25 10:17:31.307717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.351 [2024-07-25 10:17:31.307746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.351 qpair failed and we were unable to recover it. 00:28:46.352 [2024-07-25 10:17:31.307921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.352 [2024-07-25 10:17:31.307950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.352 qpair failed and we were unable to recover it. 00:28:46.352 [2024-07-25 10:17:31.308169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.352 [2024-07-25 10:17:31.308192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.352 qpair failed and we were unable to recover it. 00:28:46.352 [2024-07-25 10:17:31.308446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.352 [2024-07-25 10:17:31.308475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.352 qpair failed and we were unable to recover it. 00:28:46.352 [2024-07-25 10:17:31.308703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.352 [2024-07-25 10:17:31.308732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.352 qpair failed and we were unable to recover it. 00:28:46.352 [2024-07-25 10:17:31.308925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.352 [2024-07-25 10:17:31.308949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.352 qpair failed and we were unable to recover it. 00:28:46.352 [2024-07-25 10:17:31.309152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.352 [2024-07-25 10:17:31.309180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.352 qpair failed and we were unable to recover it. 
00:28:46.352 [2024-07-25 10:17:31.309381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.352 [2024-07-25 10:17:31.309410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.352 qpair failed and we were unable to recover it. 00:28:46.352 [2024-07-25 10:17:31.309653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.352 [2024-07-25 10:17:31.309678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.352 qpair failed and we were unable to recover it. 00:28:46.352 [2024-07-25 10:17:31.309858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.352 [2024-07-25 10:17:31.309888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.352 qpair failed and we were unable to recover it. 00:28:46.352 [2024-07-25 10:17:31.310121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.352 [2024-07-25 10:17:31.310150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.352 qpair failed and we were unable to recover it. 00:28:46.352 [2024-07-25 10:17:31.310400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.352 [2024-07-25 10:17:31.310423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.352 qpair failed and we were unable to recover it. 00:28:46.352 [2024-07-25 10:17:31.310592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.352 [2024-07-25 10:17:31.310621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.352 qpair failed and we were unable to recover it. 00:28:46.352 [2024-07-25 10:17:31.310874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.352 [2024-07-25 10:17:31.310903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.352 qpair failed and we were unable to recover it. 00:28:46.352 [2024-07-25 10:17:31.311130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.352 [2024-07-25 10:17:31.311154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.352 qpair failed and we were unable to recover it. 00:28:46.352 [2024-07-25 10:17:31.311350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.352 [2024-07-25 10:17:31.311383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.352 qpair failed and we were unable to recover it. 00:28:46.352 [2024-07-25 10:17:31.311583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.352 [2024-07-25 10:17:31.311612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.352 qpair failed and we were unable to recover it. 
00:28:46.352 [2024-07-25 10:17:31.311776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.352 [2024-07-25 10:17:31.311800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420
00:28:46.352 qpair failed and we were unable to recover it.
... (this three-line connect()/qpair failure record repeats back to back roughly 200 more times, with only the microsecond timestamps advancing from 10:17:31.312 to 10:17:31.362, always for the same tqpair=0x18a3ea0, addr=10.0.0.2, port=4420) ...
00:28:46.358 [2024-07-25 10:17:31.362921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.358 [2024-07-25 10:17:31.362949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420
00:28:46.358 qpair failed and we were unable to recover it.
00:28:46.358 [2024-07-25 10:17:31.363159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.358 [2024-07-25 10:17:31.363183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.358 qpair failed and we were unable to recover it. 00:28:46.358 [2024-07-25 10:17:31.363438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.358 [2024-07-25 10:17:31.363468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.358 qpair failed and we were unable to recover it. 00:28:46.358 [2024-07-25 10:17:31.363701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.358 [2024-07-25 10:17:31.363730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.358 qpair failed and we were unable to recover it. 00:28:46.358 [2024-07-25 10:17:31.363906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.358 [2024-07-25 10:17:31.363929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.358 qpair failed and we were unable to recover it. 00:28:46.358 [2024-07-25 10:17:31.364175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.358 [2024-07-25 10:17:31.364204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.358 qpair failed and we were unable to recover it. 00:28:46.358 [2024-07-25 10:17:31.364451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.358 [2024-07-25 10:17:31.364496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.358 qpair failed and we were unable to recover it. 00:28:46.358 [2024-07-25 10:17:31.364716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.358 [2024-07-25 10:17:31.364740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.358 qpair failed and we were unable to recover it. 00:28:46.358 [2024-07-25 10:17:31.364965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.358 [2024-07-25 10:17:31.364994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.358 qpair failed and we were unable to recover it. 00:28:46.358 [2024-07-25 10:17:31.365182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.358 [2024-07-25 10:17:31.365221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.358 qpair failed and we were unable to recover it. 00:28:46.358 [2024-07-25 10:17:31.365443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.358 [2024-07-25 10:17:31.365467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.358 qpair failed and we were unable to recover it. 
00:28:46.358 [2024-07-25 10:17:31.365681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.358 [2024-07-25 10:17:31.365711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.358 qpair failed and we were unable to recover it. 00:28:46.358 [2024-07-25 10:17:31.365904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.358 [2024-07-25 10:17:31.365933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.358 qpair failed and we were unable to recover it. 00:28:46.358 [2024-07-25 10:17:31.366066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.358 [2024-07-25 10:17:31.366105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.358 qpair failed and we were unable to recover it. 00:28:46.358 [2024-07-25 10:17:31.366374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.358 [2024-07-25 10:17:31.366403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.358 qpair failed and we were unable to recover it. 00:28:46.358 [2024-07-25 10:17:31.366676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.358 [2024-07-25 10:17:31.366706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.358 qpair failed and we were unable to recover it. 00:28:46.358 [2024-07-25 10:17:31.366960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.358 [2024-07-25 10:17:31.366983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.358 qpair failed and we were unable to recover it. 00:28:46.358 [2024-07-25 10:17:31.367201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.358 [2024-07-25 10:17:31.367230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.358 qpair failed and we were unable to recover it. 00:28:46.358 [2024-07-25 10:17:31.367454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.358 [2024-07-25 10:17:31.367484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.358 qpair failed and we were unable to recover it. 00:28:46.358 [2024-07-25 10:17:31.367702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.358 [2024-07-25 10:17:31.367725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.358 qpair failed and we were unable to recover it. 00:28:46.358 [2024-07-25 10:17:31.367884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.358 [2024-07-25 10:17:31.367913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.358 qpair failed and we were unable to recover it. 
00:28:46.358 [2024-07-25 10:17:31.368041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.358 [2024-07-25 10:17:31.368069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.358 qpair failed and we were unable to recover it. 00:28:46.358 [2024-07-25 10:17:31.368222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.358 [2024-07-25 10:17:31.368260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.358 qpair failed and we were unable to recover it. 00:28:46.358 [2024-07-25 10:17:31.368451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.358 [2024-07-25 10:17:31.368481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.358 qpair failed and we were unable to recover it. 00:28:46.358 [2024-07-25 10:17:31.368697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.358 [2024-07-25 10:17:31.368726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.358 qpair failed and we were unable to recover it. 00:28:46.358 [2024-07-25 10:17:31.368976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.359 [2024-07-25 10:17:31.368999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.359 qpair failed and we were unable to recover it. 00:28:46.359 [2024-07-25 10:17:31.369234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.359 [2024-07-25 10:17:31.369264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.359 qpair failed and we were unable to recover it. 00:28:46.359 [2024-07-25 10:17:31.369498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.359 [2024-07-25 10:17:31.369528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.359 qpair failed and we were unable to recover it. 00:28:46.359 [2024-07-25 10:17:31.369699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.359 [2024-07-25 10:17:31.369738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.359 qpair failed and we were unable to recover it. 00:28:46.359 [2024-07-25 10:17:31.369996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.359 [2024-07-25 10:17:31.370026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.359 qpair failed and we were unable to recover it. 00:28:46.359 [2024-07-25 10:17:31.370271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.359 [2024-07-25 10:17:31.370300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.359 qpair failed and we were unable to recover it. 
00:28:46.359 [2024-07-25 10:17:31.370527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.359 [2024-07-25 10:17:31.370551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.359 qpair failed and we were unable to recover it. 00:28:46.359 [2024-07-25 10:17:31.370780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.359 [2024-07-25 10:17:31.370809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.359 qpair failed and we were unable to recover it. 00:28:46.359 [2024-07-25 10:17:31.371060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.359 [2024-07-25 10:17:31.371093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.359 qpair failed and we were unable to recover it. 00:28:46.359 [2024-07-25 10:17:31.371350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.359 [2024-07-25 10:17:31.371399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.359 qpair failed and we were unable to recover it. 00:28:46.359 [2024-07-25 10:17:31.371664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.359 [2024-07-25 10:17:31.371688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.359 qpair failed and we were unable to recover it. 00:28:46.359 [2024-07-25 10:17:31.371912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.359 [2024-07-25 10:17:31.371941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.359 qpair failed and we were unable to recover it. 00:28:46.359 [2024-07-25 10:17:31.372170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.359 [2024-07-25 10:17:31.372193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.359 qpair failed and we were unable to recover it. 00:28:46.359 [2024-07-25 10:17:31.372382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.359 [2024-07-25 10:17:31.372421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.359 qpair failed and we were unable to recover it. 00:28:46.359 [2024-07-25 10:17:31.372624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.359 [2024-07-25 10:17:31.372654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.359 qpair failed and we were unable to recover it. 00:28:46.359 [2024-07-25 10:17:31.372823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.359 [2024-07-25 10:17:31.372846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.359 qpair failed and we were unable to recover it. 
00:28:46.359 [2024-07-25 10:17:31.373124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.359 [2024-07-25 10:17:31.373154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.359 qpair failed and we were unable to recover it. 00:28:46.359 [2024-07-25 10:17:31.373403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.359 [2024-07-25 10:17:31.373439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.359 qpair failed and we were unable to recover it. 00:28:46.359 [2024-07-25 10:17:31.373625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.359 [2024-07-25 10:17:31.373657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.359 qpair failed and we were unable to recover it. 00:28:46.359 [2024-07-25 10:17:31.373909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.359 [2024-07-25 10:17:31.373939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.359 qpair failed and we were unable to recover it. 00:28:46.359 [2024-07-25 10:17:31.374174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.359 [2024-07-25 10:17:31.374203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.359 qpair failed and we were unable to recover it. 00:28:46.359 [2024-07-25 10:17:31.374455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.359 [2024-07-25 10:17:31.374479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.359 qpair failed and we were unable to recover it. 00:28:46.359 [2024-07-25 10:17:31.374664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.359 [2024-07-25 10:17:31.374704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.359 qpair failed and we were unable to recover it. 00:28:46.359 [2024-07-25 10:17:31.374949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.359 [2024-07-25 10:17:31.374977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.359 qpair failed and we were unable to recover it. 00:28:46.359 [2024-07-25 10:17:31.375210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.359 [2024-07-25 10:17:31.375233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.359 qpair failed and we were unable to recover it. 00:28:46.359 [2024-07-25 10:17:31.375451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.359 [2024-07-25 10:17:31.375480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.359 qpair failed and we were unable to recover it. 
00:28:46.359 [2024-07-25 10:17:31.375626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.359 [2024-07-25 10:17:31.375656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.359 qpair failed and we were unable to recover it. 00:28:46.359 [2024-07-25 10:17:31.375905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.359 [2024-07-25 10:17:31.375928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.359 qpair failed and we were unable to recover it. 00:28:46.359 [2024-07-25 10:17:31.376146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.359 [2024-07-25 10:17:31.376175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.359 qpair failed and we were unable to recover it. 00:28:46.359 [2024-07-25 10:17:31.376360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.359 [2024-07-25 10:17:31.376389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.359 qpair failed and we were unable to recover it. 00:28:46.359 [2024-07-25 10:17:31.376566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.359 [2024-07-25 10:17:31.376590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.359 qpair failed and we were unable to recover it. 00:28:46.359 [2024-07-25 10:17:31.376758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.359 [2024-07-25 10:17:31.376787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.359 qpair failed and we were unable to recover it. 00:28:46.359 [2024-07-25 10:17:31.376988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.359 [2024-07-25 10:17:31.377016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.359 qpair failed and we were unable to recover it. 00:28:46.359 [2024-07-25 10:17:31.377250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.359 [2024-07-25 10:17:31.377274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.359 qpair failed and we were unable to recover it. 00:28:46.359 [2024-07-25 10:17:31.377511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.359 [2024-07-25 10:17:31.377537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.359 qpair failed and we were unable to recover it. 00:28:46.359 [2024-07-25 10:17:31.377726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.359 [2024-07-25 10:17:31.377759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.359 qpair failed and we were unable to recover it. 
00:28:46.359 [2024-07-25 10:17:31.378007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.359 [2024-07-25 10:17:31.378031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.359 qpair failed and we were unable to recover it. 00:28:46.359 [2024-07-25 10:17:31.378315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.359 [2024-07-25 10:17:31.378344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.359 qpair failed and we were unable to recover it. 00:28:46.359 [2024-07-25 10:17:31.378530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.359 [2024-07-25 10:17:31.378559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.359 qpair failed and we were unable to recover it. 00:28:46.359 [2024-07-25 10:17:31.378751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.359 [2024-07-25 10:17:31.378774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.359 qpair failed and we were unable to recover it. 00:28:46.360 [2024-07-25 10:17:31.378953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.360 [2024-07-25 10:17:31.378982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.360 qpair failed and we were unable to recover it. 00:28:46.360 [2024-07-25 10:17:31.379154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.360 [2024-07-25 10:17:31.379183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.360 qpair failed and we were unable to recover it. 00:28:46.360 [2024-07-25 10:17:31.379418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.360 [2024-07-25 10:17:31.379463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.360 qpair failed and we were unable to recover it. 00:28:46.360 [2024-07-25 10:17:31.379635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.360 [2024-07-25 10:17:31.379664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.360 qpair failed and we were unable to recover it. 00:28:46.360 [2024-07-25 10:17:31.379939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.360 [2024-07-25 10:17:31.379968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.360 qpair failed and we were unable to recover it. 00:28:46.360 [2024-07-25 10:17:31.380141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.360 [2024-07-25 10:17:31.380174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.360 qpair failed and we were unable to recover it. 
00:28:46.360 [2024-07-25 10:17:31.380398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.360 [2024-07-25 10:17:31.380435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.360 qpair failed and we were unable to recover it. 00:28:46.360 [2024-07-25 10:17:31.380587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.360 [2024-07-25 10:17:31.380616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.360 qpair failed and we were unable to recover it. 00:28:46.360 [2024-07-25 10:17:31.380791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.360 [2024-07-25 10:17:31.380814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.360 qpair failed and we were unable to recover it. 00:28:46.360 [2024-07-25 10:17:31.380992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.360 [2024-07-25 10:17:31.381021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.360 qpair failed and we were unable to recover it. 00:28:46.360 [2024-07-25 10:17:31.381174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.360 [2024-07-25 10:17:31.381203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.360 qpair failed and we were unable to recover it. 00:28:46.360 [2024-07-25 10:17:31.381454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.360 [2024-07-25 10:17:31.381478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.360 qpair failed and we were unable to recover it. 00:28:46.360 [2024-07-25 10:17:31.381629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.360 [2024-07-25 10:17:31.381658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.360 qpair failed and we were unable to recover it. 00:28:46.360 [2024-07-25 10:17:31.381883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.360 [2024-07-25 10:17:31.381912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.360 qpair failed and we were unable to recover it. 00:28:46.360 [2024-07-25 10:17:31.382153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.360 [2024-07-25 10:17:31.382176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.360 qpair failed and we were unable to recover it. 00:28:46.360 [2024-07-25 10:17:31.382423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.360 [2024-07-25 10:17:31.382462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.360 qpair failed and we were unable to recover it. 
00:28:46.360 [2024-07-25 10:17:31.382688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.360 [2024-07-25 10:17:31.382718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.360 qpair failed and we were unable to recover it. 00:28:46.360 [2024-07-25 10:17:31.382949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.360 [2024-07-25 10:17:31.382973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.360 qpair failed and we were unable to recover it. 00:28:46.360 [2024-07-25 10:17:31.383168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.360 [2024-07-25 10:17:31.383198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.360 qpair failed and we were unable to recover it. 00:28:46.360 [2024-07-25 10:17:31.383354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.360 [2024-07-25 10:17:31.383382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.360 qpair failed and we were unable to recover it. 00:28:46.360 [2024-07-25 10:17:31.383576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.360 [2024-07-25 10:17:31.383601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.360 qpair failed and we were unable to recover it. 00:28:46.360 [2024-07-25 10:17:31.383732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.360 [2024-07-25 10:17:31.383755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.360 qpair failed and we were unable to recover it. 00:28:46.360 [2024-07-25 10:17:31.383944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.360 [2024-07-25 10:17:31.383984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.360 qpair failed and we were unable to recover it. 00:28:46.360 [2024-07-25 10:17:31.384248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.360 [2024-07-25 10:17:31.384271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.360 qpair failed and we were unable to recover it. 00:28:46.360 [2024-07-25 10:17:31.384483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.360 [2024-07-25 10:17:31.384528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.360 qpair failed and we were unable to recover it. 00:28:46.360 [2024-07-25 10:17:31.384742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.360 [2024-07-25 10:17:31.384782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.360 qpair failed and we were unable to recover it. 
00:28:46.360 [2024-07-25 10:17:31.384993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.360 [2024-07-25 10:17:31.385017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.360 qpair failed and we were unable to recover it. 00:28:46.360 [2024-07-25 10:17:31.385187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.360 [2024-07-25 10:17:31.385215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.360 qpair failed and we were unable to recover it. 00:28:46.360 [2024-07-25 10:17:31.385454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.360 [2024-07-25 10:17:31.385484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.360 qpair failed and we were unable to recover it. 00:28:46.360 [2024-07-25 10:17:31.385736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.360 [2024-07-25 10:17:31.385773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.360 qpair failed and we were unable to recover it. 00:28:46.360 [2024-07-25 10:17:31.386007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.360 [2024-07-25 10:17:31.386036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.360 qpair failed and we were unable to recover it. 00:28:46.360 [2024-07-25 10:17:31.386261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.360 [2024-07-25 10:17:31.386290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.360 qpair failed and we were unable to recover it. 00:28:46.360 [2024-07-25 10:17:31.386504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.360 [2024-07-25 10:17:31.386529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.360 qpair failed and we were unable to recover it. 00:28:46.360 [2024-07-25 10:17:31.386752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.360 [2024-07-25 10:17:31.386781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.360 qpair failed and we were unable to recover it. 00:28:46.360 [2024-07-25 10:17:31.387003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.360 [2024-07-25 10:17:31.387032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.360 qpair failed and we were unable to recover it. 00:28:46.360 [2024-07-25 10:17:31.387245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.360 [2024-07-25 10:17:31.387268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.360 qpair failed and we were unable to recover it. 
00:28:46.360 [2024-07-25 10:17:31.387462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.360 [2024-07-25 10:17:31.387494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.360 qpair failed and we were unable to recover it. 00:28:46.360 [2024-07-25 10:17:31.387702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.360 [2024-07-25 10:17:31.387731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.360 qpair failed and we were unable to recover it. 00:28:46.360 [2024-07-25 10:17:31.387933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.360 [2024-07-25 10:17:31.387956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.360 qpair failed and we were unable to recover it. 00:28:46.360 [2024-07-25 10:17:31.388139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.360 [2024-07-25 10:17:31.388167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.360 qpair failed and we were unable to recover it. 00:28:46.360 [2024-07-25 10:17:31.388370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.360 [2024-07-25 10:17:31.388399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.360 qpair failed and we were unable to recover it. 00:28:46.360 [2024-07-25 10:17:31.388580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.361 [2024-07-25 10:17:31.388604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.361 qpair failed and we were unable to recover it. 00:28:46.361 [2024-07-25 10:17:31.388778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.361 [2024-07-25 10:17:31.388807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.361 qpair failed and we were unable to recover it. 00:28:46.361 [2024-07-25 10:17:31.388966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.361 [2024-07-25 10:17:31.388995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.361 qpair failed and we were unable to recover it. 00:28:46.361 [2024-07-25 10:17:31.389160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.361 [2024-07-25 10:17:31.389184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.361 qpair failed and we were unable to recover it. 00:28:46.361 [2024-07-25 10:17:31.389356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.361 [2024-07-25 10:17:31.389385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.361 qpair failed and we were unable to recover it. 
00:28:46.361 [2024-07-25 10:17:31.389654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.361 [2024-07-25 10:17:31.389684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.361 qpair failed and we were unable to recover it. 00:28:46.361 [2024-07-25 10:17:31.389952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.361 [2024-07-25 10:17:31.389976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.361 qpair failed and we were unable to recover it. 00:28:46.361 [2024-07-25 10:17:31.390215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.361 [2024-07-25 10:17:31.390244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.361 qpair failed and we were unable to recover it. 00:28:46.361 [2024-07-25 10:17:31.390466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.361 [2024-07-25 10:17:31.390496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.361 qpair failed and we were unable to recover it. 00:28:46.361 [2024-07-25 10:17:31.390807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.361 [2024-07-25 10:17:31.390847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.361 qpair failed and we were unable to recover it. 00:28:46.361 [2024-07-25 10:17:31.391063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.361 [2024-07-25 10:17:31.391092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.361 qpair failed and we were unable to recover it. 00:28:46.361 [2024-07-25 10:17:31.391281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.361 [2024-07-25 10:17:31.391331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.361 qpair failed and we were unable to recover it. 00:28:46.361 [2024-07-25 10:17:31.391544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.361 [2024-07-25 10:17:31.391569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.361 qpair failed and we were unable to recover it. 00:28:46.361 [2024-07-25 10:17:31.391777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.361 [2024-07-25 10:17:31.391806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.361 qpair failed and we were unable to recover it. 00:28:46.361 [2024-07-25 10:17:31.391990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.361 [2024-07-25 10:17:31.392018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.361 qpair failed and we were unable to recover it. 
00:28:46.361 [2024-07-25 10:17:31.392236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.361 [2024-07-25 10:17:31.392259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.361 qpair failed and we were unable to recover it. 00:28:46.361 [2024-07-25 10:17:31.392489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.361 [2024-07-25 10:17:31.392514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.361 qpair failed and we were unable to recover it. 00:28:46.361 [2024-07-25 10:17:31.392732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.361 [2024-07-25 10:17:31.392761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.361 qpair failed and we were unable to recover it. 00:28:46.361 [2024-07-25 10:17:31.392946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.361 [2024-07-25 10:17:31.392969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.361 qpair failed and we were unable to recover it. 00:28:46.361 [2024-07-25 10:17:31.393199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.361 [2024-07-25 10:17:31.393228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.361 qpair failed and we were unable to recover it. 00:28:46.361 [2024-07-25 10:17:31.393406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.361 [2024-07-25 10:17:31.393443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.361 qpair failed and we were unable to recover it. 00:28:46.361 [2024-07-25 10:17:31.393676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.361 [2024-07-25 10:17:31.393701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.361 qpair failed and we were unable to recover it. 00:28:46.361 [2024-07-25 10:17:31.393978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.361 [2024-07-25 10:17:31.394011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.361 qpair failed and we were unable to recover it. 00:28:46.361 [2024-07-25 10:17:31.394276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.361 [2024-07-25 10:17:31.394306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.361 qpair failed and we were unable to recover it. 00:28:46.361 [2024-07-25 10:17:31.394575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.361 [2024-07-25 10:17:31.394600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.361 qpair failed and we were unable to recover it. 
00:28:46.361 [2024-07-25 10:17:31.394815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.361 [2024-07-25 10:17:31.394845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420
00:28:46.361 qpair failed and we were unable to recover it.
00:28:46.361 [2024-07-25 10:17:31.395059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.361 [2024-07-25 10:17:31.395088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420
00:28:46.361 qpair failed and we were unable to recover it.
[... the same three-line failure repeats for every reconnect attempt from 10:17:31.395354 through 10:17:31.447027 — roughly 210 attempts in about 52 ms of target time, every one failing with errno = 111 against tqpair=0x18a3ea0, addr=10.0.0.2, port=4420, followed by "qpair failed and we were unable to recover it." ...]
00:28:46.366 [2024-07-25 10:17:31.447268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.366 [2024-07-25 10:17:31.447296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.366 qpair failed and we were unable to recover it. 00:28:46.366 [2024-07-25 10:17:31.447530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.366 [2024-07-25 10:17:31.447560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.366 qpair failed and we were unable to recover it. 00:28:46.366 [2024-07-25 10:17:31.447739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.366 [2024-07-25 10:17:31.447765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.366 qpair failed and we were unable to recover it. 00:28:46.366 [2024-07-25 10:17:31.448007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.366 [2024-07-25 10:17:31.448036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.366 qpair failed and we were unable to recover it. 00:28:46.366 [2024-07-25 10:17:31.448217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.366 [2024-07-25 10:17:31.448260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.366 qpair failed and we were unable to recover it. 00:28:46.366 [2024-07-25 10:17:31.448564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.366 [2024-07-25 10:17:31.448606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.366 qpair failed and we were unable to recover it. 00:28:46.366 [2024-07-25 10:17:31.448838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.366 [2024-07-25 10:17:31.448868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.366 qpair failed and we were unable to recover it. 00:28:46.366 [2024-07-25 10:17:31.449096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.366 [2024-07-25 10:17:31.449125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.366 qpair failed and we were unable to recover it. 00:28:46.366 [2024-07-25 10:17:31.449341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.366 [2024-07-25 10:17:31.449367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.366 qpair failed and we were unable to recover it. 00:28:46.366 [2024-07-25 10:17:31.449612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.367 [2024-07-25 10:17:31.449650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.367 qpair failed and we were unable to recover it. 
00:28:46.367 [2024-07-25 10:17:31.449843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.367 [2024-07-25 10:17:31.449871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.367 qpair failed and we were unable to recover it. 00:28:46.367 [2024-07-25 10:17:31.450140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.367 [2024-07-25 10:17:31.450165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.367 qpair failed and we were unable to recover it. 00:28:46.367 [2024-07-25 10:17:31.450353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.367 [2024-07-25 10:17:31.450382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.367 qpair failed and we were unable to recover it. 00:28:46.367 [2024-07-25 10:17:31.450627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.367 [2024-07-25 10:17:31.450656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.367 qpair failed and we were unable to recover it. 00:28:46.367 [2024-07-25 10:17:31.450884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.367 [2024-07-25 10:17:31.450911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.367 qpair failed and we were unable to recover it. 00:28:46.367 [2024-07-25 10:17:31.451228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.367 [2024-07-25 10:17:31.451257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.367 qpair failed and we were unable to recover it. 00:28:46.367 [2024-07-25 10:17:31.451467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.367 [2024-07-25 10:17:31.451497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.367 qpair failed and we were unable to recover it. 00:28:46.367 [2024-07-25 10:17:31.451689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.367 [2024-07-25 10:17:31.451715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.367 qpair failed and we were unable to recover it. 00:28:46.367 [2024-07-25 10:17:31.451910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.367 [2024-07-25 10:17:31.451939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.367 qpair failed and we were unable to recover it. 00:28:46.367 [2024-07-25 10:17:31.452102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.367 [2024-07-25 10:17:31.452131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.367 qpair failed and we were unable to recover it. 
00:28:46.367 [2024-07-25 10:17:31.452303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.367 [2024-07-25 10:17:31.452327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.367 qpair failed and we were unable to recover it. 00:28:46.367 [2024-07-25 10:17:31.452558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.367 [2024-07-25 10:17:31.452588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.367 qpair failed and we were unable to recover it. 00:28:46.367 [2024-07-25 10:17:31.452739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.367 [2024-07-25 10:17:31.452768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.367 qpair failed and we were unable to recover it. 00:28:46.367 [2024-07-25 10:17:31.452898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.367 [2024-07-25 10:17:31.452924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.367 qpair failed and we were unable to recover it. 00:28:46.367 [2024-07-25 10:17:31.453065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.367 [2024-07-25 10:17:31.453091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.367 qpair failed and we were unable to recover it. 00:28:46.367 [2024-07-25 10:17:31.453314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.367 [2024-07-25 10:17:31.453343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.367 qpair failed and we were unable to recover it. 00:28:46.367 [2024-07-25 10:17:31.453530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.367 [2024-07-25 10:17:31.453563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.367 qpair failed and we were unable to recover it. 00:28:46.367 [2024-07-25 10:17:31.453805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.367 [2024-07-25 10:17:31.453833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.367 qpair failed and we were unable to recover it. 00:28:46.367 [2024-07-25 10:17:31.454016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.367 [2024-07-25 10:17:31.454045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.367 qpair failed and we were unable to recover it. 00:28:46.367 [2024-07-25 10:17:31.454211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.367 [2024-07-25 10:17:31.454238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.367 qpair failed and we were unable to recover it. 
00:28:46.367 [2024-07-25 10:17:31.454488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.367 [2024-07-25 10:17:31.454518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.367 qpair failed and we were unable to recover it. 00:28:46.367 [2024-07-25 10:17:31.454742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.367 [2024-07-25 10:17:31.454775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.367 qpair failed and we were unable to recover it. 00:28:46.367 [2024-07-25 10:17:31.455008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.367 [2024-07-25 10:17:31.455034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.367 qpair failed and we were unable to recover it. 00:28:46.367 [2024-07-25 10:17:31.455286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.367 [2024-07-25 10:17:31.455315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.367 qpair failed and we were unable to recover it. 00:28:46.367 [2024-07-25 10:17:31.455575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.367 [2024-07-25 10:17:31.455602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.367 qpair failed and we were unable to recover it. 00:28:46.367 [2024-07-25 10:17:31.455839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.367 [2024-07-25 10:17:31.455865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.367 qpair failed and we were unable to recover it. 00:28:46.367 [2024-07-25 10:17:31.456059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.367 [2024-07-25 10:17:31.456099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.367 qpair failed and we were unable to recover it. 00:28:46.367 [2024-07-25 10:17:31.456327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.367 [2024-07-25 10:17:31.456354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.367 qpair failed and we were unable to recover it. 00:28:46.367 [2024-07-25 10:17:31.456570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.367 [2024-07-25 10:17:31.456597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.367 qpair failed and we were unable to recover it. 00:28:46.367 [2024-07-25 10:17:31.456779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.367 [2024-07-25 10:17:31.456819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.367 qpair failed and we were unable to recover it. 
00:28:46.367 [2024-07-25 10:17:31.457040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.367 [2024-07-25 10:17:31.457069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.367 qpair failed and we were unable to recover it. 00:28:46.367 [2024-07-25 10:17:31.457259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.367 [2024-07-25 10:17:31.457285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.367 qpair failed and we were unable to recover it. 00:28:46.367 [2024-07-25 10:17:31.457465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.367 [2024-07-25 10:17:31.457506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.367 qpair failed and we were unable to recover it. 00:28:46.367 [2024-07-25 10:17:31.457741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.367 [2024-07-25 10:17:31.457770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.367 qpair failed and we were unable to recover it. 00:28:46.367 [2024-07-25 10:17:31.457988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.367 [2024-07-25 10:17:31.458014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.367 qpair failed and we were unable to recover it. 00:28:46.367 [2024-07-25 10:17:31.458246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.367 [2024-07-25 10:17:31.458275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.367 qpair failed and we were unable to recover it. 00:28:46.367 [2024-07-25 10:17:31.458473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.367 [2024-07-25 10:17:31.458520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.367 qpair failed and we were unable to recover it. 00:28:46.367 [2024-07-25 10:17:31.458685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.367 [2024-07-25 10:17:31.458711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.367 qpair failed and we were unable to recover it. 00:28:46.367 [2024-07-25 10:17:31.458915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.367 [2024-07-25 10:17:31.458944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.367 qpair failed and we were unable to recover it. 00:28:46.367 [2024-07-25 10:17:31.459142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.367 [2024-07-25 10:17:31.459171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.367 qpair failed and we were unable to recover it. 
00:28:46.367 [2024-07-25 10:17:31.459327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.367 [2024-07-25 10:17:31.459353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.367 qpair failed and we were unable to recover it. 00:28:46.367 [2024-07-25 10:17:31.459551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.367 [2024-07-25 10:17:31.459578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.367 qpair failed and we were unable to recover it. 00:28:46.367 [2024-07-25 10:17:31.459751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.367 [2024-07-25 10:17:31.459780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.367 qpair failed and we were unable to recover it. 00:28:46.368 [2024-07-25 10:17:31.459921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.368 [2024-07-25 10:17:31.459962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.368 qpair failed and we were unable to recover it. 00:28:46.368 [2024-07-25 10:17:31.460197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.368 [2024-07-25 10:17:31.460226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.368 qpair failed and we were unable to recover it. 00:28:46.368 [2024-07-25 10:17:31.460484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.368 [2024-07-25 10:17:31.460514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.368 qpair failed and we were unable to recover it. 00:28:46.368 [2024-07-25 10:17:31.460728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.368 [2024-07-25 10:17:31.460754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.368 qpair failed and we were unable to recover it. 00:28:46.368 [2024-07-25 10:17:31.460928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.368 [2024-07-25 10:17:31.460956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.368 qpair failed and we were unable to recover it. 00:28:46.368 [2024-07-25 10:17:31.461174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.368 [2024-07-25 10:17:31.461203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.368 qpair failed and we were unable to recover it. 00:28:46.368 [2024-07-25 10:17:31.461443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.368 [2024-07-25 10:17:31.461473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.368 qpair failed and we were unable to recover it. 
00:28:46.368 [2024-07-25 10:17:31.461695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.368 [2024-07-25 10:17:31.461738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.368 qpair failed and we were unable to recover it. 00:28:46.368 [2024-07-25 10:17:31.461989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.368 [2024-07-25 10:17:31.462019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.368 qpair failed and we were unable to recover it. 00:28:46.368 [2024-07-25 10:17:31.462199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.368 [2024-07-25 10:17:31.462225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.368 qpair failed and we were unable to recover it. 00:28:46.368 [2024-07-25 10:17:31.462404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.368 [2024-07-25 10:17:31.462440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.368 qpair failed and we were unable to recover it. 00:28:46.368 [2024-07-25 10:17:31.462646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.368 [2024-07-25 10:17:31.462673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.368 qpair failed and we were unable to recover it. 00:28:46.368 [2024-07-25 10:17:31.462933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.368 [2024-07-25 10:17:31.462960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.368 qpair failed and we were unable to recover it. 00:28:46.368 [2024-07-25 10:17:31.463156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.368 [2024-07-25 10:17:31.463185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.368 qpair failed and we were unable to recover it. 00:28:46.368 [2024-07-25 10:17:31.463400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.368 [2024-07-25 10:17:31.463438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.368 qpair failed and we were unable to recover it. 00:28:46.368 [2024-07-25 10:17:31.463667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.368 [2024-07-25 10:17:31.463693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.368 qpair failed and we were unable to recover it. 00:28:46.368 [2024-07-25 10:17:31.463915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.368 [2024-07-25 10:17:31.463944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.368 qpair failed and we were unable to recover it. 
00:28:46.368 [2024-07-25 10:17:31.464134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.368 [2024-07-25 10:17:31.464162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.368 qpair failed and we were unable to recover it. 00:28:46.368 [2024-07-25 10:17:31.464378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.368 [2024-07-25 10:17:31.464418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.368 qpair failed and we were unable to recover it. 00:28:46.368 [2024-07-25 10:17:31.464648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.368 [2024-07-25 10:17:31.464678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.368 qpair failed and we were unable to recover it. 00:28:46.368 [2024-07-25 10:17:31.464902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.368 [2024-07-25 10:17:31.464931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.368 qpair failed and we were unable to recover it. 00:28:46.368 [2024-07-25 10:17:31.465156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.368 [2024-07-25 10:17:31.465197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.368 qpair failed and we were unable to recover it. 00:28:46.368 [2024-07-25 10:17:31.465410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.368 [2024-07-25 10:17:31.465447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.368 qpair failed and we were unable to recover it. 00:28:46.368 [2024-07-25 10:17:31.465705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.368 [2024-07-25 10:17:31.465734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.368 qpair failed and we were unable to recover it. 00:28:46.368 [2024-07-25 10:17:31.465906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.368 [2024-07-25 10:17:31.465946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.368 qpair failed and we were unable to recover it. 00:28:46.368 [2024-07-25 10:17:31.466163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.368 [2024-07-25 10:17:31.466192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.368 qpair failed and we were unable to recover it. 00:28:46.368 [2024-07-25 10:17:31.466450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.368 [2024-07-25 10:17:31.466480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.368 qpair failed and we were unable to recover it. 
00:28:46.368 [2024-07-25 10:17:31.466707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.368 [2024-07-25 10:17:31.466748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.368 qpair failed and we were unable to recover it. 00:28:46.368 [2024-07-25 10:17:31.466987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.368 [2024-07-25 10:17:31.467016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.368 qpair failed and we were unable to recover it. 00:28:46.368 [2024-07-25 10:17:31.467316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.368 [2024-07-25 10:17:31.467345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.368 qpair failed and we were unable to recover it. 00:28:46.368 [2024-07-25 10:17:31.467619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.368 [2024-07-25 10:17:31.467647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.368 qpair failed and we were unable to recover it. 00:28:46.368 [2024-07-25 10:17:31.467903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.368 [2024-07-25 10:17:31.467931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.368 qpair failed and we were unable to recover it. 00:28:46.368 [2024-07-25 10:17:31.468116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.368 [2024-07-25 10:17:31.468145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.368 qpair failed and we were unable to recover it. 00:28:46.368 [2024-07-25 10:17:31.468367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.368 [2024-07-25 10:17:31.468406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.368 qpair failed and we were unable to recover it. 00:28:46.368 [2024-07-25 10:17:31.468595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.368 [2024-07-25 10:17:31.468622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.368 qpair failed and we were unable to recover it. 00:28:46.368 [2024-07-25 10:17:31.468777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.368 [2024-07-25 10:17:31.468806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.368 qpair failed and we were unable to recover it. 00:28:46.368 [2024-07-25 10:17:31.469010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.368 [2024-07-25 10:17:31.469036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.368 qpair failed and we were unable to recover it. 
00:28:46.368 [2024-07-25 10:17:31.469200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.368 [2024-07-25 10:17:31.469229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.368 qpair failed and we were unable to recover it. 00:28:46.368 [2024-07-25 10:17:31.469500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.368 [2024-07-25 10:17:31.469530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.368 qpair failed and we were unable to recover it. 00:28:46.368 [2024-07-25 10:17:31.469792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.368 [2024-07-25 10:17:31.469819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.368 qpair failed and we were unable to recover it. 00:28:46.368 [2024-07-25 10:17:31.470053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.368 [2024-07-25 10:17:31.470082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.368 qpair failed and we were unable to recover it. 00:28:46.368 [2024-07-25 10:17:31.470306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.368 [2024-07-25 10:17:31.470335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.368 qpair failed and we were unable to recover it. 00:28:46.368 [2024-07-25 10:17:31.470590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.368 [2024-07-25 10:17:31.470617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.368 qpair failed and we were unable to recover it. 00:28:46.368 [2024-07-25 10:17:31.470875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.368 [2024-07-25 10:17:31.470904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.368 qpair failed and we were unable to recover it. 00:28:46.369 [2024-07-25 10:17:31.471092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.369 [2024-07-25 10:17:31.471121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.369 qpair failed and we were unable to recover it. 00:28:46.369 [2024-07-25 10:17:31.471279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.369 [2024-07-25 10:17:31.471304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.369 qpair failed and we were unable to recover it. 00:28:46.369 [2024-07-25 10:17:31.471484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.369 [2024-07-25 10:17:31.471530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.369 qpair failed and we were unable to recover it. 
00:28:46.369 [2024-07-25 10:17:31.471757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.369 [2024-07-25 10:17:31.471786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.369 qpair failed and we were unable to recover it. 00:28:46.369 [2024-07-25 10:17:31.472067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.369 [2024-07-25 10:17:31.472094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.369 qpair failed and we were unable to recover it. 00:28:46.369 [2024-07-25 10:17:31.472369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.369 [2024-07-25 10:17:31.472398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.369 qpair failed and we were unable to recover it. 00:28:46.369 [2024-07-25 10:17:31.472585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.369 [2024-07-25 10:17:31.472614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.369 qpair failed and we were unable to recover it. 00:28:46.369 [2024-07-25 10:17:31.472796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.369 [2024-07-25 10:17:31.472821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.369 qpair failed and we were unable to recover it. 00:28:46.369 [2024-07-25 10:17:31.473017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.369 [2024-07-25 10:17:31.473047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.369 qpair failed and we were unable to recover it. 00:28:46.369 [2024-07-25 10:17:31.473256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.369 [2024-07-25 10:17:31.473284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.369 qpair failed and we were unable to recover it. 00:28:46.369 [2024-07-25 10:17:31.473485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.369 [2024-07-25 10:17:31.473512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.369 qpair failed and we were unable to recover it. 00:28:46.369 [2024-07-25 10:17:31.473679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.369 [2024-07-25 10:17:31.473708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.369 qpair failed and we were unable to recover it. 00:28:46.369 [2024-07-25 10:17:31.473910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.369 [2024-07-25 10:17:31.473938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.369 qpair failed and we were unable to recover it. 
00:28:46.369 [2024-07-25 10:17:31.474145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.369 [2024-07-25 10:17:31.474172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.369 qpair failed and we were unable to recover it. 00:28:46.369 [2024-07-25 10:17:31.474345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.369 [2024-07-25 10:17:31.474373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.369 qpair failed and we were unable to recover it. 00:28:46.369 [2024-07-25 10:17:31.474551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.369 [2024-07-25 10:17:31.474581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.369 qpair failed and we were unable to recover it. 00:28:46.369 [2024-07-25 10:17:31.474766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.369 [2024-07-25 10:17:31.474792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.369 qpair failed and we were unable to recover it. 00:28:46.369 [2024-07-25 10:17:31.474993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.369 [2024-07-25 10:17:31.475021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.369 qpair failed and we were unable to recover it. 00:28:46.369 [2024-07-25 10:17:31.475172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.369 [2024-07-25 10:17:31.475201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.369 qpair failed and we were unable to recover it. 00:28:46.369 [2024-07-25 10:17:31.475462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.369 [2024-07-25 10:17:31.475507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.369 qpair failed and we were unable to recover it. 00:28:46.369 [2024-07-25 10:17:31.475739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.369 [2024-07-25 10:17:31.475768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.369 qpair failed and we were unable to recover it. 00:28:46.369 [2024-07-25 10:17:31.475942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.369 [2024-07-25 10:17:31.475970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.369 qpair failed and we were unable to recover it. 00:28:46.645 [2024-07-25 10:17:31.476194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.645 [2024-07-25 10:17:31.476220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.645 qpair failed and we were unable to recover it. 
00:28:46.645 [2024-07-25 10:17:31.476392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.645 [2024-07-25 10:17:31.476438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.645 qpair failed and we were unable to recover it. 00:28:46.645 [2024-07-25 10:17:31.476700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.645 [2024-07-25 10:17:31.476744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.645 qpair failed and we were unable to recover it. 00:28:46.645 [2024-07-25 10:17:31.476986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.646 [2024-07-25 10:17:31.477013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.646 qpair failed and we were unable to recover it. 00:28:46.646 [2024-07-25 10:17:31.477242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.646 [2024-07-25 10:17:31.477271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.646 qpair failed and we were unable to recover it. 00:28:46.646 [2024-07-25 10:17:31.477476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.646 [2024-07-25 10:17:31.477506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.646 qpair failed and we were unable to recover it. 00:28:46.646 [2024-07-25 10:17:31.477727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.646 [2024-07-25 10:17:31.477753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.646 qpair failed and we were unable to recover it. 00:28:46.646 [2024-07-25 10:17:31.478017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.646 [2024-07-25 10:17:31.478050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.646 qpair failed and we were unable to recover it. 00:28:46.646 [2024-07-25 10:17:31.478270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.646 [2024-07-25 10:17:31.478300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.646 qpair failed and we were unable to recover it. 00:28:46.646 [2024-07-25 10:17:31.478476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.646 [2024-07-25 10:17:31.478502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.646 qpair failed and we were unable to recover it. 00:28:46.646 [2024-07-25 10:17:31.478690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.646 [2024-07-25 10:17:31.478719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.646 qpair failed and we were unable to recover it. 
00:28:46.646 [2024-07-25 10:17:31.478988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.646 [2024-07-25 10:17:31.479017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.646 qpair failed and we were unable to recover it.
[... the same three-message failure (posix.c:1023:posix_sock_create: connect() failed, errno = 111; nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats roughly 200 more times, unchanged except for the timestamps, which advance from 10:17:31.479233 through 10:17:31.531074 ...]
00:28:46.651 [2024-07-25 10:17:31.531367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.651 [2024-07-25 10:17:31.531396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.651 qpair failed and we were unable to recover it.
00:28:46.651 [2024-07-25 10:17:31.531636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.651 [2024-07-25 10:17:31.531666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.651 qpair failed and we were unable to recover it. 00:28:46.652 [2024-07-25 10:17:31.531872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.652 [2024-07-25 10:17:31.531896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.652 qpair failed and we were unable to recover it. 00:28:46.652 [2024-07-25 10:17:31.532150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.652 [2024-07-25 10:17:31.532199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.652 qpair failed and we were unable to recover it. 00:28:46.652 [2024-07-25 10:17:31.532381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.652 [2024-07-25 10:17:31.532415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.652 qpair failed and we were unable to recover it. 00:28:46.652 [2024-07-25 10:17:31.532607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.652 [2024-07-25 10:17:31.532636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.652 qpair failed and we were unable to recover it. 00:28:46.652 [2024-07-25 10:17:31.532855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.652 [2024-07-25 10:17:31.532879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.652 qpair failed and we were unable to recover it. 00:28:46.652 [2024-07-25 10:17:31.533135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.652 [2024-07-25 10:17:31.533164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.652 qpair failed and we were unable to recover it. 00:28:46.652 [2024-07-25 10:17:31.533406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.652 [2024-07-25 10:17:31.533444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.652 qpair failed and we were unable to recover it. 00:28:46.652 [2024-07-25 10:17:31.533720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.652 [2024-07-25 10:17:31.533750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.652 qpair failed and we were unable to recover it. 00:28:46.652 [2024-07-25 10:17:31.533933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.652 [2024-07-25 10:17:31.533955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.652 qpair failed and we were unable to recover it. 
00:28:46.652 [2024-07-25 10:17:31.534262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.652 [2024-07-25 10:17:31.534311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.652 qpair failed and we were unable to recover it. 00:28:46.652 [2024-07-25 10:17:31.534586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.652 [2024-07-25 10:17:31.534617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.652 qpair failed and we were unable to recover it. 00:28:46.652 [2024-07-25 10:17:31.534868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.652 [2024-07-25 10:17:31.534896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.652 qpair failed and we were unable to recover it. 00:28:46.652 [2024-07-25 10:17:31.535072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.652 [2024-07-25 10:17:31.535095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.652 qpair failed and we were unable to recover it. 00:28:46.652 [2024-07-25 10:17:31.535272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.652 [2024-07-25 10:17:31.535300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.652 qpair failed and we were unable to recover it. 00:28:46.652 [2024-07-25 10:17:31.535505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.652 [2024-07-25 10:17:31.535530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.652 qpair failed and we were unable to recover it. 00:28:46.652 [2024-07-25 10:17:31.535739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.652 [2024-07-25 10:17:31.535768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.652 qpair failed and we were unable to recover it. 00:28:46.652 [2024-07-25 10:17:31.535990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.652 [2024-07-25 10:17:31.536013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.652 qpair failed and we were unable to recover it. 00:28:46.652 [2024-07-25 10:17:31.536295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.652 [2024-07-25 10:17:31.536346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.652 qpair failed and we were unable to recover it. 00:28:46.652 [2024-07-25 10:17:31.536565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.652 [2024-07-25 10:17:31.536594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.652 qpair failed and we were unable to recover it. 
00:28:46.652 [2024-07-25 10:17:31.536825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.652 [2024-07-25 10:17:31.536854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.652 qpair failed and we were unable to recover it. 00:28:46.652 [2024-07-25 10:17:31.537030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.652 [2024-07-25 10:17:31.537055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.652 qpair failed and we were unable to recover it. 00:28:46.652 [2024-07-25 10:17:31.537225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.652 [2024-07-25 10:17:31.537266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.652 qpair failed and we were unable to recover it. 00:28:46.652 [2024-07-25 10:17:31.537451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.652 [2024-07-25 10:17:31.537480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.652 qpair failed and we were unable to recover it. 00:28:46.652 [2024-07-25 10:17:31.537677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.652 [2024-07-25 10:17:31.537706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.652 qpair failed and we were unable to recover it. 00:28:46.652 [2024-07-25 10:17:31.537900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.652 [2024-07-25 10:17:31.537924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.652 qpair failed and we were unable to recover it. 00:28:46.652 [2024-07-25 10:17:31.538179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.652 [2024-07-25 10:17:31.538230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.652 qpair failed and we were unable to recover it. 00:28:46.652 [2024-07-25 10:17:31.538440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.652 [2024-07-25 10:17:31.538470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.652 qpair failed and we were unable to recover it. 00:28:46.652 [2024-07-25 10:17:31.538673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.652 [2024-07-25 10:17:31.538701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.652 qpair failed and we were unable to recover it. 00:28:46.652 [2024-07-25 10:17:31.538964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.652 [2024-07-25 10:17:31.538988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.652 qpair failed and we were unable to recover it. 
00:28:46.652 [2024-07-25 10:17:31.539283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.652 [2024-07-25 10:17:31.539338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.652 qpair failed and we were unable to recover it. 00:28:46.652 [2024-07-25 10:17:31.539539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.652 [2024-07-25 10:17:31.539569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.652 qpair failed and we were unable to recover it. 00:28:46.652 [2024-07-25 10:17:31.539729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.652 [2024-07-25 10:17:31.539758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.652 qpair failed and we were unable to recover it. 00:28:46.652 [2024-07-25 10:17:31.539993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.652 [2024-07-25 10:17:31.540017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.652 qpair failed and we were unable to recover it. 00:28:46.652 [2024-07-25 10:17:31.540181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.652 [2024-07-25 10:17:31.540231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.652 qpair failed and we were unable to recover it. 00:28:46.652 [2024-07-25 10:17:31.540489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.652 [2024-07-25 10:17:31.540519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.652 qpair failed and we were unable to recover it. 00:28:46.652 [2024-07-25 10:17:31.540754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.652 [2024-07-25 10:17:31.540783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.652 qpair failed and we were unable to recover it. 00:28:46.652 [2024-07-25 10:17:31.541014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.652 [2024-07-25 10:17:31.541037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.652 qpair failed and we were unable to recover it. 00:28:46.652 [2024-07-25 10:17:31.541328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.652 [2024-07-25 10:17:31.541378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.652 qpair failed and we were unable to recover it. 00:28:46.652 [2024-07-25 10:17:31.541647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.653 [2024-07-25 10:17:31.541673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.653 qpair failed and we were unable to recover it. 
00:28:46.653 [2024-07-25 10:17:31.541842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.653 [2024-07-25 10:17:31.541871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.653 qpair failed and we were unable to recover it. 00:28:46.653 [2024-07-25 10:17:31.542053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.653 [2024-07-25 10:17:31.542076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.653 qpair failed and we were unable to recover it. 00:28:46.653 [2024-07-25 10:17:31.542270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.653 [2024-07-25 10:17:31.542320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.653 qpair failed and we were unable to recover it. 00:28:46.653 [2024-07-25 10:17:31.542502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.653 [2024-07-25 10:17:31.542532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.653 qpair failed and we were unable to recover it. 00:28:46.653 [2024-07-25 10:17:31.542768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.653 [2024-07-25 10:17:31.542797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.653 qpair failed and we were unable to recover it. 00:28:46.653 [2024-07-25 10:17:31.542991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.653 [2024-07-25 10:17:31.543014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.653 qpair failed and we were unable to recover it. 00:28:46.653 [2024-07-25 10:17:31.543272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.653 [2024-07-25 10:17:31.543321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.653 qpair failed and we were unable to recover it. 00:28:46.653 [2024-07-25 10:17:31.543578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.653 [2024-07-25 10:17:31.543607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.653 qpair failed and we were unable to recover it. 00:28:46.653 [2024-07-25 10:17:31.543806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.653 [2024-07-25 10:17:31.543835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.653 qpair failed and we were unable to recover it. 00:28:46.653 [2024-07-25 10:17:31.544035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.653 [2024-07-25 10:17:31.544058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.653 qpair failed and we were unable to recover it. 
00:28:46.653 [2024-07-25 10:17:31.544281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.653 [2024-07-25 10:17:31.544309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.653 qpair failed and we were unable to recover it. 00:28:46.653 [2024-07-25 10:17:31.544526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.653 [2024-07-25 10:17:31.544555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.653 qpair failed and we were unable to recover it. 00:28:46.653 [2024-07-25 10:17:31.544735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.653 [2024-07-25 10:17:31.544763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.653 qpair failed and we were unable to recover it. 00:28:46.653 [2024-07-25 10:17:31.544966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.653 [2024-07-25 10:17:31.544989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.653 qpair failed and we were unable to recover it. 00:28:46.653 [2024-07-25 10:17:31.545195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.653 [2024-07-25 10:17:31.545247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.653 qpair failed and we were unable to recover it. 00:28:46.653 [2024-07-25 10:17:31.545514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.653 [2024-07-25 10:17:31.545544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.653 qpair failed and we were unable to recover it. 00:28:46.653 [2024-07-25 10:17:31.545782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.653 [2024-07-25 10:17:31.545810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.653 qpair failed and we were unable to recover it. 00:28:46.653 [2024-07-25 10:17:31.545991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.653 [2024-07-25 10:17:31.546015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.653 qpair failed and we were unable to recover it. 00:28:46.653 [2024-07-25 10:17:31.546250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.653 [2024-07-25 10:17:31.546307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.653 qpair failed and we were unable to recover it. 00:28:46.653 [2024-07-25 10:17:31.546528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.653 [2024-07-25 10:17:31.546557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.653 qpair failed and we were unable to recover it. 
00:28:46.653 [2024-07-25 10:17:31.546742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.653 [2024-07-25 10:17:31.546770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.653 qpair failed and we were unable to recover it. 00:28:46.653 [2024-07-25 10:17:31.546989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.653 [2024-07-25 10:17:31.547013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.653 qpair failed and we were unable to recover it. 00:28:46.653 [2024-07-25 10:17:31.547184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.653 [2024-07-25 10:17:31.547234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.653 qpair failed and we were unable to recover it. 00:28:46.653 [2024-07-25 10:17:31.547449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.653 [2024-07-25 10:17:31.547489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.653 qpair failed and we were unable to recover it. 00:28:46.653 [2024-07-25 10:17:31.547645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.653 [2024-07-25 10:17:31.547674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.653 qpair failed and we were unable to recover it. 00:28:46.653 [2024-07-25 10:17:31.547878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.653 [2024-07-25 10:17:31.547902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.653 qpair failed and we were unable to recover it. 00:28:46.653 [2024-07-25 10:17:31.548163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.653 [2024-07-25 10:17:31.548214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.653 qpair failed and we were unable to recover it. 00:28:46.653 [2024-07-25 10:17:31.548445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.653 [2024-07-25 10:17:31.548475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.653 qpair failed and we were unable to recover it. 00:28:46.653 [2024-07-25 10:17:31.548698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.653 [2024-07-25 10:17:31.548727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.653 qpair failed and we were unable to recover it. 00:28:46.653 [2024-07-25 10:17:31.548977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.653 [2024-07-25 10:17:31.549000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.653 qpair failed and we were unable to recover it. 
00:28:46.653 [2024-07-25 10:17:31.549200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.653 [2024-07-25 10:17:31.549251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.653 qpair failed and we were unable to recover it. 00:28:46.653 [2024-07-25 10:17:31.549457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.653 [2024-07-25 10:17:31.549486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.653 qpair failed and we were unable to recover it. 00:28:46.653 [2024-07-25 10:17:31.549744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.653 [2024-07-25 10:17:31.549773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.653 qpair failed and we were unable to recover it. 00:28:46.653 [2024-07-25 10:17:31.550000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.653 [2024-07-25 10:17:31.550024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.653 qpair failed and we were unable to recover it. 00:28:46.653 [2024-07-25 10:17:31.550294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.653 [2024-07-25 10:17:31.550344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.653 qpair failed and we were unable to recover it. 00:28:46.653 [2024-07-25 10:17:31.550533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.653 [2024-07-25 10:17:31.550558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.653 qpair failed and we were unable to recover it. 00:28:46.653 [2024-07-25 10:17:31.550748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.653 [2024-07-25 10:17:31.550785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.653 qpair failed and we were unable to recover it. 00:28:46.653 [2024-07-25 10:17:31.550923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.654 [2024-07-25 10:17:31.550946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.654 qpair failed and we were unable to recover it. 00:28:46.654 [2024-07-25 10:17:31.551142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.654 [2024-07-25 10:17:31.551204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.654 qpair failed and we were unable to recover it. 00:28:46.654 [2024-07-25 10:17:31.551339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.654 [2024-07-25 10:17:31.551368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.654 qpair failed and we were unable to recover it. 
00:28:46.654 [2024-07-25 10:17:31.551586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.654 [2024-07-25 10:17:31.551616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.654 qpair failed and we were unable to recover it. 00:28:46.654 [2024-07-25 10:17:31.551846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.654 [2024-07-25 10:17:31.551869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.654 qpair failed and we were unable to recover it. 00:28:46.654 [2024-07-25 10:17:31.552148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.654 [2024-07-25 10:17:31.552199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.654 qpair failed and we were unable to recover it. 00:28:46.654 [2024-07-25 10:17:31.552418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.654 [2024-07-25 10:17:31.552456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.654 qpair failed and we were unable to recover it. 00:28:46.654 [2024-07-25 10:17:31.552706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.654 [2024-07-25 10:17:31.552735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.654 qpair failed and we were unable to recover it. 00:28:46.654 [2024-07-25 10:17:31.553014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.654 [2024-07-25 10:17:31.553038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.654 qpair failed and we were unable to recover it. 00:28:46.654 [2024-07-25 10:17:31.553295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.654 [2024-07-25 10:17:31.553345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.654 qpair failed and we were unable to recover it. 00:28:46.654 [2024-07-25 10:17:31.553575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.654 [2024-07-25 10:17:31.553605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.654 qpair failed and we were unable to recover it. 00:28:46.654 [2024-07-25 10:17:31.553767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.654 [2024-07-25 10:17:31.553796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.654 qpair failed and we were unable to recover it. 00:28:46.654 [2024-07-25 10:17:31.553977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.654 [2024-07-25 10:17:31.554011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.654 qpair failed and we were unable to recover it. 
00:28:46.654 [2024-07-25 10:17:31.554332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.654 [2024-07-25 10:17:31.554388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.654 qpair failed and we were unable to recover it. 00:28:46.654 [2024-07-25 10:17:31.554567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.654 [2024-07-25 10:17:31.554592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.654 qpair failed and we were unable to recover it. 00:28:46.654 [2024-07-25 10:17:31.554760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.654 [2024-07-25 10:17:31.554789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.654 qpair failed and we were unable to recover it. 00:28:46.654 [2024-07-25 10:17:31.555011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.654 [2024-07-25 10:17:31.555033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.654 qpair failed and we were unable to recover it. 00:28:46.654 [2024-07-25 10:17:31.555328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.654 [2024-07-25 10:17:31.555379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.654 qpair failed and we were unable to recover it. 00:28:46.654 [2024-07-25 10:17:31.555595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.654 [2024-07-25 10:17:31.555622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.654 qpair failed and we were unable to recover it. 00:28:46.654 [2024-07-25 10:17:31.555903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.654 [2024-07-25 10:17:31.555933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.654 qpair failed and we were unable to recover it. 00:28:46.654 [2024-07-25 10:17:31.556143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.654 [2024-07-25 10:17:31.556166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.654 qpair failed and we were unable to recover it. 00:28:46.654 [2024-07-25 10:17:31.556395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.654 [2024-07-25 10:17:31.556437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.654 qpair failed and we were unable to recover it. 00:28:46.654 [2024-07-25 10:17:31.556674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.654 [2024-07-25 10:17:31.556703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.654 qpair failed and we were unable to recover it. 
00:28:46.654 [2024-07-25 10:17:31.556913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.654 [2024-07-25 10:17:31.556942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.654 qpair failed and we were unable to recover it. 00:28:46.654 [2024-07-25 10:17:31.557138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.654 [2024-07-25 10:17:31.557161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.654 qpair failed and we were unable to recover it. 00:28:46.654 [2024-07-25 10:17:31.557344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.654 [2024-07-25 10:17:31.557385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.654 qpair failed and we were unable to recover it. 00:28:46.654 [2024-07-25 10:17:31.557609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.654 [2024-07-25 10:17:31.557633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.654 qpair failed and we were unable to recover it. 00:28:46.654 [2024-07-25 10:17:31.557836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.654 [2024-07-25 10:17:31.557866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.654 qpair failed and we were unable to recover it. 00:28:46.654 [2024-07-25 10:17:31.558075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.654 [2024-07-25 10:17:31.558098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.654 qpair failed and we were unable to recover it. 00:28:46.654 [2024-07-25 10:17:31.558274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.654 [2024-07-25 10:17:31.558303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.654 qpair failed and we were unable to recover it. 00:28:46.654 [2024-07-25 10:17:31.558470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.654 [2024-07-25 10:17:31.558500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.654 qpair failed and we were unable to recover it. 00:28:46.654 [2024-07-25 10:17:31.558719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.654 [2024-07-25 10:17:31.558749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.654 qpair failed and we were unable to recover it. 00:28:46.654 [2024-07-25 10:17:31.558918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.654 [2024-07-25 10:17:31.558941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.654 qpair failed and we were unable to recover it. 
00:28:46.654 [2024-07-25 10:17:31.559155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.654 [2024-07-25 10:17:31.559210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.654 qpair failed and we were unable to recover it. 00:28:46.654 [2024-07-25 10:17:31.559487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.654 [2024-07-25 10:17:31.559517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.654 qpair failed and we were unable to recover it. 00:28:46.654 [2024-07-25 10:17:31.559778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.654 [2024-07-25 10:17:31.559807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.654 qpair failed and we were unable to recover it. 00:28:46.654 [2024-07-25 10:17:31.559989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.654 [2024-07-25 10:17:31.560012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.654 qpair failed and we were unable to recover it. 00:28:46.654 [2024-07-25 10:17:31.560237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.654 [2024-07-25 10:17:31.560288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.654 qpair failed and we were unable to recover it. 00:28:46.654 [2024-07-25 10:17:31.560512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.655 [2024-07-25 10:17:31.560542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.655 qpair failed and we were unable to recover it. 00:28:46.655 [2024-07-25 10:17:31.560764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.655 [2024-07-25 10:17:31.560794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.655 qpair failed and we were unable to recover it. 00:28:46.655 [2024-07-25 10:17:31.560979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.655 [2024-07-25 10:17:31.561002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.655 qpair failed and we were unable to recover it. 00:28:46.655 [2024-07-25 10:17:31.561192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.655 [2024-07-25 10:17:31.561244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.655 qpair failed and we were unable to recover it. 00:28:46.655 [2024-07-25 10:17:31.561497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.655 [2024-07-25 10:17:31.561527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.655 qpair failed and we were unable to recover it. 
00:28:46.655 [2024-07-25 10:17:31.561689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.655 [2024-07-25 10:17:31.561718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.655 qpair failed and we were unable to recover it. 00:28:46.655 [2024-07-25 10:17:31.561921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.655 [2024-07-25 10:17:31.561944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.655 qpair failed and we were unable to recover it. 00:28:46.655 [2024-07-25 10:17:31.562174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.655 [2024-07-25 10:17:31.562226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.655 qpair failed and we were unable to recover it. 00:28:46.655 [2024-07-25 10:17:31.562441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.655 [2024-07-25 10:17:31.562471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.655 qpair failed and we were unable to recover it. 00:28:46.655 [2024-07-25 10:17:31.562649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.655 [2024-07-25 10:17:31.562678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.655 qpair failed and we were unable to recover it. 00:28:46.655 [2024-07-25 10:17:31.562898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.655 [2024-07-25 10:17:31.562925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.655 qpair failed and we were unable to recover it. 00:28:46.655 [2024-07-25 10:17:31.563114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.655 [2024-07-25 10:17:31.563166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.655 qpair failed and we were unable to recover it. 00:28:46.655 [2024-07-25 10:17:31.563393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.655 [2024-07-25 10:17:31.563423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.655 qpair failed and we were unable to recover it. 00:28:46.655 [2024-07-25 10:17:31.563632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.655 [2024-07-25 10:17:31.563662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.655 qpair failed and we were unable to recover it. 00:28:46.655 [2024-07-25 10:17:31.563921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.655 [2024-07-25 10:17:31.563944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.655 qpair failed and we were unable to recover it. 
00:28:46.655 [2024-07-25 10:17:31.564226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.655 [2024-07-25 10:17:31.564275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.655 qpair failed and we were unable to recover it. 
00:28:46.661 [... the same three-line failure sequence (posix.c:1023:posix_sock_create: connect() failed, errno = 111 / nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it) repeats back-to-back for every retry from 10:17:31.564226 through 10:17:31.617573; only the timestamps change between repetitions ...]
00:28:46.661 [2024-07-25 10:17:31.617742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.661 [2024-07-25 10:17:31.617771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.661 qpair failed and we were unable to recover it. 00:28:46.661 [2024-07-25 10:17:31.617985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.661 [2024-07-25 10:17:31.618008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.661 qpair failed and we were unable to recover it. 00:28:46.661 [2024-07-25 10:17:31.618239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.661 [2024-07-25 10:17:31.618292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.661 qpair failed and we were unable to recover it. 00:28:46.661 [2024-07-25 10:17:31.618525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.661 [2024-07-25 10:17:31.618554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.661 qpair failed and we were unable to recover it. 00:28:46.661 [2024-07-25 10:17:31.618759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.661 [2024-07-25 10:17:31.618788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.661 qpair failed and we were unable to recover it. 00:28:46.661 [2024-07-25 10:17:31.618989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.661 [2024-07-25 10:17:31.619012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.661 qpair failed and we were unable to recover it. 00:28:46.661 [2024-07-25 10:17:31.619247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.661 [2024-07-25 10:17:31.619298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.661 qpair failed and we were unable to recover it. 00:28:46.661 [2024-07-25 10:17:31.619500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.661 [2024-07-25 10:17:31.619529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.661 qpair failed and we were unable to recover it. 00:28:46.661 [2024-07-25 10:17:31.619765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.661 [2024-07-25 10:17:31.619794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.661 qpair failed and we were unable to recover it. 00:28:46.661 [2024-07-25 10:17:31.620049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.661 [2024-07-25 10:17:31.620072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.661 qpair failed and we were unable to recover it. 
00:28:46.661 [2024-07-25 10:17:31.620284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.661 [2024-07-25 10:17:31.620312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.661 qpair failed and we were unable to recover it. 00:28:46.661 [2024-07-25 10:17:31.620515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.661 [2024-07-25 10:17:31.620545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.661 qpair failed and we were unable to recover it. 00:28:46.661 [2024-07-25 10:17:31.620711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.661 [2024-07-25 10:17:31.620740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.661 qpair failed and we were unable to recover it. 00:28:46.661 [2024-07-25 10:17:31.620975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.661 [2024-07-25 10:17:31.620998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.661 qpair failed and we were unable to recover it. 00:28:46.661 [2024-07-25 10:17:31.621224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.661 [2024-07-25 10:17:31.621276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.661 qpair failed and we were unable to recover it. 00:28:46.661 [2024-07-25 10:17:31.621493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.661 [2024-07-25 10:17:31.621518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.661 qpair failed and we were unable to recover it. 00:28:46.661 [2024-07-25 10:17:31.621685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.661 [2024-07-25 10:17:31.621709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.661 qpair failed and we were unable to recover it. 00:28:46.661 [2024-07-25 10:17:31.621870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.661 [2024-07-25 10:17:31.621893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.661 qpair failed and we were unable to recover it. 00:28:46.661 [2024-07-25 10:17:31.622174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.661 [2024-07-25 10:17:31.622204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.661 qpair failed and we were unable to recover it. 00:28:46.661 [2024-07-25 10:17:31.622455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.661 [2024-07-25 10:17:31.622484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.661 qpair failed and we were unable to recover it. 
00:28:46.661 [2024-07-25 10:17:31.622708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.661 [2024-07-25 10:17:31.622737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.661 qpair failed and we were unable to recover it. 00:28:46.661 [2024-07-25 10:17:31.623096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.661 [2024-07-25 10:17:31.623166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.661 qpair failed and we were unable to recover it. 00:28:46.661 [2024-07-25 10:17:31.623457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.661 [2024-07-25 10:17:31.623487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.661 qpair failed and we were unable to recover it. 00:28:46.661 [2024-07-25 10:17:31.623741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.661 [2024-07-25 10:17:31.623770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.661 qpair failed and we were unable to recover it. 00:28:46.661 [2024-07-25 10:17:31.624000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.661 [2024-07-25 10:17:31.624029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.661 qpair failed and we were unable to recover it. 00:28:46.661 [2024-07-25 10:17:31.624291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.661 [2024-07-25 10:17:31.624314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.661 qpair failed and we were unable to recover it. 00:28:46.661 [2024-07-25 10:17:31.624519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.661 [2024-07-25 10:17:31.624549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.661 qpair failed and we were unable to recover it. 00:28:46.661 [2024-07-25 10:17:31.624723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.661 [2024-07-25 10:17:31.624752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.661 qpair failed and we were unable to recover it. 00:28:46.661 [2024-07-25 10:17:31.624930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.661 [2024-07-25 10:17:31.624959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.661 qpair failed and we were unable to recover it. 00:28:46.661 [2024-07-25 10:17:31.625169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.661 [2024-07-25 10:17:31.625192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.661 qpair failed and we were unable to recover it. 
00:28:46.661 [2024-07-25 10:17:31.625382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.661 [2024-07-25 10:17:31.625410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.661 qpair failed and we were unable to recover it. 00:28:46.661 [2024-07-25 10:17:31.625675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.662 [2024-07-25 10:17:31.625705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.662 qpair failed and we were unable to recover it. 00:28:46.662 [2024-07-25 10:17:31.625993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.662 [2024-07-25 10:17:31.626022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.662 qpair failed and we were unable to recover it. 00:28:46.662 [2024-07-25 10:17:31.626285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.662 [2024-07-25 10:17:31.626308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.662 qpair failed and we were unable to recover it. 00:28:46.662 [2024-07-25 10:17:31.626503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.662 [2024-07-25 10:17:31.626532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.662 qpair failed and we were unable to recover it. 00:28:46.662 [2024-07-25 10:17:31.626747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.662 [2024-07-25 10:17:31.626777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.662 qpair failed and we were unable to recover it. 00:28:46.662 [2024-07-25 10:17:31.627038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.662 [2024-07-25 10:17:31.627066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.662 qpair failed and we were unable to recover it. 00:28:46.662 [2024-07-25 10:17:31.627335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.662 [2024-07-25 10:17:31.627358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.662 qpair failed and we were unable to recover it. 00:28:46.662 [2024-07-25 10:17:31.627515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.662 [2024-07-25 10:17:31.627544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.662 qpair failed and we were unable to recover it. 00:28:46.662 [2024-07-25 10:17:31.627757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.662 [2024-07-25 10:17:31.627786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.662 qpair failed and we were unable to recover it. 
00:28:46.662 [2024-07-25 10:17:31.627951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.662 [2024-07-25 10:17:31.627980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.662 qpair failed and we were unable to recover it. 00:28:46.662 [2024-07-25 10:17:31.628151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.662 [2024-07-25 10:17:31.628174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.662 qpair failed and we were unable to recover it. 00:28:46.662 [2024-07-25 10:17:31.628384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.662 [2024-07-25 10:17:31.628413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.662 qpair failed and we were unable to recover it. 00:28:46.662 [2024-07-25 10:17:31.628660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.662 [2024-07-25 10:17:31.628691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.662 qpair failed and we were unable to recover it. 00:28:46.662 [2024-07-25 10:17:31.628920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.662 [2024-07-25 10:17:31.628950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.662 qpair failed and we were unable to recover it. 00:28:46.662 [2024-07-25 10:17:31.629170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.662 [2024-07-25 10:17:31.629193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.662 qpair failed and we were unable to recover it. 00:28:46.662 [2024-07-25 10:17:31.629377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.662 [2024-07-25 10:17:31.629407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.662 qpair failed and we were unable to recover it. 00:28:46.662 [2024-07-25 10:17:31.629585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.662 [2024-07-25 10:17:31.629614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.662 qpair failed and we were unable to recover it. 00:28:46.662 [2024-07-25 10:17:31.629834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.662 [2024-07-25 10:17:31.629863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.662 qpair failed and we were unable to recover it. 00:28:46.662 [2024-07-25 10:17:31.630142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.662 [2024-07-25 10:17:31.630165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.662 qpair failed and we were unable to recover it. 
00:28:46.662 [2024-07-25 10:17:31.630477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.662 [2024-07-25 10:17:31.630501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.662 qpair failed and we were unable to recover it. 00:28:46.662 [2024-07-25 10:17:31.630728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.662 [2024-07-25 10:17:31.630757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.662 qpair failed and we were unable to recover it. 00:28:46.662 [2024-07-25 10:17:31.630997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.662 [2024-07-25 10:17:31.631026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.662 qpair failed and we were unable to recover it. 00:28:46.662 [2024-07-25 10:17:31.631268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.662 [2024-07-25 10:17:31.631291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.662 qpair failed and we were unable to recover it. 00:28:46.662 [2024-07-25 10:17:31.631530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.662 [2024-07-25 10:17:31.631555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.662 qpair failed and we were unable to recover it. 00:28:46.662 [2024-07-25 10:17:31.631758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.662 [2024-07-25 10:17:31.631786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.662 qpair failed and we were unable to recover it. 00:28:46.662 [2024-07-25 10:17:31.631946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.662 [2024-07-25 10:17:31.631975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.662 qpair failed and we were unable to recover it. 00:28:46.662 [2024-07-25 10:17:31.632230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.662 [2024-07-25 10:17:31.632254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.662 qpair failed and we were unable to recover it. 00:28:46.662 [2024-07-25 10:17:31.632495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.662 [2024-07-25 10:17:31.632524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.662 qpair failed and we were unable to recover it. 00:28:46.662 [2024-07-25 10:17:31.632733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.662 [2024-07-25 10:17:31.632763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.662 qpair failed and we were unable to recover it. 
00:28:46.662 [2024-07-25 10:17:31.632994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.662 [2024-07-25 10:17:31.633023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.662 qpair failed and we were unable to recover it. 00:28:46.662 [2024-07-25 10:17:31.633261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.662 [2024-07-25 10:17:31.633284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.662 qpair failed and we were unable to recover it. 00:28:46.662 [2024-07-25 10:17:31.633561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.662 [2024-07-25 10:17:31.633592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.662 qpair failed and we were unable to recover it. 00:28:46.662 [2024-07-25 10:17:31.633858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.662 [2024-07-25 10:17:31.633886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.662 qpair failed and we were unable to recover it. 00:28:46.662 [2024-07-25 10:17:31.634062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.662 [2024-07-25 10:17:31.634090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.662 qpair failed and we were unable to recover it. 00:28:46.662 [2024-07-25 10:17:31.634240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.662 [2024-07-25 10:17:31.634265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.662 qpair failed and we were unable to recover it. 00:28:46.662 [2024-07-25 10:17:31.634468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.662 [2024-07-25 10:17:31.634498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.662 qpair failed and we were unable to recover it. 00:28:46.662 [2024-07-25 10:17:31.634735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.662 [2024-07-25 10:17:31.634764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.662 qpair failed and we were unable to recover it. 00:28:46.662 [2024-07-25 10:17:31.634995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.662 [2024-07-25 10:17:31.635022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.662 qpair failed and we were unable to recover it. 00:28:46.662 [2024-07-25 10:17:31.635241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.663 [2024-07-25 10:17:31.635264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.663 qpair failed and we were unable to recover it. 
00:28:46.663 [2024-07-25 10:17:31.635485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.663 [2024-07-25 10:17:31.635518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.663 qpair failed and we were unable to recover it. 00:28:46.663 [2024-07-25 10:17:31.635708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.663 [2024-07-25 10:17:31.635744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.663 qpair failed and we were unable to recover it. 00:28:46.663 [2024-07-25 10:17:31.635944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.663 [2024-07-25 10:17:31.635973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.663 qpair failed and we were unable to recover it. 00:28:46.663 [2024-07-25 10:17:31.636150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.663 [2024-07-25 10:17:31.636174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.663 qpair failed and we were unable to recover it. 00:28:46.663 [2024-07-25 10:17:31.636350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.663 [2024-07-25 10:17:31.636378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.663 qpair failed and we were unable to recover it. 00:28:46.663 [2024-07-25 10:17:31.636611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.663 [2024-07-25 10:17:31.636636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.663 qpair failed and we were unable to recover it. 00:28:46.663 [2024-07-25 10:17:31.636886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.663 [2024-07-25 10:17:31.636915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.663 qpair failed and we were unable to recover it. 00:28:46.663 [2024-07-25 10:17:31.637057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.663 [2024-07-25 10:17:31.637080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.663 qpair failed and we were unable to recover it. 00:28:46.663 [2024-07-25 10:17:31.637280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.663 [2024-07-25 10:17:31.637309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.663 qpair failed and we were unable to recover it. 00:28:46.663 [2024-07-25 10:17:31.637469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.663 [2024-07-25 10:17:31.637499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.663 qpair failed and we were unable to recover it. 
00:28:46.663 [2024-07-25 10:17:31.637726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.663 [2024-07-25 10:17:31.637754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.663 qpair failed and we were unable to recover it. 00:28:46.663 [2024-07-25 10:17:31.637982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.663 [2024-07-25 10:17:31.638005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.663 qpair failed and we were unable to recover it. 00:28:46.663 [2024-07-25 10:17:31.638241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.663 [2024-07-25 10:17:31.638270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.663 qpair failed and we were unable to recover it. 00:28:46.663 [2024-07-25 10:17:31.638502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.663 [2024-07-25 10:17:31.638532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.663 qpair failed and we were unable to recover it. 00:28:46.663 [2024-07-25 10:17:31.638786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.663 [2024-07-25 10:17:31.638816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.663 qpair failed and we were unable to recover it. 00:28:46.663 [2024-07-25 10:17:31.639038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.663 [2024-07-25 10:17:31.639061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.663 qpair failed and we were unable to recover it. 00:28:46.663 [2024-07-25 10:17:31.639296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.663 [2024-07-25 10:17:31.639346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.663 qpair failed and we were unable to recover it. 00:28:46.663 [2024-07-25 10:17:31.639569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.663 [2024-07-25 10:17:31.639598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.663 qpair failed and we were unable to recover it. 00:28:46.663 [2024-07-25 10:17:31.639800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.663 [2024-07-25 10:17:31.639828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.663 qpair failed and we were unable to recover it. 00:28:46.663 [2024-07-25 10:17:31.640041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.663 [2024-07-25 10:17:31.640064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.663 qpair failed and we were unable to recover it. 
00:28:46.663 [2024-07-25 10:17:31.640252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.663 [2024-07-25 10:17:31.640303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.663 qpair failed and we were unable to recover it. 00:28:46.663 [2024-07-25 10:17:31.640450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.663 [2024-07-25 10:17:31.640480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.663 qpair failed and we were unable to recover it. 00:28:46.663 [2024-07-25 10:17:31.640657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.663 [2024-07-25 10:17:31.640696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.663 qpair failed and we were unable to recover it. 00:28:46.663 [2024-07-25 10:17:31.640942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.663 [2024-07-25 10:17:31.640966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.663 qpair failed and we were unable to recover it. 00:28:46.663 [2024-07-25 10:17:31.641219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.663 [2024-07-25 10:17:31.641271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.663 qpair failed and we were unable to recover it. 00:28:46.663 [2024-07-25 10:17:31.641526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.663 [2024-07-25 10:17:31.641555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.663 qpair failed and we were unable to recover it. 00:28:46.663 [2024-07-25 10:17:31.641770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.663 [2024-07-25 10:17:31.641798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.663 qpair failed and we were unable to recover it. 00:28:46.663 [2024-07-25 10:17:31.641963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.663 [2024-07-25 10:17:31.641989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.663 qpair failed and we were unable to recover it. 00:28:46.663 [2024-07-25 10:17:31.642188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.663 [2024-07-25 10:17:31.642238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.663 qpair failed and we were unable to recover it. 00:28:46.663 [2024-07-25 10:17:31.642461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.663 [2024-07-25 10:17:31.642490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.663 qpair failed and we were unable to recover it. 
00:28:46.663 [2024-07-25 10:17:31.642720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.663 [2024-07-25 10:17:31.642748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.663 qpair failed and we were unable to recover it. 00:28:46.663 [2024-07-25 10:17:31.642941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.663 [2024-07-25 10:17:31.642964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.663 qpair failed and we were unable to recover it. 00:28:46.663 [2024-07-25 10:17:31.643175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.663 [2024-07-25 10:17:31.643226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.663 qpair failed and we were unable to recover it. 00:28:46.663 [2024-07-25 10:17:31.643413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.663 [2024-07-25 10:17:31.643451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.663 qpair failed and we were unable to recover it. 00:28:46.663 [2024-07-25 10:17:31.643682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.663 [2024-07-25 10:17:31.643711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.663 qpair failed and we were unable to recover it. 00:28:46.663 [2024-07-25 10:17:31.643878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.663 [2024-07-25 10:17:31.643902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.663 qpair failed and we were unable to recover it. 00:28:46.663 [2024-07-25 10:17:31.644110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.663 [2024-07-25 10:17:31.644159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.663 qpair failed and we were unable to recover it. 00:28:46.663 [2024-07-25 10:17:31.644447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.664 [2024-07-25 10:17:31.644481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.664 qpair failed and we were unable to recover it. 00:28:46.664 [2024-07-25 10:17:31.644725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.664 [2024-07-25 10:17:31.644754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.664 qpair failed and we were unable to recover it. 00:28:46.664 [2024-07-25 10:17:31.644991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.664 [2024-07-25 10:17:31.645014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.664 qpair failed and we were unable to recover it. 
00:28:46.664 [2024-07-25 10:17:31.645162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.664 [2024-07-25 10:17:31.645214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.664 qpair failed and we were unable to recover it. 00:28:46.664 [2024-07-25 10:17:31.645424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.664 [2024-07-25 10:17:31.645475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.664 qpair failed and we were unable to recover it. 00:28:46.664 [2024-07-25 10:17:31.645699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.664 [2024-07-25 10:17:31.645741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.664 qpair failed and we were unable to recover it. 00:28:46.664 [2024-07-25 10:17:31.645982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.664 [2024-07-25 10:17:31.646005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.664 qpair failed and we were unable to recover it. 00:28:46.664 [2024-07-25 10:17:31.646246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.664 [2024-07-25 10:17:31.646295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.664 qpair failed and we were unable to recover it. 00:28:46.664 [2024-07-25 10:17:31.646506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.664 [2024-07-25 10:17:31.646536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.664 qpair failed and we were unable to recover it. 00:28:46.664 [2024-07-25 10:17:31.646786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.664 [2024-07-25 10:17:31.646814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.664 qpair failed and we were unable to recover it. 00:28:46.664 [2024-07-25 10:17:31.647029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.664 [2024-07-25 10:17:31.647052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.664 qpair failed and we were unable to recover it. 00:28:46.664 [2024-07-25 10:17:31.647291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.664 [2024-07-25 10:17:31.647339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.664 qpair failed and we were unable to recover it. 00:28:46.664 [2024-07-25 10:17:31.647599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.664 [2024-07-25 10:17:31.647628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.664 qpair failed and we were unable to recover it. 
00:28:46.664 [2024-07-25 10:17:31.647812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.664 [2024-07-25 10:17:31.647841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.664 qpair failed and we were unable to recover it. 00:28:46.664 [2024-07-25 10:17:31.648067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.664 [2024-07-25 10:17:31.648090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.664 qpair failed and we were unable to recover it. 00:28:46.664 [2024-07-25 10:17:31.648362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.664 [2024-07-25 10:17:31.648390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.664 qpair failed and we were unable to recover it. 00:28:46.664 [2024-07-25 10:17:31.648608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.664 [2024-07-25 10:17:31.648633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.664 qpair failed and we were unable to recover it. 00:28:46.664 [2024-07-25 10:17:31.648858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.664 [2024-07-25 10:17:31.648892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.664 qpair failed and we were unable to recover it. 00:28:46.664 [2024-07-25 10:17:31.649073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.664 [2024-07-25 10:17:31.649096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.664 qpair failed and we were unable to recover it. 00:28:46.664 [2024-07-25 10:17:31.649289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.664 [2024-07-25 10:17:31.649317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.664 qpair failed and we were unable to recover it. 00:28:46.664 [2024-07-25 10:17:31.649453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.664 [2024-07-25 10:17:31.649482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.664 qpair failed and we were unable to recover it. 00:28:46.664 [2024-07-25 10:17:31.649639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.664 [2024-07-25 10:17:31.649668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.664 qpair failed and we were unable to recover it. 00:28:46.664 [2024-07-25 10:17:31.649877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.664 [2024-07-25 10:17:31.649900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.664 qpair failed and we were unable to recover it. 
00:28:46.664 [2024-07-25 10:17:31.650192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.664 [2024-07-25 10:17:31.650242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.664 qpair failed and we were unable to recover it. 00:28:46.664 [2024-07-25 10:17:31.650475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.664 [2024-07-25 10:17:31.650506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.664 qpair failed and we were unable to recover it. 00:28:46.664 [2024-07-25 10:17:31.650784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.664 [2024-07-25 10:17:31.650813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.664 qpair failed and we were unable to recover it. 00:28:46.664 [2024-07-25 10:17:31.651105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.664 [2024-07-25 10:17:31.651129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.664 qpair failed and we were unable to recover it. 00:28:46.664 [2024-07-25 10:17:31.651404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.664 [2024-07-25 10:17:31.651440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.664 qpair failed and we were unable to recover it. 00:28:46.664 [2024-07-25 10:17:31.651697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.664 [2024-07-25 10:17:31.651727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.664 qpair failed and we were unable to recover it. 00:28:46.664 [2024-07-25 10:17:31.651956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.664 [2024-07-25 10:17:31.651985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.664 qpair failed and we were unable to recover it. 00:28:46.664 [2024-07-25 10:17:31.652272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.664 [2024-07-25 10:17:31.652296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.664 qpair failed and we were unable to recover it. 00:28:46.664 [2024-07-25 10:17:31.652501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.664 [2024-07-25 10:17:31.652531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.664 qpair failed and we were unable to recover it. 00:28:46.664 [2024-07-25 10:17:31.652668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.665 [2024-07-25 10:17:31.652700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.665 qpair failed and we were unable to recover it. 
00:28:46.665 [2024-07-25 10:17:31.652945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.665 [2024-07-25 10:17:31.652975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.665 qpair failed and we were unable to recover it. 00:28:46.665 [2024-07-25 10:17:31.653217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.665 [2024-07-25 10:17:31.653241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.665 qpair failed and we were unable to recover it. 00:28:46.665 [2024-07-25 10:17:31.653460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.665 [2024-07-25 10:17:31.653489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.665 qpair failed and we were unable to recover it. 00:28:46.665 [2024-07-25 10:17:31.653684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.665 [2024-07-25 10:17:31.653713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.665 qpair failed and we were unable to recover it. 00:28:46.665 [2024-07-25 10:17:31.653899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.665 [2024-07-25 10:17:31.653936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.665 qpair failed and we were unable to recover it. 00:28:46.665 [2024-07-25 10:17:31.654154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.665 [2024-07-25 10:17:31.654177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.665 qpair failed and we were unable to recover it. 00:28:46.665 [2024-07-25 10:17:31.654347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.665 [2024-07-25 10:17:31.654375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.665 qpair failed and we were unable to recover it. 00:28:46.665 [2024-07-25 10:17:31.654633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.665 [2024-07-25 10:17:31.654659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.665 qpair failed and we were unable to recover it. 00:28:46.665 [2024-07-25 10:17:31.654927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.665 [2024-07-25 10:17:31.654957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.665 qpair failed and we were unable to recover it. 00:28:46.665 [2024-07-25 10:17:31.655205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.665 [2024-07-25 10:17:31.655241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.665 qpair failed and we were unable to recover it. 
00:28:46.665 [2024-07-25 10:17:31.655492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.665 [2024-07-25 10:17:31.655522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.665 qpair failed and we were unable to recover it. 00:28:46.665 [2024-07-25 10:17:31.655756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.665 [2024-07-25 10:17:31.655785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.665 qpair failed and we were unable to recover it. 00:28:46.665 [2024-07-25 10:17:31.656003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.665 [2024-07-25 10:17:31.656032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.665 qpair failed and we were unable to recover it. 00:28:46.665 [2024-07-25 10:17:31.656220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.665 [2024-07-25 10:17:31.656243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.665 qpair failed and we were unable to recover it. 00:28:46.665 [2024-07-25 10:17:31.656530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.665 [2024-07-25 10:17:31.656560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.665 qpair failed and we were unable to recover it. 00:28:46.665 [2024-07-25 10:17:31.656799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.665 [2024-07-25 10:17:31.656828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.665 qpair failed and we were unable to recover it. 00:28:46.665 [2024-07-25 10:17:31.657097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.665 [2024-07-25 10:17:31.657126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.665 qpair failed and we were unable to recover it. 00:28:46.665 [2024-07-25 10:17:31.657392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.665 [2024-07-25 10:17:31.657416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.665 qpair failed and we were unable to recover it. 00:28:46.665 [2024-07-25 10:17:31.657635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.665 [2024-07-25 10:17:31.657665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.665 qpair failed and we were unable to recover it. 00:28:46.665 [2024-07-25 10:17:31.657906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.665 [2024-07-25 10:17:31.657935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.665 qpair failed and we were unable to recover it. 
00:28:46.665 [2024-07-25 10:17:31.658108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.665 [2024-07-25 10:17:31.658137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.665 qpair failed and we were unable to recover it. 00:28:46.665 [2024-07-25 10:17:31.658310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.665 [2024-07-25 10:17:31.658333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.665 qpair failed and we were unable to recover it. 00:28:46.665 [2024-07-25 10:17:31.658520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.665 [2024-07-25 10:17:31.658571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.665 qpair failed and we were unable to recover it. 00:28:46.665 [2024-07-25 10:17:31.658891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.665 [2024-07-25 10:17:31.658920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.665 qpair failed and we were unable to recover it. 00:28:46.665 [2024-07-25 10:17:31.659224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.665 [2024-07-25 10:17:31.659253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.665 qpair failed and we were unable to recover it. 00:28:46.665 [2024-07-25 10:17:31.659456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.665 [2024-07-25 10:17:31.659495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.665 qpair failed and we were unable to recover it. 00:28:46.665 [2024-07-25 10:17:31.659726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.665 [2024-07-25 10:17:31.659756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.665 qpair failed and we were unable to recover it. 00:28:46.665 [2024-07-25 10:17:31.659955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.665 [2024-07-25 10:17:31.659984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.665 qpair failed and we were unable to recover it. 00:28:46.665 [2024-07-25 10:17:31.660193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.665 [2024-07-25 10:17:31.660222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.665 qpair failed and we were unable to recover it. 00:28:46.665 [2024-07-25 10:17:31.660481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.665 [2024-07-25 10:17:31.660505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.665 qpair failed and we were unable to recover it. 
00:28:46.665 [2024-07-25 10:17:31.660723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.665 [2024-07-25 10:17:31.660752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.665 qpair failed and we were unable to recover it. 00:28:46.665 [2024-07-25 10:17:31.660961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.665 [2024-07-25 10:17:31.660991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.665 qpair failed and we were unable to recover it. 00:28:46.665 [2024-07-25 10:17:31.661252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.665 [2024-07-25 10:17:31.661281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.665 qpair failed and we were unable to recover it. 00:28:46.665 [2024-07-25 10:17:31.661508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.665 [2024-07-25 10:17:31.661532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.665 qpair failed and we were unable to recover it. 00:28:46.665 [2024-07-25 10:17:31.661735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.665 [2024-07-25 10:17:31.661785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.665 qpair failed and we were unable to recover it. 00:28:46.665 [2024-07-25 10:17:31.662012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.665 [2024-07-25 10:17:31.662041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.665 qpair failed and we were unable to recover it. 00:28:46.665 [2024-07-25 10:17:31.662322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.665 [2024-07-25 10:17:31.662351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.665 qpair failed and we were unable to recover it. 00:28:46.665 [2024-07-25 10:17:31.662529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.666 [2024-07-25 10:17:31.662554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.666 qpair failed and we were unable to recover it. 00:28:46.666 [2024-07-25 10:17:31.662726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.666 [2024-07-25 10:17:31.662755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.666 qpair failed and we were unable to recover it. 00:28:46.666 [2024-07-25 10:17:31.662927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.666 [2024-07-25 10:17:31.662956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.666 qpair failed and we were unable to recover it. 
00:28:46.666 [2024-07-25 10:17:31.663101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.666 [2024-07-25 10:17:31.663129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.666 qpair failed and we were unable to recover it. 00:28:46.666 [2024-07-25 10:17:31.663321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.666 [2024-07-25 10:17:31.663344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.666 qpair failed and we were unable to recover it. 00:28:46.666 [2024-07-25 10:17:31.663488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.666 [2024-07-25 10:17:31.663530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.666 qpair failed and we were unable to recover it. 00:28:46.666 [2024-07-25 10:17:31.663661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.666 [2024-07-25 10:17:31.663690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.666 qpair failed and we were unable to recover it. 00:28:46.666 [2024-07-25 10:17:31.663848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.666 [2024-07-25 10:17:31.663877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.666 qpair failed and we were unable to recover it. 00:28:46.666 [2024-07-25 10:17:31.664043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.666 [2024-07-25 10:17:31.664066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.666 qpair failed and we were unable to recover it. 00:28:46.666 [2024-07-25 10:17:31.664200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.666 [2024-07-25 10:17:31.664244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.666 qpair failed and we were unable to recover it. 00:28:46.666 [2024-07-25 10:17:31.664404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.666 [2024-07-25 10:17:31.664450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.666 qpair failed and we were unable to recover it. 00:28:46.666 [2024-07-25 10:17:31.664643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.666 [2024-07-25 10:17:31.664672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.666 qpair failed and we were unable to recover it. 00:28:46.666 [2024-07-25 10:17:31.664839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.666 [2024-07-25 10:17:31.664862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.666 qpair failed and we were unable to recover it. 
00:28:46.666 [2024-07-25 10:17:31.664995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.666 [2024-07-25 10:17:31.665038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.666 qpair failed and we were unable to recover it. 00:28:46.666 [2024-07-25 10:17:31.665197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.666 [2024-07-25 10:17:31.665226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.666 qpair failed and we were unable to recover it. 00:28:46.666 [2024-07-25 10:17:31.665394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.666 [2024-07-25 10:17:31.665435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.666 qpair failed and we were unable to recover it. 00:28:46.666 [2024-07-25 10:17:31.665611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.666 [2024-07-25 10:17:31.665635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.666 qpair failed and we were unable to recover it. 00:28:46.666 [2024-07-25 10:17:31.665801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.666 [2024-07-25 10:17:31.665872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.666 qpair failed and we were unable to recover it. 00:28:46.666 [2024-07-25 10:17:31.666063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.666 [2024-07-25 10:17:31.666092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.666 qpair failed and we were unable to recover it. 00:28:46.666 [2024-07-25 10:17:31.666240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.666 [2024-07-25 10:17:31.666269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.666 qpair failed and we were unable to recover it. 00:28:46.666 [2024-07-25 10:17:31.666468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.666 [2024-07-25 10:17:31.666493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.666 qpair failed and we were unable to recover it. 00:28:46.666 [2024-07-25 10:17:31.666671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.666 [2024-07-25 10:17:31.666700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.666 qpair failed and we were unable to recover it. 00:28:46.666 [2024-07-25 10:17:31.666865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.666 [2024-07-25 10:17:31.666894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.666 qpair failed and we were unable to recover it. 
00:28:46.666 [2024-07-25 10:17:31.667025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.666 [2024-07-25 10:17:31.667053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.666 qpair failed and we were unable to recover it. 00:28:46.666 [2024-07-25 10:17:31.667257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.666 [2024-07-25 10:17:31.667280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.666 qpair failed and we were unable to recover it. 00:28:46.666 [2024-07-25 10:17:31.667426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.666 [2024-07-25 10:17:31.667487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.666 qpair failed and we were unable to recover it. 00:28:46.666 [2024-07-25 10:17:31.667645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.666 [2024-07-25 10:17:31.667674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.666 qpair failed and we were unable to recover it. 00:28:46.666 [2024-07-25 10:17:31.667830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.666 [2024-07-25 10:17:31.667859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.666 qpair failed and we were unable to recover it. 00:28:46.666 [2024-07-25 10:17:31.667998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.666 [2024-07-25 10:17:31.668035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.666 qpair failed and we were unable to recover it. 00:28:46.666 [2024-07-25 10:17:31.668212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.666 [2024-07-25 10:17:31.668235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.666 qpair failed and we were unable to recover it. 00:28:46.666 [2024-07-25 10:17:31.668408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.666 [2024-07-25 10:17:31.668453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.666 qpair failed and we were unable to recover it. 00:28:46.666 [2024-07-25 10:17:31.668628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.666 [2024-07-25 10:17:31.668657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.666 qpair failed and we were unable to recover it. 00:28:46.666 [2024-07-25 10:17:31.668850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.666 [2024-07-25 10:17:31.668873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.666 qpair failed and we were unable to recover it. 
00:28:46.666 [2024-07-25 10:17:31.669022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.666 [2024-07-25 10:17:31.669045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.666 qpair failed and we were unable to recover it. 00:28:46.666 [2024-07-25 10:17:31.669237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.666 [2024-07-25 10:17:31.669266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.666 qpair failed and we were unable to recover it. 00:28:46.666 [2024-07-25 10:17:31.669451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.666 [2024-07-25 10:17:31.669495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.666 qpair failed and we were unable to recover it. 00:28:46.666 [2024-07-25 10:17:31.669675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.666 [2024-07-25 10:17:31.669699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.666 qpair failed and we were unable to recover it. 00:28:46.666 [2024-07-25 10:17:31.669916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.666 [2024-07-25 10:17:31.669967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.666 qpair failed and we were unable to recover it. 00:28:46.666 [2024-07-25 10:17:31.670164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.666 [2024-07-25 10:17:31.670193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.667 qpair failed and we were unable to recover it. 00:28:46.667 [2024-07-25 10:17:31.670382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.667 [2024-07-25 10:17:31.670411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.667 qpair failed and we were unable to recover it. 00:28:46.667 [2024-07-25 10:17:31.670619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.667 [2024-07-25 10:17:31.670645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.667 qpair failed and we were unable to recover it. 00:28:46.667 [2024-07-25 10:17:31.670877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.667 [2024-07-25 10:17:31.670928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.667 qpair failed and we were unable to recover it. 00:28:46.667 [2024-07-25 10:17:31.671086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.667 [2024-07-25 10:17:31.671119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.667 qpair failed and we were unable to recover it. 
00:28:46.667 [2024-07-25 10:17:31.671278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.667 [2024-07-25 10:17:31.671306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.667 qpair failed and we were unable to recover it. 00:28:46.667 [2024-07-25 10:17:31.671467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.667 [2024-07-25 10:17:31.671494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.667 qpair failed and we were unable to recover it. 00:28:46.667 [2024-07-25 10:17:31.671632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.667 [2024-07-25 10:17:31.671675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.667 qpair failed and we were unable to recover it. 00:28:46.667 [2024-07-25 10:17:31.671838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.667 [2024-07-25 10:17:31.671867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.667 qpair failed and we were unable to recover it. 00:28:46.667 [2024-07-25 10:17:31.672020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.667 [2024-07-25 10:17:31.672048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.667 qpair failed and we were unable to recover it. 00:28:46.667 [2024-07-25 10:17:31.672196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.667 [2024-07-25 10:17:31.672222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.667 qpair failed and we were unable to recover it. 00:28:46.667 [2024-07-25 10:17:31.672414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.667 [2024-07-25 10:17:31.672450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.667 qpair failed and we were unable to recover it. 00:28:46.667 [2024-07-25 10:17:31.672603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.667 [2024-07-25 10:17:31.672632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.667 qpair failed and we were unable to recover it. 00:28:46.667 [2024-07-25 10:17:31.672767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.667 [2024-07-25 10:17:31.672795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.667 qpair failed and we were unable to recover it. 00:28:46.667 [2024-07-25 10:17:31.672950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.667 [2024-07-25 10:17:31.672991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.667 qpair failed and we were unable to recover it. 
00:28:46.667 [2024-07-25 10:17:31.673112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.667 [2024-07-25 10:17:31.673136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.667 qpair failed and we were unable to recover it. 00:28:46.667 [2024-07-25 10:17:31.673313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.667 [2024-07-25 10:17:31.673341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.667 qpair failed and we were unable to recover it. 00:28:46.667 [2024-07-25 10:17:31.673511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.667 [2024-07-25 10:17:31.673540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.667 qpair failed and we were unable to recover it. 00:28:46.667 [2024-07-25 10:17:31.673736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.667 [2024-07-25 10:17:31.673776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.667 qpair failed and we were unable to recover it. 00:28:46.667 [2024-07-25 10:17:31.673938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.667 [2024-07-25 10:17:31.673967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.667 qpair failed and we were unable to recover it. 00:28:46.667 [2024-07-25 10:17:31.674098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.667 [2024-07-25 10:17:31.674127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.667 qpair failed and we were unable to recover it. 00:28:46.667 [2024-07-25 10:17:31.674312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.667 [2024-07-25 10:17:31.674341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.667 qpair failed and we were unable to recover it. 00:28:46.667 [2024-07-25 10:17:31.674521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.667 [2024-07-25 10:17:31.674547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.667 qpair failed and we were unable to recover it. 00:28:46.667 [2024-07-25 10:17:31.674719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.667 [2024-07-25 10:17:31.674748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.667 qpair failed and we were unable to recover it. 00:28:46.667 [2024-07-25 10:17:31.674877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.667 [2024-07-25 10:17:31.674906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.667 qpair failed and we were unable to recover it. 
00:28:46.667 [2024-07-25 10:17:31.675032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.667 [2024-07-25 10:17:31.675060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.667 qpair failed and we were unable to recover it. 00:28:46.667 [2024-07-25 10:17:31.675218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.667 [2024-07-25 10:17:31.675244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.667 qpair failed and we were unable to recover it. 00:28:46.667 [2024-07-25 10:17:31.675424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.667 [2024-07-25 10:17:31.675462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.667 qpair failed and we were unable to recover it. 00:28:46.667 [2024-07-25 10:17:31.675645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.667 [2024-07-25 10:17:31.675674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.667 qpair failed and we were unable to recover it. 00:28:46.667 [2024-07-25 10:17:31.675827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.667 [2024-07-25 10:17:31.675856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.667 qpair failed and we were unable to recover it. 00:28:46.667 [2024-07-25 10:17:31.676012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.667 [2024-07-25 10:17:31.676037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.667 qpair failed and we were unable to recover it. 00:28:46.667 [2024-07-25 10:17:31.676227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.667 [2024-07-25 10:17:31.676255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.667 qpair failed and we were unable to recover it. 00:28:46.667 [2024-07-25 10:17:31.676452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.667 [2024-07-25 10:17:31.676495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.667 qpair failed and we were unable to recover it. 00:28:46.667 [2024-07-25 10:17:31.676647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.667 [2024-07-25 10:17:31.676673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.667 qpair failed and we were unable to recover it. 00:28:46.667 [2024-07-25 10:17:31.676825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.667 [2024-07-25 10:17:31.676850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.667 qpair failed and we were unable to recover it. 
00:28:46.667 [2024-07-25 10:17:31.677051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.667 [2024-07-25 10:17:31.677113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.667 qpair failed and we were unable to recover it. 00:28:46.667 [2024-07-25 10:17:31.677285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.667 [2024-07-25 10:17:31.677314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.667 qpair failed and we were unable to recover it. 00:28:46.667 [2024-07-25 10:17:31.677538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.667 [2024-07-25 10:17:31.677566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.667 qpair failed and we were unable to recover it. 00:28:46.667 [2024-07-25 10:17:31.677819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.667 [2024-07-25 10:17:31.677844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.668 qpair failed and we were unable to recover it. 00:28:46.668 [2024-07-25 10:17:31.678080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.668 [2024-07-25 10:17:31.678133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.668 qpair failed and we were unable to recover it. 00:28:46.668 [2024-07-25 10:17:31.678375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.668 [2024-07-25 10:17:31.678403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.668 qpair failed and we were unable to recover it. 00:28:46.668 [2024-07-25 10:17:31.678566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.668 [2024-07-25 10:17:31.678595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.668 qpair failed and we were unable to recover it. 00:28:46.668 [2024-07-25 10:17:31.678831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.668 [2024-07-25 10:17:31.678856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.668 qpair failed and we were unable to recover it. 00:28:46.668 [2024-07-25 10:17:31.679096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.668 [2024-07-25 10:17:31.679145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.668 qpair failed and we were unable to recover it. 00:28:46.668 [2024-07-25 10:17:31.679355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.668 [2024-07-25 10:17:31.679384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.668 qpair failed and we were unable to recover it. 
00:28:46.668 [2024-07-25 10:17:31.679567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.668 [2024-07-25 10:17:31.679596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.668 qpair failed and we were unable to recover it. 00:28:46.668 [2024-07-25 10:17:31.679825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.668 [2024-07-25 10:17:31.679851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.668 qpair failed and we were unable to recover it. 00:28:46.668 [2024-07-25 10:17:31.680074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.668 [2024-07-25 10:17:31.680123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.668 qpair failed and we were unable to recover it. 00:28:46.668 [2024-07-25 10:17:31.680346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.668 [2024-07-25 10:17:31.680376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.668 qpair failed and we were unable to recover it. 00:28:46.668 [2024-07-25 10:17:31.680622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.668 [2024-07-25 10:17:31.680651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.668 qpair failed and we were unable to recover it. 00:28:46.668 [2024-07-25 10:17:31.680854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.668 [2024-07-25 10:17:31.680895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.668 qpair failed and we were unable to recover it. 00:28:46.668 [2024-07-25 10:17:31.681060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.668 [2024-07-25 10:17:31.681117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.668 qpair failed and we were unable to recover it. 00:28:46.668 [2024-07-25 10:17:31.681253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.668 [2024-07-25 10:17:31.681282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.668 qpair failed and we were unable to recover it. 00:28:46.668 [2024-07-25 10:17:31.681492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.668 [2024-07-25 10:17:31.681521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.668 qpair failed and we were unable to recover it. 00:28:46.668 [2024-07-25 10:17:31.681778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.668 [2024-07-25 10:17:31.681817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.668 qpair failed and we were unable to recover it. 
00:28:46.668 [2024-07-25 10:17:31.682021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.668 [2024-07-25 10:17:31.682071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.668 qpair failed and we were unable to recover it. 00:28:46.668 [2024-07-25 10:17:31.682197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.668 [2024-07-25 10:17:31.682226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.668 qpair failed and we were unable to recover it. 00:28:46.668 [2024-07-25 10:17:31.682452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.668 [2024-07-25 10:17:31.682481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.668 qpair failed and we were unable to recover it. 00:28:46.668 [2024-07-25 10:17:31.682692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.668 [2024-07-25 10:17:31.682718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.668 qpair failed and we were unable to recover it. 00:28:46.668 [2024-07-25 10:17:31.683013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.668 [2024-07-25 10:17:31.683064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.668 qpair failed and we were unable to recover it. 00:28:46.668 [2024-07-25 10:17:31.683290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.668 [2024-07-25 10:17:31.683319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.668 qpair failed and we were unable to recover it. 00:28:46.668 [2024-07-25 10:17:31.683538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.668 [2024-07-25 10:17:31.683567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.668 qpair failed and we were unable to recover it. 00:28:46.668 [2024-07-25 10:17:31.683746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.668 [2024-07-25 10:17:31.683772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.668 qpair failed and we were unable to recover it. 00:28:46.668 [2024-07-25 10:17:31.684015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.668 [2024-07-25 10:17:31.684067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.668 qpair failed and we were unable to recover it. 00:28:46.668 [2024-07-25 10:17:31.684341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.668 [2024-07-25 10:17:31.684370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.668 qpair failed and we were unable to recover it. 
00:28:46.668 [2024-07-25 10:17:31.684635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.668 [2024-07-25 10:17:31.684665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.668 qpair failed and we were unable to recover it. 00:28:46.668 [2024-07-25 10:17:31.684898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.668 [2024-07-25 10:17:31.684924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.668 qpair failed and we were unable to recover it. 00:28:46.668 [2024-07-25 10:17:31.685193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.668 [2024-07-25 10:17:31.685250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.668 qpair failed and we were unable to recover it. 00:28:46.668 [2024-07-25 10:17:31.685445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.668 [2024-07-25 10:17:31.685489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.668 qpair failed and we were unable to recover it. 00:28:46.668 [2024-07-25 10:17:31.685703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.668 [2024-07-25 10:17:31.685747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.668 qpair failed and we were unable to recover it. 00:28:46.668 [2024-07-25 10:17:31.686005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.668 [2024-07-25 10:17:31.686045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.668 qpair failed and we were unable to recover it. 00:28:46.668 [2024-07-25 10:17:31.686321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.668 [2024-07-25 10:17:31.686371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.668 qpair failed and we were unable to recover it. 00:28:46.668 [2024-07-25 10:17:31.686580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.668 [2024-07-25 10:17:31.686610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.668 qpair failed and we were unable to recover it. 00:28:46.668 [2024-07-25 10:17:31.686881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.668 [2024-07-25 10:17:31.686911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.668 qpair failed and we were unable to recover it. 00:28:46.668 [2024-07-25 10:17:31.687156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.668 [2024-07-25 10:17:31.687182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.668 qpair failed and we were unable to recover it. 
00:28:46.668 [2024-07-25 10:17:31.687445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.668 [2024-07-25 10:17:31.687475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.668 qpair failed and we were unable to recover it. 00:28:46.668 [2024-07-25 10:17:31.687697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.668 [2024-07-25 10:17:31.687726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.668 qpair failed and we were unable to recover it. 00:28:46.669 [2024-07-25 10:17:31.687950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.669 [2024-07-25 10:17:31.687979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.669 qpair failed and we were unable to recover it. 00:28:46.669 [2024-07-25 10:17:31.688158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.669 [2024-07-25 10:17:31.688193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.669 qpair failed and we were unable to recover it. 00:28:46.669 [2024-07-25 10:17:31.688518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.669 [2024-07-25 10:17:31.688548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.669 qpair failed and we were unable to recover it. 00:28:46.669 [2024-07-25 10:17:31.688789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.669 [2024-07-25 10:17:31.688818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.669 qpair failed and we were unable to recover it. 00:28:46.669 [2024-07-25 10:17:31.688978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.669 [2024-07-25 10:17:31.689007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.669 qpair failed and we were unable to recover it. 00:28:46.669 [2024-07-25 10:17:31.689193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.669 [2024-07-25 10:17:31.689219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.669 qpair failed and we were unable to recover it. 00:28:46.669 [2024-07-25 10:17:31.689446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.669 [2024-07-25 10:17:31.689475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.669 qpair failed and we were unable to recover it. 00:28:46.669 [2024-07-25 10:17:31.689739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.669 [2024-07-25 10:17:31.689769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.669 qpair failed and we were unable to recover it. 
00:28:46.669 [2024-07-25 10:17:31.689995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.669 [2024-07-25 10:17:31.690024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.669 qpair failed and we were unable to recover it. 00:28:46.669 [2024-07-25 10:17:31.690232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.669 [2024-07-25 10:17:31.690258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.669 qpair failed and we were unable to recover it. 00:28:46.669 [2024-07-25 10:17:31.690445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.669 [2024-07-25 10:17:31.690474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.669 qpair failed and we were unable to recover it. 00:28:46.669 [2024-07-25 10:17:31.690709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.669 [2024-07-25 10:17:31.690738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.669 qpair failed and we were unable to recover it. 00:28:46.669 [2024-07-25 10:17:31.690973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.669 [2024-07-25 10:17:31.691002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.669 qpair failed and we were unable to recover it. 00:28:46.669 [2024-07-25 10:17:31.691239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.669 [2024-07-25 10:17:31.691279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.669 qpair failed and we were unable to recover it. 00:28:46.669 [2024-07-25 10:17:31.691497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.669 [2024-07-25 10:17:31.691527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.669 qpair failed and we were unable to recover it. 00:28:46.669 [2024-07-25 10:17:31.691758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.669 [2024-07-25 10:17:31.691787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.669 qpair failed and we were unable to recover it. 00:28:46.669 [2024-07-25 10:17:31.692027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.669 [2024-07-25 10:17:31.692055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.669 qpair failed and we were unable to recover it. 00:28:46.669 [2024-07-25 10:17:31.692253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.669 [2024-07-25 10:17:31.692279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.669 qpair failed and we were unable to recover it. 
00:28:46.669 [2024-07-25 10:17:31.692462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.669 [2024-07-25 10:17:31.692514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.669 qpair failed and we were unable to recover it. 00:28:46.669 [2024-07-25 10:17:31.692781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.669 [2024-07-25 10:17:31.692810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.669 qpair failed and we were unable to recover it. 00:28:46.669 [2024-07-25 10:17:31.693036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.669 [2024-07-25 10:17:31.693064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.669 qpair failed and we were unable to recover it. 00:28:46.669 [2024-07-25 10:17:31.693243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.669 [2024-07-25 10:17:31.693283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.669 qpair failed and we were unable to recover it. 00:28:46.669 [2024-07-25 10:17:31.693461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.669 [2024-07-25 10:17:31.693496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.669 qpair failed and we were unable to recover it. 00:28:46.669 [2024-07-25 10:17:31.693701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.669 [2024-07-25 10:17:31.693730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.669 qpair failed and we were unable to recover it. 00:28:46.669 [2024-07-25 10:17:31.693916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.669 [2024-07-25 10:17:31.693944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.669 qpair failed and we were unable to recover it. 00:28:46.669 [2024-07-25 10:17:31.694116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.669 [2024-07-25 10:17:31.694140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.669 qpair failed and we were unable to recover it. 00:28:46.669 [2024-07-25 10:17:31.694332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.669 [2024-07-25 10:17:31.694361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.669 qpair failed and we were unable to recover it. 00:28:46.669 [2024-07-25 10:17:31.694535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.669 [2024-07-25 10:17:31.694565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.669 qpair failed and we were unable to recover it. 
00:28:46.669 [2024-07-25 10:17:31.694739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.669 [2024-07-25 10:17:31.694768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420
00:28:46.669 qpair failed and we were unable to recover it.
00:28:46.669 [2024-07-25 10:17:31.694950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.669 [2024-07-25 10:17:31.694989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420
00:28:46.669 qpair failed and we were unable to recover it.
00:28:46.669 [2024-07-25 10:17:31.695184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.669 [2024-07-25 10:17:31.695236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420
00:28:46.669 qpair failed and we were unable to recover it.
00:28:46.669 [2024-07-25 10:17:31.695467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.669 [2024-07-25 10:17:31.695495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420
00:28:46.669 qpair failed and we were unable to recover it.
00:28:46.669 [2024-07-25 10:17:31.695660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.670 [2024-07-25 10:17:31.695686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420
00:28:46.670 qpair failed and we were unable to recover it.
00:28:46.670 [2024-07-25 10:17:31.695897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.670 [2024-07-25 10:17:31.695924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420
00:28:46.670 qpair failed and we were unable to recover it.
00:28:46.670 [2024-07-25 10:17:31.696107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.670 [2024-07-25 10:17:31.696159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420
00:28:46.670 qpair failed and we were unable to recover it.
00:28:46.670 [2024-07-25 10:17:31.696421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.670 [2024-07-25 10:17:31.696458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420
00:28:46.670 qpair failed and we were unable to recover it.
00:28:46.670 [2024-07-25 10:17:31.696639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.670 [2024-07-25 10:17:31.696673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420
00:28:46.670 qpair failed and we were unable to recover it.
00:28:46.670 [2024-07-25 10:17:31.696931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.670 [2024-07-25 10:17:31.696957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420
00:28:46.670 qpair failed and we were unable to recover it.
00:28:46.670 [2024-07-25 10:17:31.697155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.670 [2024-07-25 10:17:31.697204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420
00:28:46.670 qpair failed and we were unable to recover it.
00:28:46.670 [2024-07-25 10:17:31.697425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.670 [2024-07-25 10:17:31.697462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420
00:28:46.670 qpair failed and we were unable to recover it.
00:28:46.670 [2024-07-25 10:17:31.697659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.670 [2024-07-25 10:17:31.697688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420
00:28:46.670 qpair failed and we were unable to recover it.
00:28:46.670 [2024-07-25 10:17:31.697858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.670 [2024-07-25 10:17:31.697884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420
00:28:46.670 qpair failed and we were unable to recover it.
00:28:46.670 [2024-07-25 10:17:31.698093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.670 [2024-07-25 10:17:31.698147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420
00:28:46.670 qpair failed and we were unable to recover it.
00:28:46.670 [2024-07-25 10:17:31.698352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.670 [2024-07-25 10:17:31.698380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420
00:28:46.670 qpair failed and we were unable to recover it.
00:28:46.670 [2024-07-25 10:17:31.698578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.670 [2024-07-25 10:17:31.698608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420
00:28:46.670 qpair failed and we were unable to recover it.
00:28:46.670 [2024-07-25 10:17:31.698861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.670 [2024-07-25 10:17:31.698904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420
00:28:46.670 qpair failed and we were unable to recover it.
00:28:46.670 [2024-07-25 10:17:31.699191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.670 [2024-07-25 10:17:31.699247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420
00:28:46.670 qpair failed and we were unable to recover it.
00:28:46.670 [2024-07-25 10:17:31.699497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.670 [2024-07-25 10:17:31.699527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420
00:28:46.670 qpair failed and we were unable to recover it.
00:28:46.670 [2024-07-25 10:17:31.699728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.670 [2024-07-25 10:17:31.699757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420
00:28:46.670 qpair failed and we were unable to recover it.
00:28:46.670 [2024-07-25 10:17:31.700119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.670 [2024-07-25 10:17:31.700185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420
00:28:46.670 qpair failed and we were unable to recover it.
00:28:46.670 [2024-07-25 10:17:31.700434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.670 [2024-07-25 10:17:31.700465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420
00:28:46.670 qpair failed and we were unable to recover it.
00:28:46.670 [2024-07-25 10:17:31.700704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.670 [2024-07-25 10:17:31.700732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420
00:28:46.670 qpair failed and we were unable to recover it.
00:28:46.670 [2024-07-25 10:17:31.700972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.670 [2024-07-25 10:17:31.701002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420
00:28:46.670 qpair failed and we were unable to recover it.
00:28:46.670 [2024-07-25 10:17:31.701269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.670 [2024-07-25 10:17:31.701312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420
00:28:46.670 qpair failed and we were unable to recover it.
00:28:46.670 [2024-07-25 10:17:31.701598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.670 [2024-07-25 10:17:31.701628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420
00:28:46.670 qpair failed and we were unable to recover it.
00:28:46.670 [2024-07-25 10:17:31.701822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.670 [2024-07-25 10:17:31.701851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420
00:28:46.670 qpair failed and we were unable to recover it.
00:28:46.670 [2024-07-25 10:17:31.702086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.670 [2024-07-25 10:17:31.702115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420
00:28:46.670 qpair failed and we were unable to recover it.
00:28:46.670 [2024-07-25 10:17:31.702362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.670 [2024-07-25 10:17:31.702388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420
00:28:46.670 qpair failed and we were unable to recover it.
00:28:46.670 [2024-07-25 10:17:31.702596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.670 [2024-07-25 10:17:31.702625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420
00:28:46.670 qpair failed and we were unable to recover it.
00:28:46.670 [2024-07-25 10:17:31.702810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.670 [2024-07-25 10:17:31.702843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420
00:28:46.670 qpair failed and we were unable to recover it.
00:28:46.670 [2024-07-25 10:17:31.703063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.670 [2024-07-25 10:17:31.703093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420
00:28:46.670 qpair failed and we were unable to recover it.
00:28:46.670 [2024-07-25 10:17:31.703345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.670 [2024-07-25 10:17:31.703371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420
00:28:46.670 qpair failed and we were unable to recover it.
00:28:46.670 [2024-07-25 10:17:31.703595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.670 [2024-07-25 10:17:31.703625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420
00:28:46.670 qpair failed and we were unable to recover it.
00:28:46.670 [2024-07-25 10:17:31.703833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.670 [2024-07-25 10:17:31.703880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:46.670 qpair failed and we were unable to recover it.
00:28:46.670 [2024-07-25 10:17:31.704130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.670 [2024-07-25 10:17:31.704162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:46.670 qpair failed and we were unable to recover it.
00:28:46.670 [2024-07-25 10:17:31.704353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.670 [2024-07-25 10:17:31.704446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:46.670 qpair failed and we were unable to recover it.
00:28:46.670 [2024-07-25 10:17:31.704706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.670 [2024-07-25 10:17:31.704733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:46.670 qpair failed and we were unable to recover it.
00:28:46.670 [2024-07-25 10:17:31.704926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.670 [2024-07-25 10:17:31.704969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:46.670 qpair failed and we were unable to recover it.
00:28:46.670 [2024-07-25 10:17:31.705142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.670 [2024-07-25 10:17:31.705171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:46.670 qpair failed and we were unable to recover it.
00:28:46.670 [2024-07-25 10:17:31.705390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.670 [2024-07-25 10:17:31.705417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:46.670 qpair failed and we were unable to recover it.
00:28:46.670 [2024-07-25 10:17:31.705597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.671 [2024-07-25 10:17:31.705625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:46.671 qpair failed and we were unable to recover it.
00:28:46.671 [2024-07-25 10:17:31.705835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.671 [2024-07-25 10:17:31.705864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:46.671 qpair failed and we were unable to recover it.
00:28:46.671 [2024-07-25 10:17:31.706030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.671 [2024-07-25 10:17:31.706059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:46.671 qpair failed and we were unable to recover it.
00:28:46.671 [2024-07-25 10:17:31.706227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.671 [2024-07-25 10:17:31.706253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:46.671 qpair failed and we were unable to recover it.
00:28:46.671 [2024-07-25 10:17:31.706408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.671 [2024-07-25 10:17:31.706443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:46.671 qpair failed and we were unable to recover it.
00:28:46.671 [2024-07-25 10:17:31.706691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.671 [2024-07-25 10:17:31.706730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:46.671 qpair failed and we were unable to recover it.
00:28:46.671 [2024-07-25 10:17:31.706926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.671 [2024-07-25 10:17:31.706962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:46.671 qpair failed and we were unable to recover it.
00:28:46.671 [2024-07-25 10:17:31.707130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.671 [2024-07-25 10:17:31.707156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:46.671 qpair failed and we were unable to recover it.
00:28:46.671 [2024-07-25 10:17:31.707415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.671 [2024-07-25 10:17:31.707462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:46.671 qpair failed and we were unable to recover it.
00:28:46.671 [2024-07-25 10:17:31.707657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.671 [2024-07-25 10:17:31.707683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:46.671 qpair failed and we were unable to recover it.
00:28:46.671 [2024-07-25 10:17:31.707836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.671 [2024-07-25 10:17:31.707865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:46.671 qpair failed and we were unable to recover it.
00:28:46.671 [2024-07-25 10:17:31.708065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.671 [2024-07-25 10:17:31.708090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:46.671 qpair failed and we were unable to recover it.
00:28:46.671 [2024-07-25 10:17:31.708331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.671 [2024-07-25 10:17:31.708385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:46.671 qpair failed and we were unable to recover it.
00:28:46.671 [2024-07-25 10:17:31.708617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.671 [2024-07-25 10:17:31.708644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:46.671 qpair failed and we were unable to recover it.
00:28:46.671 [2024-07-25 10:17:31.708849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.671 [2024-07-25 10:17:31.708878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:46.671 qpair failed and we were unable to recover it.
00:28:46.671 [2024-07-25 10:17:31.709103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.671 [2024-07-25 10:17:31.709129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:46.671 qpair failed and we were unable to recover it.
00:28:46.671 [2024-07-25 10:17:31.709327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.671 [2024-07-25 10:17:31.709357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:46.671 qpair failed and we were unable to recover it.
00:28:46.671 [2024-07-25 10:17:31.709550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.671 [2024-07-25 10:17:31.709578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:46.671 qpair failed and we were unable to recover it.
00:28:46.671 [2024-07-25 10:17:31.709732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.671 [2024-07-25 10:17:31.709766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:46.671 qpair failed and we were unable to recover it.
00:28:46.671 [2024-07-25 10:17:31.709964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.671 [2024-07-25 10:17:31.709988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:46.671 qpair failed and we were unable to recover it.
00:28:46.671 [2024-07-25 10:17:31.710220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.671 [2024-07-25 10:17:31.710281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:46.671 qpair failed and we were unable to recover it.
00:28:46.671 [2024-07-25 10:17:31.710440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.671 [2024-07-25 10:17:31.710484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:46.671 qpair failed and we were unable to recover it.
00:28:46.671 [2024-07-25 10:17:31.710651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.671 [2024-07-25 10:17:31.710678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:46.671 qpair failed and we were unable to recover it.
00:28:46.671 [2024-07-25 10:17:31.710906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.671 [2024-07-25 10:17:31.710933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:46.671 qpair failed and we were unable to recover it.
00:28:46.671 [2024-07-25 10:17:31.711152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.671 [2024-07-25 10:17:31.711212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:46.671 qpair failed and we were unable to recover it.
00:28:46.671 [2024-07-25 10:17:31.711419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.671 [2024-07-25 10:17:31.711458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:46.671 qpair failed and we were unable to recover it.
00:28:46.671 [2024-07-25 10:17:31.711687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.671 [2024-07-25 10:17:31.711713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:46.671 qpair failed and we were unable to recover it.
00:28:46.671 [2024-07-25 10:17:31.711940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.671 [2024-07-25 10:17:31.711966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:46.671 qpair failed and we were unable to recover it.
00:28:46.671 [2024-07-25 10:17:31.712170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.671 [2024-07-25 10:17:31.712198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:46.671 qpair failed and we were unable to recover it.
00:28:46.671 [2024-07-25 10:17:31.712404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.671 [2024-07-25 10:17:31.712438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:46.671 qpair failed and we were unable to recover it.
00:28:46.671 [2024-07-25 10:17:31.712604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.671 [2024-07-25 10:17:31.712634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:46.671 qpair failed and we were unable to recover it.
00:28:46.671 [2024-07-25 10:17:31.712867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.671 [2024-07-25 10:17:31.712893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:46.671 qpair failed and we were unable to recover it.
00:28:46.671 [2024-07-25 10:17:31.713079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.671 [2024-07-25 10:17:31.713135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:46.671 qpair failed and we were unable to recover it.
00:28:46.671 [2024-07-25 10:17:31.713341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.671 [2024-07-25 10:17:31.713371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:46.671 qpair failed and we were unable to recover it.
00:28:46.671 [2024-07-25 10:17:31.713546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.671 [2024-07-25 10:17:31.713576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:46.671 qpair failed and we were unable to recover it.
00:28:46.671 [2024-07-25 10:17:31.713799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.671 [2024-07-25 10:17:31.713825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:46.671 qpair failed and we were unable to recover it.
00:28:46.671 [2024-07-25 10:17:31.713949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.671 [2024-07-25 10:17:31.713990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:46.671 qpair failed and we were unable to recover it.
00:28:46.671 [2024-07-25 10:17:31.714207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.671 [2024-07-25 10:17:31.714236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:46.671 qpair failed and we were unable to recover it.
00:28:46.671 [2024-07-25 10:17:31.714500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.672 [2024-07-25 10:17:31.714529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:46.672 qpair failed and we were unable to recover it.
00:28:46.672 [2024-07-25 10:17:31.714721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.672 [2024-07-25 10:17:31.714761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:46.672 qpair failed and we were unable to recover it.
00:28:46.672 [2024-07-25 10:17:31.714928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.672 [2024-07-25 10:17:31.714992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:46.672 qpair failed and we were unable to recover it.
00:28:46.672 [2024-07-25 10:17:31.715195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.672 [2024-07-25 10:17:31.715224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:46.672 qpair failed and we were unable to recover it.
00:28:46.672 [2024-07-25 10:17:31.715414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.672 [2024-07-25 10:17:31.715449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:46.672 qpair failed and we were unable to recover it.
00:28:46.672 [2024-07-25 10:17:31.715637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.672 [2024-07-25 10:17:31.715662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:46.672 qpair failed and we were unable to recover it.
00:28:46.672 [2024-07-25 10:17:31.715872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.672 [2024-07-25 10:17:31.715935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:46.672 qpair failed and we were unable to recover it.
00:28:46.672 [2024-07-25 10:17:31.716110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.672 [2024-07-25 10:17:31.716138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:46.672 qpair failed and we were unable to recover it.
00:28:46.672 [2024-07-25 10:17:31.716330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.672 [2024-07-25 10:17:31.716363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:46.672 qpair failed and we were unable to recover it.
00:28:46.672 [2024-07-25 10:17:31.716556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.672 [2024-07-25 10:17:31.716581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:46.672 qpair failed and we were unable to recover it.
00:28:46.672 [2024-07-25 10:17:31.716836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.672 [2024-07-25 10:17:31.716892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:46.672 qpair failed and we were unable to recover it.
00:28:46.672 [2024-07-25 10:17:31.717090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.672 [2024-07-25 10:17:31.717119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:46.672 qpair failed and we were unable to recover it.
00:28:46.672 [2024-07-25 10:17:31.717294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.672 [2024-07-25 10:17:31.717322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:46.672 qpair failed and we were unable to recover it.
00:28:46.672 [2024-07-25 10:17:31.717592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.672 [2024-07-25 10:17:31.717620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:46.672 qpair failed and we were unable to recover it.
00:28:46.672 [2024-07-25 10:17:31.717868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.672 [2024-07-25 10:17:31.717897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:46.672 qpair failed and we were unable to recover it.
00:28:46.672 [2024-07-25 10:17:31.718134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.672 [2024-07-25 10:17:31.718164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:46.672 qpair failed and we were unable to recover it.
00:28:46.672 [2024-07-25 10:17:31.718359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.672 [2024-07-25 10:17:31.718388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:46.672 qpair failed and we were unable to recover it.
00:28:46.672 [2024-07-25 10:17:31.718575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.672 [2024-07-25 10:17:31.718601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:46.672 qpair failed and we were unable to recover it.
00:28:46.672 [2024-07-25 10:17:31.718834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.672 [2024-07-25 10:17:31.718892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:46.672 qpair failed and we were unable to recover it.
00:28:46.672 [2024-07-25 10:17:31.719109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.672 [2024-07-25 10:17:31.719138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:46.672 qpair failed and we were unable to recover it.
00:28:46.672 [2024-07-25 10:17:31.719340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.672 [2024-07-25 10:17:31.719370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:46.672 qpair failed and we were unable to recover it.
00:28:46.672 [2024-07-25 10:17:31.719577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.672 [2024-07-25 10:17:31.719604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:46.672 qpair failed and we were unable to recover it.
00:28:46.672 [2024-07-25 10:17:31.719785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.672 [2024-07-25 10:17:31.719861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:46.672 qpair failed and we were unable to recover it.
00:28:46.672 [2024-07-25 10:17:31.720087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.672 [2024-07-25 10:17:31.720126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:46.672 qpair failed and we were unable to recover it.
00:28:46.672 [2024-07-25 10:17:31.720382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.672 [2024-07-25 10:17:31.720411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:46.672 qpair failed and we were unable to recover it.
00:28:46.672 [2024-07-25 10:17:31.720637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.672 [2024-07-25 10:17:31.720664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:46.672 qpair failed and we were unable to recover it.
00:28:46.672 [2024-07-25 10:17:31.720857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.672 [2024-07-25 10:17:31.720917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:46.672 qpair failed and we were unable to recover it.
00:28:46.672 [2024-07-25 10:17:31.721111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.672 [2024-07-25 10:17:31.721140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:46.672 qpair failed and we were unable to recover it.
00:28:46.672 [2024-07-25 10:17:31.721306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.672 [2024-07-25 10:17:31.721335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:46.672 qpair failed and we were unable to recover it.
00:28:46.672 [2024-07-25 10:17:31.721516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.672 [2024-07-25 10:17:31.721543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:46.672 qpair failed and we were unable to recover it.
00:28:46.672 [2024-07-25 10:17:31.721747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.672 [2024-07-25 10:17:31.721802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:46.672 qpair failed and we were unable to recover it.
00:28:46.672 [2024-07-25 10:17:31.722073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.672 [2024-07-25 10:17:31.722102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:46.672 qpair failed and we were unable to recover it.
00:28:46.672 [2024-07-25 10:17:31.722311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.672 [2024-07-25 10:17:31.722340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:46.672 qpair failed and we were unable to recover it.
00:28:46.672 [2024-07-25 10:17:31.722550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.672 [2024-07-25 10:17:31.722577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:46.672 qpair failed and we were unable to recover it.
00:28:46.672 [2024-07-25 10:17:31.722800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.672 [2024-07-25 10:17:31.722849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:46.672 qpair failed and we were unable to recover it.
00:28:46.672 [2024-07-25 10:17:31.723058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.672 [2024-07-25 10:17:31.723089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:46.672 qpair failed and we were unable to recover it.
00:28:46.672 [2024-07-25 10:17:31.723300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.672 [2024-07-25 10:17:31.723329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:46.672 qpair failed and we were unable to recover it.
00:28:46.672 [2024-07-25 10:17:31.723517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.673 [2024-07-25 10:17:31.723544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:46.673 qpair failed and we were unable to recover it.
00:28:46.673 [2024-07-25 10:17:31.723763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.673 [2024-07-25 10:17:31.723825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:46.673 qpair failed and we were unable to recover it.
00:28:46.673 [2024-07-25 10:17:31.724071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.673 [2024-07-25 10:17:31.724100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:46.673 qpair failed and we were unable to recover it.
00:28:46.673 [2024-07-25 10:17:31.724287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.673 [2024-07-25 10:17:31.724326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:46.673 qpair failed and we were unable to recover it.
00:28:46.673 [2024-07-25 10:17:31.724479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.673 [2024-07-25 10:17:31.724506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:46.673 qpair failed and we were unable to recover it.
00:28:46.673 [2024-07-25 10:17:31.724676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.673 [2024-07-25 10:17:31.724719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:46.673 qpair failed and we were unable to recover it.
00:28:46.673 [2024-07-25 10:17:31.724935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.673 [2024-07-25 10:17:31.724963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:46.673 qpair failed and we were unable to recover it.
00:28:46.673 [2024-07-25 10:17:31.725140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.673 [2024-07-25 10:17:31.725169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:46.673 qpair failed and we were unable to recover it.
00:28:46.673 [2024-07-25 10:17:31.725323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.673 [2024-07-25 10:17:31.725350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:46.673 qpair failed and we were unable to recover it.
00:28:46.673 [2024-07-25 10:17:31.725531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.673 [2024-07-25 10:17:31.725561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:46.673 qpair failed and we were unable to recover it.
00:28:46.673 [2024-07-25 10:17:31.725765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.673 [2024-07-25 10:17:31.725795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:46.673 qpair failed and we were unable to recover it.
00:28:46.673 [2024-07-25 10:17:31.725977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.673 [2024-07-25 10:17:31.726010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:46.673 qpair failed and we were unable to recover it.
00:28:46.673 [2024-07-25 10:17:31.726231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.673 [2024-07-25 10:17:31.726257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:46.673 qpair failed and we were unable to recover it.
00:28:46.673 [2024-07-25 10:17:31.726467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.673 [2024-07-25 10:17:31.726496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:46.673 qpair failed and we were unable to recover it.
00:28:46.673 [2024-07-25 10:17:31.726637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.673 [2024-07-25 10:17:31.726667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:46.673 qpair failed and we were unable to recover it.
00:28:46.673 [2024-07-25 10:17:31.726837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.673 [2024-07-25 10:17:31.726867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:46.673 qpair failed and we were unable to recover it.
00:28:46.673 [2024-07-25 10:17:31.727040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.673 [2024-07-25 10:17:31.727065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:46.673 qpair failed and we were unable to recover it.
00:28:46.673 [2024-07-25 10:17:31.727251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.673 [2024-07-25 10:17:31.727280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:46.673 qpair failed and we were unable to recover it.
00:28:46.673 [2024-07-25 10:17:31.727484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.673 [2024-07-25 10:17:31.727511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:46.673 qpair failed and we were unable to recover it.
00:28:46.673 [2024-07-25 10:17:31.727688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.673 [2024-07-25 10:17:31.727730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:46.673 qpair failed and we were unable to recover it.
00:28:46.673 [2024-07-25 10:17:31.727900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.673 [2024-07-25 10:17:31.727925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:46.673 qpair failed and we were unable to recover it.
00:28:46.673 [2024-07-25 10:17:31.728104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.673 [2024-07-25 10:17:31.728162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:46.673 qpair failed and we were unable to recover it.
00:28:46.673 [2024-07-25 10:17:31.728376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.673 [2024-07-25 10:17:31.728405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:46.673 qpair failed and we were unable to recover it.
00:28:46.673 [2024-07-25 10:17:31.728594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.673 [2024-07-25 10:17:31.728623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:46.673 qpair failed and we were unable to recover it.
00:28:46.673 [2024-07-25 10:17:31.728783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.673 [2024-07-25 10:17:31.728809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:46.673 qpair failed and we were unable to recover it.
00:28:46.673 [2024-07-25 10:17:31.729024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.673 [2024-07-25 10:17:31.729073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:46.673 qpair failed and we were unable to recover it.
00:28:46.673 [2024-07-25 10:17:31.729279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.673 [2024-07-25 10:17:31.729308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:46.673 qpair failed and we were unable to recover it.
00:28:46.673 [2024-07-25 10:17:31.729483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.673 [2024-07-25 10:17:31.729513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:46.673 qpair failed and we were unable to recover it.
00:28:46.673 [2024-07-25 10:17:31.729664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.673 [2024-07-25 10:17:31.729690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:46.673 qpair failed and we were unable to recover it.
00:28:46.673 [2024-07-25 10:17:31.729931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.673 [2024-07-25 10:17:31.729992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:46.673 qpair failed and we were unable to recover it.
00:28:46.673 [2024-07-25 10:17:31.730188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.673 [2024-07-25 10:17:31.730217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:46.673 qpair failed and we were unable to recover it.
00:28:46.673 [2024-07-25 10:17:31.730420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.673 [2024-07-25 10:17:31.730454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:46.673 qpair failed and we were unable to recover it.
00:28:46.673 [2024-07-25 10:17:31.730662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.674 [2024-07-25 10:17:31.730688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:46.674 qpair failed and we were unable to recover it.
00:28:46.674 [2024-07-25 10:17:31.730838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.674 [2024-07-25 10:17:31.730899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:46.674 qpair failed and we were unable to recover it.
00:28:46.674 [2024-07-25 10:17:31.731067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.674 [2024-07-25 10:17:31.731096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:46.674 qpair failed and we were unable to recover it.
00:28:46.674 [2024-07-25 10:17:31.731269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.674 [2024-07-25 10:17:31.731298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:46.674 qpair failed and we were unable to recover it.
00:28:46.674 [2024-07-25 10:17:31.731501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.674 [2024-07-25 10:17:31.731529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:46.674 qpair failed and we were unable to recover it.
00:28:46.674 [2024-07-25 10:17:31.731716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.674 [2024-07-25 10:17:31.731746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:46.674 qpair failed and we were unable to recover it.
00:28:46.674 [2024-07-25 10:17:31.731876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.674 [2024-07-25 10:17:31.731905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:46.674 qpair failed and we were unable to recover it.
00:28:46.674 [2024-07-25 10:17:31.732095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.674 [2024-07-25 10:17:31.732135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:46.674 qpair failed and we were unable to recover it.
00:28:46.674 [2024-07-25 10:17:31.732277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.674 [2024-07-25 10:17:31.732301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:46.674 qpair failed and we were unable to recover it.
00:28:46.674 [2024-07-25 10:17:31.732515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.674 [2024-07-25 10:17:31.732545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:46.674 qpair failed and we were unable to recover it.
00:28:46.674 [2024-07-25 10:17:31.732744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.674 [2024-07-25 10:17:31.732773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:46.674 qpair failed and we were unable to recover it.
00:28:46.674 [2024-07-25 10:17:31.732970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.674 [2024-07-25 10:17:31.732999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:46.674 qpair failed and we were unable to recover it.
00:28:46.674 [2024-07-25 10:17:31.733199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.674 [2024-07-25 10:17:31.733225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:46.674 qpair failed and we were unable to recover it.
00:28:46.674 [2024-07-25 10:17:31.733442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.674 [2024-07-25 10:17:31.733472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:46.674 qpair failed and we were unable to recover it.
00:28:46.674 [2024-07-25 10:17:31.733651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.674 [2024-07-25 10:17:31.733679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:46.674 qpair failed and we were unable to recover it.
00:28:46.674 [2024-07-25 10:17:31.733860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.674 [2024-07-25 10:17:31.733889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:46.674 qpair failed and we were unable to recover it.
00:28:46.674 [2024-07-25 10:17:31.734067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.674 [2024-07-25 10:17:31.734094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:46.674 qpair failed and we were unable to recover it.
00:28:46.674 [2024-07-25 10:17:31.734322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.674 [2024-07-25 10:17:31.734350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:46.674 qpair failed and we were unable to recover it.
00:28:46.674 [2024-07-25 10:17:31.734559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.674 [2024-07-25 10:17:31.734588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:46.674 qpair failed and we were unable to recover it.
00:28:46.674 [2024-07-25 10:17:31.734762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.674 [2024-07-25 10:17:31.734796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:46.674 qpair failed and we were unable to recover it.
00:28:46.674 [2024-07-25 10:17:31.735008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.674 [2024-07-25 10:17:31.735033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:46.674 qpair failed and we were unable to recover it.
00:28:46.674 [2024-07-25 10:17:31.735210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.674 [2024-07-25 10:17:31.735269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.674 qpair failed and we were unable to recover it. 00:28:46.674 [2024-07-25 10:17:31.735482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.674 [2024-07-25 10:17:31.735509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.674 qpair failed and we were unable to recover it. 00:28:46.674 [2024-07-25 10:17:31.735707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.674 [2024-07-25 10:17:31.735752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.674 qpair failed and we were unable to recover it. 00:28:46.674 [2024-07-25 10:17:31.735973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.674 [2024-07-25 10:17:31.735999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.674 qpair failed and we were unable to recover it. 00:28:46.674 [2024-07-25 10:17:31.736175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.674 [2024-07-25 10:17:31.736230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.674 qpair failed and we were unable to recover it. 00:28:46.674 [2024-07-25 10:17:31.736403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.674 [2024-07-25 10:17:31.736439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.674 qpair failed and we were unable to recover it. 00:28:46.674 [2024-07-25 10:17:31.736624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.674 [2024-07-25 10:17:31.736652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.674 qpair failed and we were unable to recover it. 00:28:46.674 [2024-07-25 10:17:31.736837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.674 [2024-07-25 10:17:31.736863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.674 qpair failed and we were unable to recover it. 00:28:46.674 [2024-07-25 10:17:31.737041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.674 [2024-07-25 10:17:31.737099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.674 qpair failed and we were unable to recover it. 00:28:46.674 [2024-07-25 10:17:31.737272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.674 [2024-07-25 10:17:31.737301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.674 qpair failed and we were unable to recover it. 
00:28:46.674 [2024-07-25 10:17:31.737474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.674 [2024-07-25 10:17:31.737504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.674 qpair failed and we were unable to recover it. 00:28:46.674 [2024-07-25 10:17:31.737667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.674 [2024-07-25 10:17:31.737694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.674 qpair failed and we were unable to recover it. 00:28:46.674 [2024-07-25 10:17:31.737891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.674 [2024-07-25 10:17:31.737941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.674 qpair failed and we were unable to recover it. 00:28:46.674 [2024-07-25 10:17:31.738116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.674 [2024-07-25 10:17:31.738145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.674 qpair failed and we were unable to recover it. 00:28:46.674 [2024-07-25 10:17:31.738298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.674 [2024-07-25 10:17:31.738327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.674 qpair failed and we were unable to recover it. 00:28:46.674 [2024-07-25 10:17:31.738532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.674 [2024-07-25 10:17:31.738560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.674 qpair failed and we were unable to recover it. 00:28:46.674 [2024-07-25 10:17:31.738767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.674 [2024-07-25 10:17:31.738830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.674 qpair failed and we were unable to recover it. 00:28:46.674 [2024-07-25 10:17:31.739000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.675 [2024-07-25 10:17:31.739028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.675 qpair failed and we were unable to recover it. 00:28:46.675 [2024-07-25 10:17:31.739165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.675 [2024-07-25 10:17:31.739194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.675 qpair failed and we were unable to recover it. 00:28:46.675 [2024-07-25 10:17:31.739387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.675 [2024-07-25 10:17:31.739413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.675 qpair failed and we were unable to recover it. 
00:28:46.675 [2024-07-25 10:17:31.739652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.675 [2024-07-25 10:17:31.739682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.675 qpair failed and we were unable to recover it. 00:28:46.675 [2024-07-25 10:17:31.739873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.675 [2024-07-25 10:17:31.739902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.675 qpair failed and we were unable to recover it. 00:28:46.675 [2024-07-25 10:17:31.740086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.675 [2024-07-25 10:17:31.740116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.675 qpair failed and we were unable to recover it. 00:28:46.675 [2024-07-25 10:17:31.740299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.675 [2024-07-25 10:17:31.740326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.675 qpair failed and we were unable to recover it. 00:28:46.675 [2024-07-25 10:17:31.740483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.675 [2024-07-25 10:17:31.740513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.675 qpair failed and we were unable to recover it. 00:28:46.675 [2024-07-25 10:17:31.740694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.675 [2024-07-25 10:17:31.740724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.675 qpair failed and we were unable to recover it. 00:28:46.675 [2024-07-25 10:17:31.740862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.675 [2024-07-25 10:17:31.740890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.675 qpair failed and we were unable to recover it. 00:28:46.675 [2024-07-25 10:17:31.741091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.675 [2024-07-25 10:17:31.741117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.675 qpair failed and we were unable to recover it. 00:28:46.675 [2024-07-25 10:17:31.741322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.675 [2024-07-25 10:17:31.741351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.675 qpair failed and we were unable to recover it. 00:28:46.675 [2024-07-25 10:17:31.741506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.675 [2024-07-25 10:17:31.741536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.675 qpair failed and we were unable to recover it. 
00:28:46.675 [2024-07-25 10:17:31.741737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.675 [2024-07-25 10:17:31.741779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.675 qpair failed and we were unable to recover it. 00:28:46.675 [2024-07-25 10:17:31.741975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.675 [2024-07-25 10:17:31.742002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.675 qpair failed and we were unable to recover it. 00:28:46.675 [2024-07-25 10:17:31.742157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.675 [2024-07-25 10:17:31.742217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.675 qpair failed and we were unable to recover it. 00:28:46.675 [2024-07-25 10:17:31.742437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.675 [2024-07-25 10:17:31.742465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.675 qpair failed and we were unable to recover it. 00:28:46.675 [2024-07-25 10:17:31.742599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.675 [2024-07-25 10:17:31.742627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.675 qpair failed and we were unable to recover it. 00:28:46.675 [2024-07-25 10:17:31.742861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.675 [2024-07-25 10:17:31.742887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.675 qpair failed and we were unable to recover it. 00:28:46.675 [2024-07-25 10:17:31.743077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.675 [2024-07-25 10:17:31.743138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.675 qpair failed and we were unable to recover it. 00:28:46.675 [2024-07-25 10:17:31.743279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.675 [2024-07-25 10:17:31.743309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.675 qpair failed and we were unable to recover it. 00:28:46.675 [2024-07-25 10:17:31.743504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.675 [2024-07-25 10:17:31.743535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.675 qpair failed and we were unable to recover it. 00:28:46.675 [2024-07-25 10:17:31.743722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.675 [2024-07-25 10:17:31.743746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.675 qpair failed and we were unable to recover it. 
00:28:46.675 [2024-07-25 10:17:31.743984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.675 [2024-07-25 10:17:31.744051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.675 qpair failed and we were unable to recover it. 00:28:46.675 [2024-07-25 10:17:31.744251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.675 [2024-07-25 10:17:31.744281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.675 qpair failed and we were unable to recover it. 00:28:46.675 [2024-07-25 10:17:31.744416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.675 [2024-07-25 10:17:31.744451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.675 qpair failed and we were unable to recover it. 00:28:46.675 [2024-07-25 10:17:31.744634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.675 [2024-07-25 10:17:31.744671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.675 qpair failed and we were unable to recover it. 00:28:46.675 [2024-07-25 10:17:31.744909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.675 [2024-07-25 10:17:31.744960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.675 qpair failed and we were unable to recover it. 00:28:46.675 [2024-07-25 10:17:31.745194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.675 [2024-07-25 10:17:31.745223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.675 qpair failed and we were unable to recover it. 00:28:46.675 [2024-07-25 10:17:31.745434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.675 [2024-07-25 10:17:31.745464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.675 qpair failed and we were unable to recover it. 00:28:46.675 [2024-07-25 10:17:31.745650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.675 [2024-07-25 10:17:31.745677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.675 qpair failed and we were unable to recover it. 00:28:46.675 [2024-07-25 10:17:31.745851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.675 [2024-07-25 10:17:31.745913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.675 qpair failed and we were unable to recover it. 00:28:46.675 [2024-07-25 10:17:31.746087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.675 [2024-07-25 10:17:31.746116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.675 qpair failed and we were unable to recover it. 
00:28:46.675 [2024-07-25 10:17:31.746314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.675 [2024-07-25 10:17:31.746343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.675 qpair failed and we were unable to recover it. 00:28:46.675 [2024-07-25 10:17:31.746540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.675 [2024-07-25 10:17:31.746566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.675 qpair failed and we were unable to recover it. 00:28:46.675 [2024-07-25 10:17:31.746796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.675 [2024-07-25 10:17:31.746853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.675 qpair failed and we were unable to recover it. 00:28:46.675 [2024-07-25 10:17:31.747053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.675 [2024-07-25 10:17:31.747083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.675 qpair failed and we were unable to recover it. 00:28:46.675 [2024-07-25 10:17:31.747258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.675 [2024-07-25 10:17:31.747286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.676 qpair failed and we were unable to recover it. 00:28:46.676 [2024-07-25 10:17:31.747496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.676 [2024-07-25 10:17:31.747522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.676 qpair failed and we were unable to recover it. 00:28:46.676 [2024-07-25 10:17:31.747687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.676 [2024-07-25 10:17:31.747715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.676 qpair failed and we were unable to recover it. 00:28:46.676 [2024-07-25 10:17:31.747873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.676 [2024-07-25 10:17:31.747903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.676 qpair failed and we were unable to recover it. 00:28:46.676 [2024-07-25 10:17:31.748116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.676 [2024-07-25 10:17:31.748145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.676 qpair failed and we were unable to recover it. 00:28:46.676 [2024-07-25 10:17:31.748296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.676 [2024-07-25 10:17:31.748322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.676 qpair failed and we were unable to recover it. 
00:28:46.676 [2024-07-25 10:17:31.748497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.676 [2024-07-25 10:17:31.748525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.676 qpair failed and we were unable to recover it. 00:28:46.676 [2024-07-25 10:17:31.748685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.676 [2024-07-25 10:17:31.748715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.676 qpair failed and we were unable to recover it. 00:28:46.676 [2024-07-25 10:17:31.748859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.676 [2024-07-25 10:17:31.748889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.676 qpair failed and we were unable to recover it. 00:28:46.676 [2024-07-25 10:17:31.749031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.676 [2024-07-25 10:17:31.749056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.676 qpair failed and we were unable to recover it. 00:28:46.676 [2024-07-25 10:17:31.749236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.676 [2024-07-25 10:17:31.749265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.676 qpair failed and we were unable to recover it. 00:28:46.676 [2024-07-25 10:17:31.749401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.676 [2024-07-25 10:17:31.749437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.676 qpair failed and we were unable to recover it. 00:28:46.676 [2024-07-25 10:17:31.749594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.676 [2024-07-25 10:17:31.749624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.676 qpair failed and we were unable to recover it. 00:28:46.676 [2024-07-25 10:17:31.749773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.676 [2024-07-25 10:17:31.749800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.676 qpair failed and we were unable to recover it. 00:28:46.676 [2024-07-25 10:17:31.749976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.676 [2024-07-25 10:17:31.750004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.676 qpair failed and we were unable to recover it. 00:28:46.676 [2024-07-25 10:17:31.750143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.676 [2024-07-25 10:17:31.750171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.676 qpair failed and we were unable to recover it. 
00:28:46.676 [2024-07-25 10:17:31.750379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.676 [2024-07-25 10:17:31.750408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.676 qpair failed and we were unable to recover it. 00:28:46.676 [2024-07-25 10:17:31.750568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.676 [2024-07-25 10:17:31.750594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.676 qpair failed and we were unable to recover it. 00:28:46.676 [2024-07-25 10:17:31.750764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.676 [2024-07-25 10:17:31.750807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.676 qpair failed and we were unable to recover it. 00:28:46.676 [2024-07-25 10:17:31.750978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.676 [2024-07-25 10:17:31.751008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.676 qpair failed and we were unable to recover it. 00:28:46.676 [2024-07-25 10:17:31.751194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.676 [2024-07-25 10:17:31.751223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.676 qpair failed and we were unable to recover it. 00:28:46.676 [2024-07-25 10:17:31.751423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.676 [2024-07-25 10:17:31.751458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.676 qpair failed and we were unable to recover it. 00:28:46.676 [2024-07-25 10:17:31.751669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.676 [2024-07-25 10:17:31.751714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.676 qpair failed and we were unable to recover it. 00:28:46.676 [2024-07-25 10:17:31.751889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.676 [2024-07-25 10:17:31.751918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.676 qpair failed and we were unable to recover it. 00:28:46.676 [2024-07-25 10:17:31.752081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.676 [2024-07-25 10:17:31.752110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.676 qpair failed and we were unable to recover it. 00:28:46.676 [2024-07-25 10:17:31.752327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.676 [2024-07-25 10:17:31.752357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.676 qpair failed and we were unable to recover it. 
00:28:46.676 [2024-07-25 10:17:31.752504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.676 [2024-07-25 10:17:31.752530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.676 qpair failed and we were unable to recover it. 00:28:46.676 [2024-07-25 10:17:31.752677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.676 [2024-07-25 10:17:31.752703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.676 qpair failed and we were unable to recover it. 00:28:46.676 [2024-07-25 10:17:31.752903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.676 [2024-07-25 10:17:31.752932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.676 qpair failed and we were unable to recover it. 00:28:46.676 [2024-07-25 10:17:31.753142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.676 [2024-07-25 10:17:31.753167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.676 qpair failed and we were unable to recover it. 00:28:46.676 [2024-07-25 10:17:31.753329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.676 [2024-07-25 10:17:31.753359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.676 qpair failed and we were unable to recover it. 00:28:46.676 [2024-07-25 10:17:31.753503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.676 [2024-07-25 10:17:31.753531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.676 qpair failed and we were unable to recover it. 00:28:46.676 [2024-07-25 10:17:31.753685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.676 [2024-07-25 10:17:31.753725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.676 qpair failed and we were unable to recover it. 00:28:46.676 [2024-07-25 10:17:31.753938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.676 [2024-07-25 10:17:31.753963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.676 qpair failed and we were unable to recover it. 00:28:46.676 [2024-07-25 10:17:31.754138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.676 [2024-07-25 10:17:31.754187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.676 qpair failed and we were unable to recover it. 00:28:46.676 [2024-07-25 10:17:31.754362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.676 [2024-07-25 10:17:31.754391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.676 qpair failed and we were unable to recover it. 
00:28:46.676 [2024-07-25 10:17:31.754549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.676 [2024-07-25 10:17:31.754579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.676 qpair failed and we were unable to recover it. 00:28:46.676 [2024-07-25 10:17:31.754735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.676 [2024-07-25 10:17:31.754761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.676 qpair failed and we were unable to recover it. 00:28:46.676 [2024-07-25 10:17:31.754941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.677 [2024-07-25 10:17:31.755002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.677 qpair failed and we were unable to recover it. 00:28:46.677 [2024-07-25 10:17:31.755209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.677 [2024-07-25 10:17:31.755238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.677 qpair failed and we were unable to recover it. 00:28:46.677 [2024-07-25 10:17:31.755440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.677 [2024-07-25 10:17:31.755470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.677 qpair failed and we were unable to recover it. 00:28:46.677 [2024-07-25 10:17:31.755652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.677 [2024-07-25 10:17:31.755679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.677 qpair failed and we were unable to recover it. 00:28:46.677 [2024-07-25 10:17:31.755929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.677 [2024-07-25 10:17:31.755980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.677 qpair failed and we were unable to recover it. 00:28:46.677 [2024-07-25 10:17:31.756216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.677 [2024-07-25 10:17:31.756245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.677 qpair failed and we were unable to recover it. 00:28:46.677 [2024-07-25 10:17:31.756426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.677 [2024-07-25 10:17:31.756462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.677 qpair failed and we were unable to recover it. 00:28:46.677 [2024-07-25 10:17:31.756644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.677 [2024-07-25 10:17:31.756670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.677 qpair failed and we were unable to recover it. 
00:28:46.677 [2024-07-25 10:17:31.756861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.677 [2024-07-25 10:17:31.756913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.677 qpair failed and we were unable to recover it. 00:28:46.677 [2024-07-25 10:17:31.757108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.677 [2024-07-25 10:17:31.757137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.677 qpair failed and we were unable to recover it. 00:28:46.677 [2024-07-25 10:17:31.757313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.677 [2024-07-25 10:17:31.757342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.677 qpair failed and we were unable to recover it. 00:28:46.677 [2024-07-25 10:17:31.757499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.677 [2024-07-25 10:17:31.757527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.677 qpair failed and we were unable to recover it. 00:28:46.677 [2024-07-25 10:17:31.757667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.677 [2024-07-25 10:17:31.757710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.677 qpair failed and we were unable to recover it. 00:28:46.677 [2024-07-25 10:17:31.757909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.677 [2024-07-25 10:17:31.757943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.677 qpair failed and we were unable to recover it. 00:28:46.677 [2024-07-25 10:17:31.758143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.677 [2024-07-25 10:17:31.758173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.677 qpair failed and we were unable to recover it. 00:28:46.677 [2024-07-25 10:17:31.758341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.677 [2024-07-25 10:17:31.758366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.677 qpair failed and we were unable to recover it. 00:28:46.677 [2024-07-25 10:17:31.758537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.677 [2024-07-25 10:17:31.758567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.677 qpair failed and we were unable to recover it. 00:28:46.677 [2024-07-25 10:17:31.758709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.677 [2024-07-25 10:17:31.758738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.677 qpair failed and we were unable to recover it. 
00:28:46.677 [2024-07-25 10:17:31.758945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.677 [2024-07-25 10:17:31.758974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.677 qpair failed and we were unable to recover it. 00:28:46.677 [2024-07-25 10:17:31.759162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.677 [2024-07-25 10:17:31.759188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.677 qpair failed and we were unable to recover it. 00:28:46.677 [2024-07-25 10:17:31.759383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.677 [2024-07-25 10:17:31.759412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.677 qpair failed and we were unable to recover it. 00:28:46.677 [2024-07-25 10:17:31.759591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.677 [2024-07-25 10:17:31.759621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.677 qpair failed and we were unable to recover it. 00:28:46.677 [2024-07-25 10:17:31.759786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.677 [2024-07-25 10:17:31.759815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.677 qpair failed and we were unable to recover it. 00:28:46.677 [2024-07-25 10:17:31.760028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.677 [2024-07-25 10:17:31.760054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.677 qpair failed and we were unable to recover it. 00:28:46.677 [2024-07-25 10:17:31.760196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.677 [2024-07-25 10:17:31.760249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.677 qpair failed and we were unable to recover it. 00:28:46.677 [2024-07-25 10:17:31.760424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.677 [2024-07-25 10:17:31.760459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.677 qpair failed and we were unable to recover it. 00:28:46.677 [2024-07-25 10:17:31.760638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.677 [2024-07-25 10:17:31.760667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.677 qpair failed and we were unable to recover it. 00:28:46.677 [2024-07-25 10:17:31.760847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.677 [2024-07-25 10:17:31.760872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.677 qpair failed and we were unable to recover it. 
00:28:46.677 [2024-07-25 10:17:31.761069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.677 [2024-07-25 10:17:31.761097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.677 qpair failed and we were unable to recover it. 00:28:46.677 [2024-07-25 10:17:31.761297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.677 [2024-07-25 10:17:31.761331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.677 qpair failed and we were unable to recover it. 00:28:46.677 [2024-07-25 10:17:31.761521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.677 [2024-07-25 10:17:31.761547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.677 qpair failed and we were unable to recover it. 00:28:46.677 [2024-07-25 10:17:31.761706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.677 [2024-07-25 10:17:31.761755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.677 qpair failed and we were unable to recover it. 00:28:46.677 [2024-07-25 10:17:31.761973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.677 [2024-07-25 10:17:31.761998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.677 qpair failed and we were unable to recover it. 00:28:46.677 [2024-07-25 10:17:31.762153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.677 [2024-07-25 10:17:31.762182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.677 qpair failed and we were unable to recover it. 00:28:46.677 [2024-07-25 10:17:31.762324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.677 [2024-07-25 10:17:31.762353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.677 qpair failed and we were unable to recover it. 00:28:46.677 [2024-07-25 10:17:31.762578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.677 [2024-07-25 10:17:31.762604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.677 qpair failed and we were unable to recover it. 00:28:46.677 [2024-07-25 10:17:31.762743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.677 [2024-07-25 10:17:31.762773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.677 qpair failed and we were unable to recover it. 00:28:46.677 [2024-07-25 10:17:31.762984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.677 [2024-07-25 10:17:31.763013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.677 qpair failed and we were unable to recover it. 
00:28:46.678 [2024-07-25 10:17:31.763225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.678 [2024-07-25 10:17:31.763253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.678 qpair failed and we were unable to recover it. 00:28:46.678 [2024-07-25 10:17:31.763448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.678 [2024-07-25 10:17:31.763493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.678 qpair failed and we were unable to recover it. 00:28:46.678 [2024-07-25 10:17:31.763674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.678 [2024-07-25 10:17:31.763721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.678 qpair failed and we were unable to recover it. 00:28:46.678 [2024-07-25 10:17:31.763900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.678 [2024-07-25 10:17:31.763930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.678 qpair failed and we were unable to recover it. 00:28:46.678 [2024-07-25 10:17:31.764097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.678 [2024-07-25 10:17:31.764127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.678 qpair failed and we were unable to recover it. 00:28:46.678 [2024-07-25 10:17:31.764335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.678 [2024-07-25 10:17:31.764358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.678 qpair failed and we were unable to recover it. 00:28:46.678 [2024-07-25 10:17:31.764541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.678 [2024-07-25 10:17:31.764567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.678 qpair failed and we were unable to recover it. 00:28:46.678 [2024-07-25 10:17:31.764745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.678 [2024-07-25 10:17:31.764774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.678 qpair failed and we were unable to recover it. 00:28:46.678 [2024-07-25 10:17:31.764979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.678 [2024-07-25 10:17:31.765008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.678 qpair failed and we were unable to recover it. 00:28:46.678 [2024-07-25 10:17:31.765181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.678 [2024-07-25 10:17:31.765205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.678 qpair failed and we were unable to recover it. 
00:28:46.678 [2024-07-25 10:17:31.765382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.678 [2024-07-25 10:17:31.765411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.678 qpair failed and we were unable to recover it. 00:28:46.678 [2024-07-25 10:17:31.765570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.678 [2024-07-25 10:17:31.765598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.678 qpair failed and we were unable to recover it. 00:28:46.678 [2024-07-25 10:17:31.765741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.678 [2024-07-25 10:17:31.765771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.678 qpair failed and we were unable to recover it. 00:28:46.678 [2024-07-25 10:17:31.765915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.678 [2024-07-25 10:17:31.765953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.678 qpair failed and we were unable to recover it. 00:28:46.678 [2024-07-25 10:17:31.766140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.678 [2024-07-25 10:17:31.766197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.678 qpair failed and we were unable to recover it. 00:28:46.678 [2024-07-25 10:17:31.766372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.678 [2024-07-25 10:17:31.766404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.678 qpair failed and we were unable to recover it. 00:28:46.678 [2024-07-25 10:17:31.766570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.678 [2024-07-25 10:17:31.766600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.678 qpair failed and we were unable to recover it. 00:28:46.678 [2024-07-25 10:17:31.766760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.678 [2024-07-25 10:17:31.766783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.678 qpair failed and we were unable to recover it. 00:28:46.678 [2024-07-25 10:17:31.766970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.678 [2024-07-25 10:17:31.767027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.678 qpair failed and we were unable to recover it. 00:28:46.678 [2024-07-25 10:17:31.767163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.678 [2024-07-25 10:17:31.767191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.678 qpair failed and we were unable to recover it. 
00:28:46.678 [2024-07-25 10:17:31.767391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.678 [2024-07-25 10:17:31.767420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:46.678 qpair failed and we were unable to recover it.
[... the same three-line error triplet — posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 → nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 → "qpair failed and we were unable to recover it." — repeats for roughly 200 further connection attempts between 10:17:31.767 and 10:17:31.813; only the timestamps vary ...]
00:28:46.961 [2024-07-25 10:17:31.812988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.961 [2024-07-25 10:17:31.813012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:46.961 qpair failed and we were unable to recover it.
00:28:46.961 [2024-07-25 10:17:31.813199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.961 [2024-07-25 10:17:31.813248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.961 qpair failed and we were unable to recover it. 00:28:46.961 [2024-07-25 10:17:31.813417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.961 [2024-07-25 10:17:31.813454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.961 qpair failed and we were unable to recover it. 00:28:46.961 [2024-07-25 10:17:31.813633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.961 [2024-07-25 10:17:31.813663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.961 qpair failed and we were unable to recover it. 00:28:46.961 [2024-07-25 10:17:31.813821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.961 [2024-07-25 10:17:31.813844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.961 qpair failed and we were unable to recover it. 00:28:46.961 [2024-07-25 10:17:31.814069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.961 [2024-07-25 10:17:31.814119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.961 qpair failed and we were unable to recover it. 00:28:46.961 [2024-07-25 10:17:31.814304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.961 [2024-07-25 10:17:31.814333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.961 qpair failed and we were unable to recover it. 00:28:46.961 [2024-07-25 10:17:31.814505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.961 [2024-07-25 10:17:31.814535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.961 qpair failed and we were unable to recover it. 00:28:46.961 [2024-07-25 10:17:31.814726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.961 [2024-07-25 10:17:31.814764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.961 qpair failed and we were unable to recover it. 00:28:46.962 [2024-07-25 10:17:31.814951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.962 [2024-07-25 10:17:31.815002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.962 qpair failed and we were unable to recover it. 00:28:46.962 [2024-07-25 10:17:31.815206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.962 [2024-07-25 10:17:31.815235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.962 qpair failed and we were unable to recover it. 
00:28:46.962 [2024-07-25 10:17:31.815374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.962 [2024-07-25 10:17:31.815402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.962 qpair failed and we were unable to recover it. 00:28:46.962 [2024-07-25 10:17:31.815583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.962 [2024-07-25 10:17:31.815619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.962 qpair failed and we were unable to recover it. 00:28:46.962 [2024-07-25 10:17:31.815825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.962 [2024-07-25 10:17:31.815876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.962 qpair failed and we were unable to recover it. 00:28:46.962 [2024-07-25 10:17:31.816066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.962 [2024-07-25 10:17:31.816095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.962 qpair failed and we were unable to recover it. 00:28:46.962 [2024-07-25 10:17:31.816301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.962 [2024-07-25 10:17:31.816331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.962 qpair failed and we were unable to recover it. 00:28:46.962 [2024-07-25 10:17:31.816517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.962 [2024-07-25 10:17:31.816542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.962 qpair failed and we were unable to recover it. 00:28:46.962 [2024-07-25 10:17:31.816676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.962 [2024-07-25 10:17:31.816716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.962 qpair failed and we were unable to recover it. 00:28:46.962 [2024-07-25 10:17:31.816888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.962 [2024-07-25 10:17:31.816915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.962 qpair failed and we were unable to recover it. 00:28:46.962 [2024-07-25 10:17:31.817126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.962 [2024-07-25 10:17:31.817154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.962 qpair failed and we were unable to recover it. 00:28:46.962 [2024-07-25 10:17:31.817319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.962 [2024-07-25 10:17:31.817347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.962 qpair failed and we were unable to recover it. 
00:28:46.962 [2024-07-25 10:17:31.817537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.962 [2024-07-25 10:17:31.817562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.962 qpair failed and we were unable to recover it. 00:28:46.962 [2024-07-25 10:17:31.817710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.962 [2024-07-25 10:17:31.817733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.962 qpair failed and we were unable to recover it. 00:28:46.962 [2024-07-25 10:17:31.817921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.962 [2024-07-25 10:17:31.817949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.962 qpair failed and we were unable to recover it. 00:28:46.962 [2024-07-25 10:17:31.818121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.962 [2024-07-25 10:17:31.818144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.962 qpair failed and we were unable to recover it. 00:28:46.962 [2024-07-25 10:17:31.818348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.962 [2024-07-25 10:17:31.818377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.962 qpair failed and we were unable to recover it. 00:28:46.962 [2024-07-25 10:17:31.818549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.962 [2024-07-25 10:17:31.818576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.962 qpair failed and we were unable to recover it. 00:28:46.962 [2024-07-25 10:17:31.818745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.962 [2024-07-25 10:17:31.818774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.962 qpair failed and we were unable to recover it. 00:28:46.962 [2024-07-25 10:17:31.818947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.962 [2024-07-25 10:17:31.818970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.962 qpair failed and we were unable to recover it. 00:28:46.962 [2024-07-25 10:17:31.819140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.962 [2024-07-25 10:17:31.819198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.962 qpair failed and we were unable to recover it. 00:28:46.962 [2024-07-25 10:17:31.819333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.962 [2024-07-25 10:17:31.819360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.962 qpair failed and we were unable to recover it. 
00:28:46.962 [2024-07-25 10:17:31.819510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.962 [2024-07-25 10:17:31.819539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.962 qpair failed and we were unable to recover it. 00:28:46.962 [2024-07-25 10:17:31.819687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.962 [2024-07-25 10:17:31.819725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.962 qpair failed and we were unable to recover it. 00:28:46.962 [2024-07-25 10:17:31.819955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.962 [2024-07-25 10:17:31.820007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.962 qpair failed and we were unable to recover it. 00:28:46.962 [2024-07-25 10:17:31.820221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.962 [2024-07-25 10:17:31.820251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.962 qpair failed and we were unable to recover it. 00:28:46.962 [2024-07-25 10:17:31.820453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.962 [2024-07-25 10:17:31.820483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.962 qpair failed and we were unable to recover it. 00:28:46.962 [2024-07-25 10:17:31.820660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.962 [2024-07-25 10:17:31.820685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.962 qpair failed and we were unable to recover it. 00:28:46.962 [2024-07-25 10:17:31.820868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.962 [2024-07-25 10:17:31.820919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.962 qpair failed and we were unable to recover it. 00:28:46.962 [2024-07-25 10:17:31.821090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.962 [2024-07-25 10:17:31.821118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.962 qpair failed and we were unable to recover it. 00:28:46.962 [2024-07-25 10:17:31.821324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.962 [2024-07-25 10:17:31.821353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.962 qpair failed and we were unable to recover it. 00:28:46.962 [2024-07-25 10:17:31.821519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.962 [2024-07-25 10:17:31.821544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.962 qpair failed and we were unable to recover it. 
00:28:46.962 [2024-07-25 10:17:31.821716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.962 [2024-07-25 10:17:31.821745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.962 qpair failed and we were unable to recover it. 00:28:46.962 [2024-07-25 10:17:31.821914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.962 [2024-07-25 10:17:31.821943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.962 qpair failed and we were unable to recover it. 00:28:46.962 [2024-07-25 10:17:31.822119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.962 [2024-07-25 10:17:31.822148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.962 qpair failed and we were unable to recover it. 00:28:46.962 [2024-07-25 10:17:31.822331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.962 [2024-07-25 10:17:31.822354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.962 qpair failed and we were unable to recover it. 00:28:46.962 [2024-07-25 10:17:31.822534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.962 [2024-07-25 10:17:31.822575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.962 qpair failed and we were unable to recover it. 00:28:46.962 [2024-07-25 10:17:31.822720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.962 [2024-07-25 10:17:31.822749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.963 qpair failed and we were unable to recover it. 00:28:46.963 [2024-07-25 10:17:31.822888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.963 [2024-07-25 10:17:31.822916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.963 qpair failed and we were unable to recover it. 00:28:46.963 [2024-07-25 10:17:31.823125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.963 [2024-07-25 10:17:31.823148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.963 qpair failed and we were unable to recover it. 00:28:46.963 [2024-07-25 10:17:31.823326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.963 [2024-07-25 10:17:31.823354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.963 qpair failed and we were unable to recover it. 00:28:46.963 [2024-07-25 10:17:31.823524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.963 [2024-07-25 10:17:31.823554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.963 qpair failed and we were unable to recover it. 
00:28:46.963 [2024-07-25 10:17:31.823696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.963 [2024-07-25 10:17:31.823724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.963 qpair failed and we were unable to recover it. 00:28:46.963 [2024-07-25 10:17:31.823911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.963 [2024-07-25 10:17:31.823934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.963 qpair failed and we were unable to recover it. 00:28:46.963 [2024-07-25 10:17:31.824158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.963 [2024-07-25 10:17:31.824208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.963 qpair failed and we were unable to recover it. 00:28:46.963 [2024-07-25 10:17:31.824356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.963 [2024-07-25 10:17:31.824384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.963 qpair failed and we were unable to recover it. 00:28:46.963 [2024-07-25 10:17:31.824569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.963 [2024-07-25 10:17:31.824599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.963 qpair failed and we were unable to recover it. 00:28:46.963 [2024-07-25 10:17:31.824810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.963 [2024-07-25 10:17:31.824833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.963 qpair failed and we were unable to recover it. 00:28:46.963 [2024-07-25 10:17:31.825029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.963 [2024-07-25 10:17:31.825079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.963 qpair failed and we were unable to recover it. 00:28:46.963 [2024-07-25 10:17:31.825252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.963 [2024-07-25 10:17:31.825282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.963 qpair failed and we were unable to recover it. 00:28:46.963 [2024-07-25 10:17:31.825458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.963 [2024-07-25 10:17:31.825503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.963 qpair failed and we were unable to recover it. 00:28:46.963 [2024-07-25 10:17:31.825687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.963 [2024-07-25 10:17:31.825711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.963 qpair failed and we were unable to recover it. 
00:28:46.963 [2024-07-25 10:17:31.825885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.963 [2024-07-25 10:17:31.825943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.963 qpair failed and we were unable to recover it. 00:28:46.963 [2024-07-25 10:17:31.826148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.963 [2024-07-25 10:17:31.826177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.963 qpair failed and we were unable to recover it. 00:28:46.963 [2024-07-25 10:17:31.826351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.963 [2024-07-25 10:17:31.826380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.963 qpair failed and we were unable to recover it. 00:28:46.963 [2024-07-25 10:17:31.826535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.963 [2024-07-25 10:17:31.826561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.963 qpair failed and we were unable to recover it. 00:28:46.963 [2024-07-25 10:17:31.826749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.963 [2024-07-25 10:17:31.826806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.963 qpair failed and we were unable to recover it. 00:28:46.963 [2024-07-25 10:17:31.827043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.963 [2024-07-25 10:17:31.827075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.963 qpair failed and we were unable to recover it. 00:28:46.963 [2024-07-25 10:17:31.827275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.963 [2024-07-25 10:17:31.827304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.963 qpair failed and we were unable to recover it. 00:28:46.963 [2024-07-25 10:17:31.827528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.963 [2024-07-25 10:17:31.827553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.963 qpair failed and we were unable to recover it. 00:28:46.963 [2024-07-25 10:17:31.827718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.963 [2024-07-25 10:17:31.827759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.963 qpair failed and we were unable to recover it. 00:28:46.963 [2024-07-25 10:17:31.827934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.963 [2024-07-25 10:17:31.827964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.963 qpair failed and we were unable to recover it. 
00:28:46.963 [2024-07-25 10:17:31.828142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.963 [2024-07-25 10:17:31.828170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.963 qpair failed and we were unable to recover it. 00:28:46.963 [2024-07-25 10:17:31.828333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.963 [2024-07-25 10:17:31.828361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.963 qpair failed and we were unable to recover it. 00:28:46.963 [2024-07-25 10:17:31.828502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.963 [2024-07-25 10:17:31.828528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.963 qpair failed and we were unable to recover it. 00:28:46.963 [2024-07-25 10:17:31.828675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.963 [2024-07-25 10:17:31.828714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.963 qpair failed and we were unable to recover it. 00:28:46.963 [2024-07-25 10:17:31.828901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.963 [2024-07-25 10:17:31.828929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.963 qpair failed and we were unable to recover it. 00:28:46.963 [2024-07-25 10:17:31.829122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.963 [2024-07-25 10:17:31.829146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.963 qpair failed and we were unable to recover it. 00:28:46.963 [2024-07-25 10:17:31.829351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.963 [2024-07-25 10:17:31.829380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.963 qpair failed and we were unable to recover it. 00:28:46.963 [2024-07-25 10:17:31.829551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.963 [2024-07-25 10:17:31.829576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.963 qpair failed and we were unable to recover it. 00:28:46.963 [2024-07-25 10:17:31.829809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.963 [2024-07-25 10:17:31.829838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.963 qpair failed and we were unable to recover it. 00:28:46.964 [2024-07-25 10:17:31.830026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.964 [2024-07-25 10:17:31.830050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.964 qpair failed and we were unable to recover it. 
00:28:46.964 [2024-07-25 10:17:31.830225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.964 [2024-07-25 10:17:31.830254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.964 qpair failed and we were unable to recover it. 00:28:46.964 [2024-07-25 10:17:31.830471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.964 [2024-07-25 10:17:31.830501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.964 qpair failed and we were unable to recover it. 00:28:46.964 [2024-07-25 10:17:31.830678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.964 [2024-07-25 10:17:31.830707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.964 qpair failed and we were unable to recover it. 00:28:46.964 [2024-07-25 10:17:31.830879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.964 [2024-07-25 10:17:31.830903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.964 qpair failed and we were unable to recover it. 00:28:46.964 [2024-07-25 10:17:31.831087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.964 [2024-07-25 10:17:31.831116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.964 qpair failed and we were unable to recover it. 00:28:46.964 [2024-07-25 10:17:31.831301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.964 [2024-07-25 10:17:31.831330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.964 qpair failed and we were unable to recover it. 00:28:46.964 [2024-07-25 10:17:31.831521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.964 [2024-07-25 10:17:31.831549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.964 qpair failed and we were unable to recover it. 00:28:46.964 [2024-07-25 10:17:31.831697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.964 [2024-07-25 10:17:31.831736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.964 qpair failed and we were unable to recover it. 00:28:46.964 [2024-07-25 10:17:31.831872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.964 [2024-07-25 10:17:31.831895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.964 qpair failed and we were unable to recover it. 00:28:46.964 [2024-07-25 10:17:31.832065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.964 [2024-07-25 10:17:31.832112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.964 qpair failed and we were unable to recover it. 
00:28:46.964 [2024-07-25 10:17:31.832283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.964 [2024-07-25 10:17:31.832311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.964 qpair failed and we were unable to recover it. 00:28:46.964 [2024-07-25 10:17:31.832511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.964 [2024-07-25 10:17:31.832538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.964 qpair failed and we were unable to recover it. 00:28:46.964 [2024-07-25 10:17:31.832754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.964 [2024-07-25 10:17:31.832784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.964 qpair failed and we were unable to recover it. 00:28:46.964 [2024-07-25 10:17:31.832979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.964 [2024-07-25 10:17:31.833008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.964 qpair failed and we were unable to recover it. 00:28:46.964 [2024-07-25 10:17:31.833188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.964 [2024-07-25 10:17:31.833216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.964 qpair failed and we were unable to recover it. 00:28:46.964 [2024-07-25 10:17:31.833406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.964 [2024-07-25 10:17:31.833461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.964 qpair failed and we were unable to recover it. 00:28:46.964 [2024-07-25 10:17:31.833661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.964 [2024-07-25 10:17:31.833690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.964 qpair failed and we were unable to recover it. 00:28:46.964 [2024-07-25 10:17:31.833857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.964 [2024-07-25 10:17:31.833891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.964 qpair failed and we were unable to recover it. 00:28:46.964 [2024-07-25 10:17:31.834115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.964 [2024-07-25 10:17:31.834144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.964 qpair failed and we were unable to recover it. 00:28:46.964 [2024-07-25 10:17:31.834324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.964 [2024-07-25 10:17:31.834348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.964 qpair failed and we were unable to recover it. 
00:28:46.964 [2024-07-25 10:17:31.834528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.964 [2024-07-25 10:17:31.834559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.964 qpair failed and we were unable to recover it. 00:28:46.964 [2024-07-25 10:17:31.834773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.964 [2024-07-25 10:17:31.834798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.964 qpair failed and we were unable to recover it. 00:28:46.964 [2024-07-25 10:17:31.835042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.964 [2024-07-25 10:17:31.835067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.964 qpair failed and we were unable to recover it. 00:28:46.964 [2024-07-25 10:17:31.835254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.964 [2024-07-25 10:17:31.835284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.964 qpair failed and we were unable to recover it. 00:28:46.964 [2024-07-25 10:17:31.835496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.964 [2024-07-25 10:17:31.835523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.964 qpair failed and we were unable to recover it. 00:28:46.964 [2024-07-25 10:17:31.835664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.964 [2024-07-25 10:17:31.835693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.964 qpair failed and we were unable to recover it. 00:28:46.964 [2024-07-25 10:17:31.835909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.964 [2024-07-25 10:17:31.835938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.964 qpair failed and we were unable to recover it. 00:28:46.964 [2024-07-25 10:17:31.836169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.964 [2024-07-25 10:17:31.836202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.964 qpair failed and we were unable to recover it. 00:28:46.964 [2024-07-25 10:17:31.836421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.964 [2024-07-25 10:17:31.836454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.964 qpair failed and we were unable to recover it. 00:28:46.964 [2024-07-25 10:17:31.836597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.964 [2024-07-25 10:17:31.836623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.964 qpair failed and we were unable to recover it. 
00:28:46.964 [2024-07-25 10:17:31.836826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.964 [2024-07-25 10:17:31.836856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.964 qpair failed and we were unable to recover it. 00:28:46.964 [2024-07-25 10:17:31.837040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.964 [2024-07-25 10:17:31.837065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.964 qpair failed and we were unable to recover it. 00:28:46.964 [2024-07-25 10:17:31.837270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.964 [2024-07-25 10:17:31.837315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.964 qpair failed and we were unable to recover it. 00:28:46.964 [2024-07-25 10:17:31.837494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.964 [2024-07-25 10:17:31.837521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.964 qpair failed and we were unable to recover it. 00:28:46.964 [2024-07-25 10:17:31.837713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.965 [2024-07-25 10:17:31.837737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.965 qpair failed and we were unable to recover it. 00:28:46.965 [2024-07-25 10:17:31.837968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.965 [2024-07-25 10:17:31.837992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.965 qpair failed and we were unable to recover it. 00:28:46.965 [2024-07-25 10:17:31.838211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.965 [2024-07-25 10:17:31.838257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.965 qpair failed and we were unable to recover it. 00:28:46.965 [2024-07-25 10:17:31.838436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.965 [2024-07-25 10:17:31.838480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.965 qpair failed and we were unable to recover it. 00:28:46.965 [2024-07-25 10:17:31.838623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.965 [2024-07-25 10:17:31.838650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.965 qpair failed and we were unable to recover it. 00:28:46.965 [2024-07-25 10:17:31.838837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.965 [2024-07-25 10:17:31.838868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.965 qpair failed and we were unable to recover it. 
00:28:46.965 [2024-07-25 10:17:31.839000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.965 [2024-07-25 10:17:31.839029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.965 qpair failed and we were unable to recover it. 00:28:46.965 [2024-07-25 10:17:31.839218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.965 [2024-07-25 10:17:31.839266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.965 qpair failed and we were unable to recover it. 00:28:46.965 [2024-07-25 10:17:31.839482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.965 [2024-07-25 10:17:31.839508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.965 qpair failed and we were unable to recover it. 00:28:46.965 [2024-07-25 10:17:31.839683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.965 [2024-07-25 10:17:31.839710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.965 qpair failed and we were unable to recover it. 00:28:46.965 [2024-07-25 10:17:31.839859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.965 [2024-07-25 10:17:31.839887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.965 qpair failed and we were unable to recover it. 00:28:46.965 [2024-07-25 10:17:31.840113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.965 [2024-07-25 10:17:31.840156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.965 qpair failed and we were unable to recover it. 00:28:46.965 [2024-07-25 10:17:31.840388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.965 [2024-07-25 10:17:31.840417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.965 qpair failed and we were unable to recover it. 00:28:46.965 [2024-07-25 10:17:31.840627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.965 [2024-07-25 10:17:31.840654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.965 qpair failed and we were unable to recover it. 00:28:46.965 [2024-07-25 10:17:31.840847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.965 [2024-07-25 10:17:31.840876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.965 qpair failed and we were unable to recover it. 00:28:46.965 [2024-07-25 10:17:31.841083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.965 [2024-07-25 10:17:31.841115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.965 qpair failed and we were unable to recover it. 
00:28:46.965 [2024-07-25 10:17:31.841314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.965 [2024-07-25 10:17:31.841342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.965 qpair failed and we were unable to recover it. 00:28:46.965 [2024-07-25 10:17:31.841560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.965 [2024-07-25 10:17:31.841586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.965 qpair failed and we were unable to recover it. 00:28:46.965 [2024-07-25 10:17:31.841780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.965 [2024-07-25 10:17:31.841809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.965 qpair failed and we were unable to recover it. 00:28:46.965 [2024-07-25 10:17:31.842057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.965 [2024-07-25 10:17:31.842084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.965 qpair failed and we were unable to recover it. 00:28:46.965 [2024-07-25 10:17:31.842305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.965 [2024-07-25 10:17:31.842334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.965 qpair failed and we were unable to recover it. 00:28:46.965 [2024-07-25 10:17:31.842546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.965 [2024-07-25 10:17:31.842572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.965 qpair failed and we were unable to recover it. 00:28:46.965 [2024-07-25 10:17:31.842789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.965 [2024-07-25 10:17:31.842819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.965 qpair failed and we were unable to recover it. 00:28:46.965 [2024-07-25 10:17:31.842998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.965 [2024-07-25 10:17:31.843022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.965 qpair failed and we were unable to recover it. 00:28:46.965 [2024-07-25 10:17:31.843242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.965 [2024-07-25 10:17:31.843291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.965 qpair failed and we were unable to recover it. 00:28:46.965 [2024-07-25 10:17:31.843499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.965 [2024-07-25 10:17:31.843526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.965 qpair failed and we were unable to recover it. 
00:28:46.965 [2024-07-25 10:17:31.843739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.965 [2024-07-25 10:17:31.843768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.965 qpair failed and we were unable to recover it. 00:28:46.965 [2024-07-25 10:17:31.843925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.965 [2024-07-25 10:17:31.843949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.965 qpair failed and we were unable to recover it. 00:28:46.965 [2024-07-25 10:17:31.844137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.965 [2024-07-25 10:17:31.844186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.965 qpair failed and we were unable to recover it. 00:28:46.965 [2024-07-25 10:17:31.844383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.965 [2024-07-25 10:17:31.844413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.965 qpair failed and we were unable to recover it. 00:28:46.965 [2024-07-25 10:17:31.844638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.965 [2024-07-25 10:17:31.844665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.965 qpair failed and we were unable to recover it. 00:28:46.965 [2024-07-25 10:17:31.844883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.965 [2024-07-25 10:17:31.844911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.965 qpair failed and we were unable to recover it. 00:28:46.965 [2024-07-25 10:17:31.845075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.965 [2024-07-25 10:17:31.845126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.965 qpair failed and we were unable to recover it. 00:28:46.965 [2024-07-25 10:17:31.845331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.965 [2024-07-25 10:17:31.845359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.965 qpair failed and we were unable to recover it. 00:28:46.965 [2024-07-25 10:17:31.845572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.965 [2024-07-25 10:17:31.845598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.965 qpair failed and we were unable to recover it. 00:28:46.965 [2024-07-25 10:17:31.845792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.965 [2024-07-25 10:17:31.845816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:46.965 qpair failed and we were unable to recover it. 
00:28:46.966 [2024-07-25 10:17:31.847094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:46.966 [2024-07-25 10:17:31.847159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420
00:28:46.966 qpair failed and we were unable to recover it.
00:28:46.971 [... the same triplet for tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 repeats from 10:17:31.847409 through 10:17:31.889425 ...]
00:28:46.971 [2024-07-25 10:17:31.889606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.971 [2024-07-25 10:17:31.889630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.971 qpair failed and we were unable to recover it. 00:28:46.971 [2024-07-25 10:17:31.889815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.971 [2024-07-25 10:17:31.889838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.971 qpair failed and we were unable to recover it. 00:28:46.971 [2024-07-25 10:17:31.890011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.971 [2024-07-25 10:17:31.890039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.971 qpair failed and we were unable to recover it. 00:28:46.971 [2024-07-25 10:17:31.890230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.971 [2024-07-25 10:17:31.890258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.971 qpair failed and we were unable to recover it. 00:28:46.971 [2024-07-25 10:17:31.890490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.971 [2024-07-25 10:17:31.890518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.971 qpair failed and we were unable to recover it. 00:28:46.971 [2024-07-25 10:17:31.890698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.971 [2024-07-25 10:17:31.890735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.971 qpair failed and we were unable to recover it. 00:28:46.971 [2024-07-25 10:17:31.890950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.971 [2024-07-25 10:17:31.890978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.971 qpair failed and we were unable to recover it. 00:28:46.971 [2024-07-25 10:17:31.891153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.971 [2024-07-25 10:17:31.891181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.971 qpair failed and we were unable to recover it. 00:28:46.971 [2024-07-25 10:17:31.891389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.971 [2024-07-25 10:17:31.891417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.971 qpair failed and we were unable to recover it. 00:28:46.971 [2024-07-25 10:17:31.891604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.971 [2024-07-25 10:17:31.891628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.971 qpair failed and we were unable to recover it. 
00:28:46.971 [2024-07-25 10:17:31.891889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.971 [2024-07-25 10:17:31.891917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.971 qpair failed and we were unable to recover it. 00:28:46.971 [2024-07-25 10:17:31.892167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.971 [2024-07-25 10:17:31.892195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.971 qpair failed and we were unable to recover it. 00:28:46.971 [2024-07-25 10:17:31.892380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.971 [2024-07-25 10:17:31.892411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.971 qpair failed and we were unable to recover it. 00:28:46.971 [2024-07-25 10:17:31.892611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.971 [2024-07-25 10:17:31.892635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.971 qpair failed and we were unable to recover it. 00:28:46.971 [2024-07-25 10:17:31.892821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.971 [2024-07-25 10:17:31.892849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.971 qpair failed and we were unable to recover it. 00:28:46.971 [2024-07-25 10:17:31.893030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.971 [2024-07-25 10:17:31.893058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.971 qpair failed and we were unable to recover it. 00:28:46.971 [2024-07-25 10:17:31.893243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.971 [2024-07-25 10:17:31.893280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.971 qpair failed and we were unable to recover it. 00:28:46.971 [2024-07-25 10:17:31.893498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.971 [2024-07-25 10:17:31.893522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.971 qpair failed and we were unable to recover it. 00:28:46.971 [2024-07-25 10:17:31.893694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.971 [2024-07-25 10:17:31.893722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.971 qpair failed and we were unable to recover it. 00:28:46.971 [2024-07-25 10:17:31.893923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.971 [2024-07-25 10:17:31.893951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.971 qpair failed and we were unable to recover it. 
00:28:46.971 [2024-07-25 10:17:31.894252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.971 [2024-07-25 10:17:31.894280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.971 qpair failed and we were unable to recover it. 00:28:46.971 [2024-07-25 10:17:31.894537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.971 [2024-07-25 10:17:31.894562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.971 qpair failed and we were unable to recover it. 00:28:46.971 [2024-07-25 10:17:31.894757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.971 [2024-07-25 10:17:31.894789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.971 qpair failed and we were unable to recover it. 00:28:46.971 [2024-07-25 10:17:31.895007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.971 [2024-07-25 10:17:31.895035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.971 qpair failed and we were unable to recover it. 00:28:46.971 [2024-07-25 10:17:31.895271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.971 [2024-07-25 10:17:31.895299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.971 qpair failed and we were unable to recover it. 00:28:46.971 [2024-07-25 10:17:31.895538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.971 [2024-07-25 10:17:31.895563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.971 qpair failed and we were unable to recover it. 00:28:46.971 [2024-07-25 10:17:31.895737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.971 [2024-07-25 10:17:31.895766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.971 qpair failed and we were unable to recover it. 00:28:46.971 [2024-07-25 10:17:31.896057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.971 [2024-07-25 10:17:31.896104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.971 qpair failed and we were unable to recover it. 00:28:46.971 [2024-07-25 10:17:31.896304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.971 [2024-07-25 10:17:31.896332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.971 qpair failed and we were unable to recover it. 00:28:46.971 [2024-07-25 10:17:31.896506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.971 [2024-07-25 10:17:31.896535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.971 qpair failed and we were unable to recover it. 
00:28:46.971 [2024-07-25 10:17:31.896686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.971 [2024-07-25 10:17:31.896714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.971 qpair failed and we were unable to recover it. 00:28:46.971 [2024-07-25 10:17:31.896895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.971 [2024-07-25 10:17:31.896923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.971 qpair failed and we were unable to recover it. 00:28:46.971 [2024-07-25 10:17:31.897226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.971 [2024-07-25 10:17:31.897254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.971 qpair failed and we were unable to recover it. 00:28:46.971 [2024-07-25 10:17:31.897514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.971 [2024-07-25 10:17:31.897539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.971 qpair failed and we were unable to recover it. 00:28:46.971 [2024-07-25 10:17:31.897723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.972 [2024-07-25 10:17:31.897751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.972 qpair failed and we were unable to recover it. 00:28:46.972 [2024-07-25 10:17:31.898043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.972 [2024-07-25 10:17:31.898071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.972 qpair failed and we were unable to recover it. 00:28:46.972 [2024-07-25 10:17:31.898355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.972 [2024-07-25 10:17:31.898382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.972 qpair failed and we were unable to recover it. 00:28:46.972 [2024-07-25 10:17:31.898564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.972 [2024-07-25 10:17:31.898588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.972 qpair failed and we were unable to recover it. 00:28:46.972 [2024-07-25 10:17:31.898810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.972 [2024-07-25 10:17:31.898838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.972 qpair failed and we were unable to recover it. 00:28:46.972 [2024-07-25 10:17:31.899020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.972 [2024-07-25 10:17:31.899047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.972 qpair failed and we were unable to recover it. 
00:28:46.972 [2024-07-25 10:17:31.899278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.972 [2024-07-25 10:17:31.899306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.972 qpair failed and we were unable to recover it. 00:28:46.972 [2024-07-25 10:17:31.899532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.972 [2024-07-25 10:17:31.899556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.972 qpair failed and we were unable to recover it. 00:28:46.972 [2024-07-25 10:17:31.899713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.972 [2024-07-25 10:17:31.899741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.972 qpair failed and we were unable to recover it. 00:28:46.972 [2024-07-25 10:17:31.899960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.972 [2024-07-25 10:17:31.899988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.972 qpair failed and we were unable to recover it. 00:28:46.972 [2024-07-25 10:17:31.900192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.972 [2024-07-25 10:17:31.900220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.972 qpair failed and we were unable to recover it. 00:28:46.972 [2024-07-25 10:17:31.900485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.972 [2024-07-25 10:17:31.900509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.972 qpair failed and we were unable to recover it. 00:28:46.972 [2024-07-25 10:17:31.900651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.972 [2024-07-25 10:17:31.900675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.972 qpair failed and we were unable to recover it. 00:28:46.972 [2024-07-25 10:17:31.900879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.972 [2024-07-25 10:17:31.900907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.972 qpair failed and we were unable to recover it. 00:28:46.972 [2024-07-25 10:17:31.901098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.972 [2024-07-25 10:17:31.901126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.972 qpair failed and we were unable to recover it. 00:28:46.972 [2024-07-25 10:17:31.901278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.972 [2024-07-25 10:17:31.901305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.972 qpair failed and we were unable to recover it. 
00:28:46.972 [2024-07-25 10:17:31.901459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.972 [2024-07-25 10:17:31.901512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.972 qpair failed and we were unable to recover it. 00:28:46.972 [2024-07-25 10:17:31.901689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.972 [2024-07-25 10:17:31.901717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.972 qpair failed and we were unable to recover it. 00:28:46.972 [2024-07-25 10:17:31.901914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.972 [2024-07-25 10:17:31.901942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.972 qpair failed and we were unable to recover it. 00:28:46.972 [2024-07-25 10:17:31.902115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.972 [2024-07-25 10:17:31.902138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.972 qpair failed and we were unable to recover it. 00:28:46.972 [2024-07-25 10:17:31.902365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.972 [2024-07-25 10:17:31.902393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.972 qpair failed and we were unable to recover it. 00:28:46.972 [2024-07-25 10:17:31.902595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.972 [2024-07-25 10:17:31.902619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.972 qpair failed and we were unable to recover it. 00:28:46.972 [2024-07-25 10:17:31.902817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.972 [2024-07-25 10:17:31.902845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.972 qpair failed and we were unable to recover it. 00:28:46.972 [2024-07-25 10:17:31.902998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.972 [2024-07-25 10:17:31.903021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.972 qpair failed and we were unable to recover it. 00:28:46.972 [2024-07-25 10:17:31.903197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.972 [2024-07-25 10:17:31.903225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.972 qpair failed and we were unable to recover it. 00:28:46.972 [2024-07-25 10:17:31.903348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.972 [2024-07-25 10:17:31.903376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.972 qpair failed and we were unable to recover it. 
00:28:46.972 [2024-07-25 10:17:31.903571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.972 [2024-07-25 10:17:31.903596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.972 qpair failed and we were unable to recover it. 00:28:46.972 [2024-07-25 10:17:31.903766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.972 [2024-07-25 10:17:31.903789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.972 qpair failed and we were unable to recover it. 00:28:46.972 [2024-07-25 10:17:31.904000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.972 [2024-07-25 10:17:31.904028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.972 qpair failed and we were unable to recover it. 00:28:46.972 [2024-07-25 10:17:31.904266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.972 [2024-07-25 10:17:31.904295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.972 qpair failed and we were unable to recover it. 00:28:46.972 [2024-07-25 10:17:31.904457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.972 [2024-07-25 10:17:31.904485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.972 qpair failed and we were unable to recover it. 00:28:46.972 [2024-07-25 10:17:31.904742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.972 [2024-07-25 10:17:31.904765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.972 qpair failed and we were unable to recover it. 00:28:46.972 [2024-07-25 10:17:31.905030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.972 [2024-07-25 10:17:31.905058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.972 qpair failed and we were unable to recover it. 00:28:46.972 [2024-07-25 10:17:31.905316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.972 [2024-07-25 10:17:31.905344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.972 qpair failed and we were unable to recover it. 00:28:46.972 [2024-07-25 10:17:31.905589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.972 [2024-07-25 10:17:31.905617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.972 qpair failed and we were unable to recover it. 00:28:46.972 [2024-07-25 10:17:31.905799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.972 [2024-07-25 10:17:31.905822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.972 qpair failed and we were unable to recover it. 
00:28:46.972 [2024-07-25 10:17:31.906040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.972 [2024-07-25 10:17:31.906068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.972 qpair failed and we were unable to recover it. 00:28:46.972 [2024-07-25 10:17:31.906357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.972 [2024-07-25 10:17:31.906385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.972 qpair failed and we were unable to recover it. 00:28:46.972 [2024-07-25 10:17:31.906623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.973 [2024-07-25 10:17:31.906647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.973 qpair failed and we were unable to recover it. 00:28:46.973 [2024-07-25 10:17:31.906865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.973 [2024-07-25 10:17:31.906888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.973 qpair failed and we were unable to recover it. 00:28:46.973 [2024-07-25 10:17:31.907074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.973 [2024-07-25 10:17:31.907102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.973 qpair failed and we were unable to recover it. 00:28:46.973 [2024-07-25 10:17:31.907269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.973 [2024-07-25 10:17:31.907298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.973 qpair failed and we were unable to recover it. 00:28:46.973 [2024-07-25 10:17:31.907537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.973 [2024-07-25 10:17:31.907565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.973 qpair failed and we were unable to recover it. 00:28:46.973 [2024-07-25 10:17:31.907751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.973 [2024-07-25 10:17:31.907774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.973 qpair failed and we were unable to recover it. 00:28:46.973 [2024-07-25 10:17:31.908023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.973 [2024-07-25 10:17:31.908051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.973 qpair failed and we were unable to recover it. 00:28:46.973 [2024-07-25 10:17:31.908303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.973 [2024-07-25 10:17:31.908332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.973 qpair failed and we were unable to recover it. 
00:28:46.974 [2024-07-25 10:17:31.908498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.974 [2024-07-25 10:17:31.908527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.974 qpair failed and we were unable to recover it. 00:28:46.974 [2024-07-25 10:17:31.908733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.974 [2024-07-25 10:17:31.908756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.974 qpair failed and we were unable to recover it. 00:28:46.974 [2024-07-25 10:17:31.908918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.974 [2024-07-25 10:17:31.908946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.974 qpair failed and we were unable to recover it. 00:28:46.974 [2024-07-25 10:17:31.909124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.974 [2024-07-25 10:17:31.909152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.974 qpair failed and we were unable to recover it. 00:28:46.974 [2024-07-25 10:17:31.909353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.974 [2024-07-25 10:17:31.909381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.974 qpair failed and we were unable to recover it. 00:28:46.974 [2024-07-25 10:17:31.909677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.974 [2024-07-25 10:17:31.909701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.974 qpair failed and we were unable to recover it. 00:28:46.974 [2024-07-25 10:17:31.910016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.974 [2024-07-25 10:17:31.910044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.974 qpair failed and we were unable to recover it. 00:28:46.974 [2024-07-25 10:17:31.910309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.974 [2024-07-25 10:17:31.910337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.974 qpair failed and we were unable to recover it. 00:28:46.974 [2024-07-25 10:17:31.910570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.974 [2024-07-25 10:17:31.910598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.974 qpair failed and we were unable to recover it. 00:28:46.974 [2024-07-25 10:17:31.910834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.974 [2024-07-25 10:17:31.910857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.974 qpair failed and we were unable to recover it. 
00:28:46.974 [2024-07-25 10:17:31.911085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.974 [2024-07-25 10:17:31.911114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.974 qpair failed and we were unable to recover it. 00:28:46.974 [2024-07-25 10:17:31.911272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.974 [2024-07-25 10:17:31.911300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.974 qpair failed and we were unable to recover it. 00:28:46.974 [2024-07-25 10:17:31.911457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.974 [2024-07-25 10:17:31.911486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.974 qpair failed and we were unable to recover it. 00:28:46.974 [2024-07-25 10:17:31.911788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.974 [2024-07-25 10:17:31.911811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.974 qpair failed and we were unable to recover it. 00:28:46.974 [2024-07-25 10:17:31.912047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.974 [2024-07-25 10:17:31.912074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.974 qpair failed and we were unable to recover it. 00:28:46.974 [2024-07-25 10:17:31.912236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.974 [2024-07-25 10:17:31.912265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.974 qpair failed and we were unable to recover it. 00:28:46.974 [2024-07-25 10:17:31.912438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.974 [2024-07-25 10:17:31.912467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.974 qpair failed and we were unable to recover it. 00:28:46.974 [2024-07-25 10:17:31.912654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.974 [2024-07-25 10:17:31.912676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.974 qpair failed and we were unable to recover it. 00:28:46.974 [2024-07-25 10:17:31.912932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.974 [2024-07-25 10:17:31.912960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.974 qpair failed and we were unable to recover it. 00:28:46.974 [2024-07-25 10:17:31.913147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.974 [2024-07-25 10:17:31.913175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.974 qpair failed and we were unable to recover it. 
00:28:46.974 [2024-07-25 10:17:31.913347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.974 [2024-07-25 10:17:31.913375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.974 qpair failed and we were unable to recover it. 00:28:46.974 [2024-07-25 10:17:31.913631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.974 [2024-07-25 10:17:31.913656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.974 qpair failed and we were unable to recover it. 00:28:46.974 [2024-07-25 10:17:31.913888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.974 [2024-07-25 10:17:31.913915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.974 qpair failed and we were unable to recover it. 00:28:46.974 [2024-07-25 10:17:31.914094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.974 [2024-07-25 10:17:31.914122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.974 qpair failed and we were unable to recover it. 00:28:46.974 [2024-07-25 10:17:31.914305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.974 [2024-07-25 10:17:31.914336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.974 qpair failed and we were unable to recover it. 00:28:46.974 [2024-07-25 10:17:31.914608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.974 [2024-07-25 10:17:31.914632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.974 qpair failed and we were unable to recover it. 00:28:46.974 [2024-07-25 10:17:31.914791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.974 [2024-07-25 10:17:31.914819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.974 qpair failed and we were unable to recover it. 00:28:46.974 [2024-07-25 10:17:31.915047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.974 [2024-07-25 10:17:31.915076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.974 qpair failed and we were unable to recover it. 00:28:46.974 [2024-07-25 10:17:31.915296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.974 [2024-07-25 10:17:31.915324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.974 qpair failed and we were unable to recover it. 00:28:46.974 [2024-07-25 10:17:31.915539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.974 [2024-07-25 10:17:31.915563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.974 qpair failed and we were unable to recover it. 
00:28:46.974 [2024-07-25 10:17:31.915774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.974 [2024-07-25 10:17:31.915802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.974 qpair failed and we were unable to recover it. 00:28:46.974 [2024-07-25 10:17:31.916040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.974 [2024-07-25 10:17:31.916069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.974 qpair failed and we were unable to recover it. 00:28:46.974 [2024-07-25 10:17:31.916303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.974 [2024-07-25 10:17:31.916331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.974 qpair failed and we were unable to recover it. 00:28:46.974 [2024-07-25 10:17:31.916585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.974 [2024-07-25 10:17:31.916608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.974 qpair failed and we were unable to recover it. 00:28:46.974 [2024-07-25 10:17:31.916842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.974 [2024-07-25 10:17:31.916870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.974 qpair failed and we were unable to recover it. 00:28:46.975 [2024-07-25 10:17:31.917044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.975 [2024-07-25 10:17:31.917091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.975 qpair failed and we were unable to recover it. 00:28:46.975 [2024-07-25 10:17:31.917226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.975 [2024-07-25 10:17:31.917254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.975 qpair failed and we were unable to recover it. 00:28:46.975 [2024-07-25 10:17:31.917462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.975 [2024-07-25 10:17:31.917488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.975 qpair failed and we were unable to recover it. 00:28:46.975 [2024-07-25 10:17:31.917718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.975 [2024-07-25 10:17:31.917746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.975 qpair failed and we were unable to recover it. 00:28:46.975 [2024-07-25 10:17:31.918003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.975 [2024-07-25 10:17:31.918032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.975 qpair failed and we were unable to recover it. 
00:28:46.975 [2024-07-25 10:17:31.918184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.975 [2024-07-25 10:17:31.918212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.975 qpair failed and we were unable to recover it. 00:28:46.975 [2024-07-25 10:17:31.918442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.975 [2024-07-25 10:17:31.918465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.975 qpair failed and we were unable to recover it. 00:28:46.975 [2024-07-25 10:17:31.918694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.975 [2024-07-25 10:17:31.918722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.975 qpair failed and we were unable to recover it. 00:28:46.975 [2024-07-25 10:17:31.918971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.975 [2024-07-25 10:17:31.919000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.975 qpair failed and we were unable to recover it. 00:28:46.975 [2024-07-25 10:17:31.919165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.975 [2024-07-25 10:17:31.919193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.975 qpair failed and we were unable to recover it. 00:28:46.975 [2024-07-25 10:17:31.919347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.975 [2024-07-25 10:17:31.919370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.975 qpair failed and we were unable to recover it. 00:28:46.975 [2024-07-25 10:17:31.919552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.975 [2024-07-25 10:17:31.919576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.975 qpair failed and we were unable to recover it. 00:28:46.975 [2024-07-25 10:17:31.919744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.975 [2024-07-25 10:17:31.919772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.975 qpair failed and we were unable to recover it. 00:28:46.975 [2024-07-25 10:17:31.919955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.975 [2024-07-25 10:17:31.919983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.975 qpair failed and we were unable to recover it. 00:28:46.975 [2024-07-25 10:17:31.920143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.975 [2024-07-25 10:17:31.920167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.975 qpair failed and we were unable to recover it. 
00:28:46.975 [2024-07-25 10:17:31.920338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.975 [2024-07-25 10:17:31.920366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.975 qpair failed and we were unable to recover it. 00:28:46.975 [2024-07-25 10:17:31.920578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.975 [2024-07-25 10:17:31.920603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.975 qpair failed and we were unable to recover it. 00:28:46.975 [2024-07-25 10:17:31.920826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.975 [2024-07-25 10:17:31.920854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.975 qpair failed and we were unable to recover it. 00:28:46.975 [2024-07-25 10:17:31.921040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.975 [2024-07-25 10:17:31.921068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.975 qpair failed and we were unable to recover it. 00:28:46.975 [2024-07-25 10:17:31.921267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.975 [2024-07-25 10:17:31.921295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.975 qpair failed and we were unable to recover it. 00:28:46.975 [2024-07-25 10:17:31.921501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.975 [2024-07-25 10:17:31.921528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.975 qpair failed and we were unable to recover it. 00:28:46.975 [2024-07-25 10:17:31.921663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.975 [2024-07-25 10:17:31.921688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.975 qpair failed and we were unable to recover it. 00:28:46.975 [2024-07-25 10:17:31.921857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.975 [2024-07-25 10:17:31.921882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.975 qpair failed and we were unable to recover it. 00:28:46.975 [2024-07-25 10:17:31.922066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.975 [2024-07-25 10:17:31.922094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.975 qpair failed and we were unable to recover it. 00:28:46.975 [2024-07-25 10:17:31.922256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.975 [2024-07-25 10:17:31.922284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.975 qpair failed and we were unable to recover it. 
00:28:46.975 [2024-07-25 10:17:31.922443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.975 [2024-07-25 10:17:31.922471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.975 qpair failed and we were unable to recover it. 00:28:46.975 [2024-07-25 10:17:31.922658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.975 [2024-07-25 10:17:31.922684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.975 qpair failed and we were unable to recover it. 00:28:46.975 [2024-07-25 10:17:31.922831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.975 [2024-07-25 10:17:31.922859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.975 qpair failed and we were unable to recover it. 00:28:46.975 [2024-07-25 10:17:31.923070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.975 [2024-07-25 10:17:31.923098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.975 qpair failed and we were unable to recover it. 00:28:46.975 [2024-07-25 10:17:31.923269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.975 [2024-07-25 10:17:31.923300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.975 qpair failed and we were unable to recover it. 00:28:46.975 [2024-07-25 10:17:31.923545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.975 [2024-07-25 10:17:31.923571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.975 qpair failed and we were unable to recover it. 00:28:46.975 [2024-07-25 10:17:31.923728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.975 [2024-07-25 10:17:31.923757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.975 qpair failed and we were unable to recover it. 00:28:46.975 [2024-07-25 10:17:31.923885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.975 [2024-07-25 10:17:31.923913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.975 qpair failed and we were unable to recover it. 00:28:46.975 [2024-07-25 10:17:31.924127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.975 [2024-07-25 10:17:31.924155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.975 qpair failed and we were unable to recover it. 00:28:46.976 [2024-07-25 10:17:31.924398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.976 [2024-07-25 10:17:31.924423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.976 qpair failed and we were unable to recover it. 
00:28:46.976 [2024-07-25 10:17:31.924730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.976 [2024-07-25 10:17:31.924758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.976 qpair failed and we were unable to recover it. 00:28:46.976 [2024-07-25 10:17:31.925056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.976 [2024-07-25 10:17:31.925085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.976 qpair failed and we were unable to recover it. 00:28:46.976 [2024-07-25 10:17:31.925321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.976 [2024-07-25 10:17:31.925349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.976 qpair failed and we were unable to recover it. 00:28:46.976 [2024-07-25 10:17:31.925633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.976 [2024-07-25 10:17:31.925659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.976 qpair failed and we were unable to recover it. 00:28:46.976 [2024-07-25 10:17:31.925923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.976 [2024-07-25 10:17:31.925951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.976 qpair failed and we were unable to recover it. 00:28:46.976 [2024-07-25 10:17:31.926134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.976 [2024-07-25 10:17:31.926162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.976 qpair failed and we were unable to recover it. 00:28:46.976 [2024-07-25 10:17:31.926394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.976 [2024-07-25 10:17:31.926422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.976 qpair failed and we were unable to recover it. 00:28:46.976 [2024-07-25 10:17:31.926697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.976 [2024-07-25 10:17:31.926722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.976 qpair failed and we were unable to recover it. 00:28:46.976 [2024-07-25 10:17:31.926961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.976 [2024-07-25 10:17:31.926998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.976 qpair failed and we were unable to recover it. 00:28:46.976 [2024-07-25 10:17:31.927208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.976 [2024-07-25 10:17:31.927236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.976 qpair failed and we were unable to recover it. 
00:28:46.976 [2024-07-25 10:17:31.927457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.976 [2024-07-25 10:17:31.927500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.976 qpair failed and we were unable to recover it. 00:28:46.976 [2024-07-25 10:17:31.927775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.976 [2024-07-25 10:17:31.927814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.976 qpair failed and we were unable to recover it. 00:28:46.976 [2024-07-25 10:17:31.928027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.976 [2024-07-25 10:17:31.928055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.976 qpair failed and we were unable to recover it. 00:28:46.976 [2024-07-25 10:17:31.928235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.976 [2024-07-25 10:17:31.928264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.976 qpair failed and we were unable to recover it. 00:28:46.976 [2024-07-25 10:17:31.928448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.976 [2024-07-25 10:17:31.928476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.976 qpair failed and we were unable to recover it. 00:28:46.976 [2024-07-25 10:17:31.928645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.976 [2024-07-25 10:17:31.928669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.976 qpair failed and we were unable to recover it. 00:28:46.976 [2024-07-25 10:17:31.928934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.976 [2024-07-25 10:17:31.928962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.976 qpair failed and we were unable to recover it. 00:28:46.976 [2024-07-25 10:17:31.929227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.976 [2024-07-25 10:17:31.929256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.976 qpair failed and we were unable to recover it. 00:28:46.976 [2024-07-25 10:17:31.929463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.976 [2024-07-25 10:17:31.929491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.976 qpair failed and we were unable to recover it. 00:28:46.976 [2024-07-25 10:17:31.929698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.976 [2024-07-25 10:17:31.929741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.976 qpair failed and we were unable to recover it. 
00:28:46.976 [2024-07-25 10:17:31.929961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.976 [2024-07-25 10:17:31.929989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.976 qpair failed and we were unable to recover it. 00:28:46.976 [2024-07-25 10:17:31.930239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.976 [2024-07-25 10:17:31.930271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.976 qpair failed and we were unable to recover it. 00:28:46.976 [2024-07-25 10:17:31.930482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.976 [2024-07-25 10:17:31.930510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.976 qpair failed and we were unable to recover it. 00:28:46.976 [2024-07-25 10:17:31.930678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.976 [2024-07-25 10:17:31.930703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.976 qpair failed and we were unable to recover it. 00:28:46.976 [2024-07-25 10:17:31.930878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.976 [2024-07-25 10:17:31.930905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.976 qpair failed and we were unable to recover it. 00:28:46.976 [2024-07-25 10:17:31.931092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.976 [2024-07-25 10:17:31.931121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.976 qpair failed and we were unable to recover it. 00:28:46.976 [2024-07-25 10:17:31.931321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.976 [2024-07-25 10:17:31.931349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.976 qpair failed and we were unable to recover it. 00:28:46.976 [2024-07-25 10:17:31.931514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.976 [2024-07-25 10:17:31.931540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.976 qpair failed and we were unable to recover it. 00:28:46.976 [2024-07-25 10:17:31.931711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.976 [2024-07-25 10:17:31.931739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.976 qpair failed and we were unable to recover it. 00:28:46.976 [2024-07-25 10:17:31.931960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.976 [2024-07-25 10:17:31.931987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.976 qpair failed and we were unable to recover it. 
00:28:46.976 [2024-07-25 10:17:31.932200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.976 [2024-07-25 10:17:31.932228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.976 qpair failed and we were unable to recover it. 00:28:46.976 [2024-07-25 10:17:31.932518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.976 [2024-07-25 10:17:31.932543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.976 qpair failed and we were unable to recover it. 00:28:46.976 [2024-07-25 10:17:31.932732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.976 [2024-07-25 10:17:31.932760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.976 qpair failed and we were unable to recover it. 00:28:46.976 [2024-07-25 10:17:31.932899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.976 [2024-07-25 10:17:31.932927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.976 qpair failed and we were unable to recover it. 00:28:46.976 [2024-07-25 10:17:31.933122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.976 [2024-07-25 10:17:31.933150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.976 qpair failed and we were unable to recover it. 00:28:46.976 [2024-07-25 10:17:31.933391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.976 [2024-07-25 10:17:31.933417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.976 qpair failed and we were unable to recover it. 00:28:46.976 [2024-07-25 10:17:31.933604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.976 [2024-07-25 10:17:31.933629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.976 qpair failed and we were unable to recover it. 00:28:46.977 [2024-07-25 10:17:31.933805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.977 [2024-07-25 10:17:31.933833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.977 qpair failed and we were unable to recover it. 00:28:46.977 [2024-07-25 10:17:31.934049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.977 [2024-07-25 10:17:31.934077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.977 qpair failed and we were unable to recover it. 00:28:46.977 [2024-07-25 10:17:31.934247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.977 [2024-07-25 10:17:31.934272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.977 qpair failed and we were unable to recover it. 
00:28:46.977 [2024-07-25 10:17:31.934510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.977 [2024-07-25 10:17:31.934539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.977 qpair failed and we were unable to recover it. 00:28:46.977 [2024-07-25 10:17:31.934736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.977 [2024-07-25 10:17:31.934765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.977 qpair failed and we were unable to recover it. 00:28:46.977 [2024-07-25 10:17:31.934939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.977 [2024-07-25 10:17:31.934967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.977 qpair failed and we were unable to recover it. 00:28:46.977 [2024-07-25 10:17:31.935245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.977 [2024-07-25 10:17:31.935271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.977 qpair failed and we were unable to recover it. 00:28:46.977 [2024-07-25 10:17:31.935526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.977 [2024-07-25 10:17:31.935554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.977 qpair failed and we were unable to recover it. 00:28:46.977 [2024-07-25 10:17:31.935835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.977 [2024-07-25 10:17:31.935864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.977 qpair failed and we were unable to recover it. 00:28:46.977 [2024-07-25 10:17:31.936032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.977 [2024-07-25 10:17:31.936060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.977 qpair failed and we were unable to recover it. 00:28:46.977 [2024-07-25 10:17:31.936245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.977 [2024-07-25 10:17:31.936269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.977 qpair failed and we were unable to recover it. 00:28:46.977 [2024-07-25 10:17:31.936498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.977 [2024-07-25 10:17:31.936526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.977 qpair failed and we were unable to recover it. 00:28:46.977 [2024-07-25 10:17:31.936802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.977 [2024-07-25 10:17:31.936830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.977 qpair failed and we were unable to recover it. 
00:28:46.977 [2024-07-25 10:17:31.937069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.977 [2024-07-25 10:17:31.937097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.977 qpair failed and we were unable to recover it. 00:28:46.977 [2024-07-25 10:17:31.937277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.977 [2024-07-25 10:17:31.937302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.977 qpair failed and we were unable to recover it. 00:28:46.977 [2024-07-25 10:17:31.937505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.977 [2024-07-25 10:17:31.937533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.977 qpair failed and we were unable to recover it. 00:28:46.977 [2024-07-25 10:17:31.937737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.977 [2024-07-25 10:17:31.937765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.977 qpair failed and we were unable to recover it. 00:28:46.977 [2024-07-25 10:17:31.937939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.977 [2024-07-25 10:17:31.937976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.977 qpair failed and we were unable to recover it. 00:28:46.977 [2024-07-25 10:17:31.938242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.977 [2024-07-25 10:17:31.938266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.977 qpair failed and we were unable to recover it. 00:28:46.977 [2024-07-25 10:17:31.938497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.977 [2024-07-25 10:17:31.938526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.977 qpair failed and we were unable to recover it. 00:28:46.977 [2024-07-25 10:17:31.938749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.977 [2024-07-25 10:17:31.938778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.977 qpair failed and we were unable to recover it. 00:28:46.977 [2024-07-25 10:17:31.938985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.977 [2024-07-25 10:17:31.939013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.977 qpair failed and we were unable to recover it. 00:28:46.977 [2024-07-25 10:17:31.939265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.977 [2024-07-25 10:17:31.939290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.977 qpair failed and we were unable to recover it. 
00:28:46.977 [2024-07-25 10:17:31.939569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.977 [2024-07-25 10:17:31.939598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.977 qpair failed and we were unable to recover it. 00:28:46.977 [2024-07-25 10:17:31.939829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.977 [2024-07-25 10:17:31.939858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.977 qpair failed and we were unable to recover it. 00:28:46.977 [2024-07-25 10:17:31.940087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.977 [2024-07-25 10:17:31.940121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.977 qpair failed and we were unable to recover it. 00:28:46.977 [2024-07-25 10:17:31.940363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.977 [2024-07-25 10:17:31.940387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.977 qpair failed and we were unable to recover it. 00:28:46.977 [2024-07-25 10:17:31.940658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.977 [2024-07-25 10:17:31.940683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.977 qpair failed and we were unable to recover it. 00:28:46.977 [2024-07-25 10:17:31.940870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.977 [2024-07-25 10:17:31.940899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.977 qpair failed and we were unable to recover it. 00:28:46.977 [2024-07-25 10:17:31.941065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.977 [2024-07-25 10:17:31.941093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.977 qpair failed and we were unable to recover it. 00:28:46.977 [2024-07-25 10:17:31.941332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.977 [2024-07-25 10:17:31.941357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.977 qpair failed and we were unable to recover it. 00:28:46.977 [2024-07-25 10:17:31.941671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.977 [2024-07-25 10:17:31.941699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.977 qpair failed and we were unable to recover it. 00:28:46.977 [2024-07-25 10:17:31.941990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.977 [2024-07-25 10:17:31.942018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.977 qpair failed and we were unable to recover it. 
00:28:46.977 [2024-07-25 10:17:31.942169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.977 [2024-07-25 10:17:31.942196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.977 qpair failed and we were unable to recover it. 00:28:46.977 [2024-07-25 10:17:31.942383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.978 [2024-07-25 10:17:31.942407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.978 qpair failed and we were unable to recover it. 00:28:46.978 [2024-07-25 10:17:31.942585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.978 [2024-07-25 10:17:31.942610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.978 qpair failed and we were unable to recover it. 00:28:46.978 [2024-07-25 10:17:31.942780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.978 [2024-07-25 10:17:31.942808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.978 qpair failed and we were unable to recover it. 00:28:46.978 [2024-07-25 10:17:31.943016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.978 [2024-07-25 10:17:31.943043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.978 qpair failed and we were unable to recover it. 00:28:46.978 [2024-07-25 10:17:31.943291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.978 [2024-07-25 10:17:31.943316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.978 qpair failed and we were unable to recover it. 00:28:46.978 [2024-07-25 10:17:31.943496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.978 [2024-07-25 10:17:31.943524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.978 qpair failed and we were unable to recover it. 00:28:46.978 [2024-07-25 10:17:31.943738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.978 [2024-07-25 10:17:31.943766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.978 qpair failed and we were unable to recover it. 00:28:46.978 [2024-07-25 10:17:31.943942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.978 [2024-07-25 10:17:31.943970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.978 qpair failed and we were unable to recover it. 00:28:46.978 [2024-07-25 10:17:31.944233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.978 [2024-07-25 10:17:31.944258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.978 qpair failed and we were unable to recover it. 
00:28:46.978 [2024-07-25 10:17:31.944523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.978 [2024-07-25 10:17:31.944552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.978 qpair failed and we were unable to recover it. 00:28:46.978 [2024-07-25 10:17:31.944823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.978 [2024-07-25 10:17:31.944852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.978 qpair failed and we were unable to recover it. 00:28:46.978 [2024-07-25 10:17:31.945060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.978 [2024-07-25 10:17:31.945089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.978 qpair failed and we were unable to recover it. 00:28:46.978 [2024-07-25 10:17:31.945316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.978 [2024-07-25 10:17:31.945341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.978 qpair failed and we were unable to recover it. 00:28:46.978 [2024-07-25 10:17:31.945540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.978 [2024-07-25 10:17:31.945568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.978 qpair failed and we were unable to recover it. 00:28:46.978 [2024-07-25 10:17:31.945753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.978 [2024-07-25 10:17:31.945781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.978 qpair failed and we were unable to recover it. 00:28:46.978 [2024-07-25 10:17:31.945968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.978 [2024-07-25 10:17:31.945996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.978 qpair failed and we were unable to recover it. 00:28:46.978 [2024-07-25 10:17:31.946154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.978 [2024-07-25 10:17:31.946179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.978 qpair failed and we were unable to recover it. 00:28:46.978 [2024-07-25 10:17:31.946359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.978 [2024-07-25 10:17:31.946395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.978 qpair failed and we were unable to recover it. 00:28:46.978 [2024-07-25 10:17:31.946577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.978 [2024-07-25 10:17:31.946607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.978 qpair failed and we were unable to recover it. 
00:28:46.978 [2024-07-25 10:17:31.946854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.978 [2024-07-25 10:17:31.946882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.978 qpair failed and we were unable to recover it. 00:28:46.978 [2024-07-25 10:17:31.947025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.978 [2024-07-25 10:17:31.947050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.978 qpair failed and we were unable to recover it. 00:28:46.978 [2024-07-25 10:17:31.947215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.978 [2024-07-25 10:17:31.947257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.978 qpair failed and we were unable to recover it. 00:28:46.978 [2024-07-25 10:17:31.947424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.978 [2024-07-25 10:17:31.947461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.978 qpair failed and we were unable to recover it. 00:28:46.978 [2024-07-25 10:17:31.947635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.978 [2024-07-25 10:17:31.947663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.978 qpair failed and we were unable to recover it. 00:28:46.978 [2024-07-25 10:17:31.947835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.978 [2024-07-25 10:17:31.947859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.978 qpair failed and we were unable to recover it. 00:28:46.978 [2024-07-25 10:17:31.948046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.978 [2024-07-25 10:17:31.948074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.978 qpair failed and we were unable to recover it. 00:28:46.978 [2024-07-25 10:17:31.948286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.978 [2024-07-25 10:17:31.948315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.978 qpair failed and we were unable to recover it. 00:28:46.978 [2024-07-25 10:17:31.948477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.978 [2024-07-25 10:17:31.948506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.978 qpair failed and we were unable to recover it. 00:28:46.978 [2024-07-25 10:17:31.948676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.978 [2024-07-25 10:17:31.948701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.978 qpair failed and we were unable to recover it. 
00:28:46.978 [2024-07-25 10:17:31.948880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.978 [2024-07-25 10:17:31.948908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.978 qpair failed and we were unable to recover it. 00:28:46.978 [2024-07-25 10:17:31.949134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.978 [2024-07-25 10:17:31.949162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.978 qpair failed and we were unable to recover it. 00:28:46.978 [2024-07-25 10:17:31.949404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.978 [2024-07-25 10:17:31.949444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.978 qpair failed and we were unable to recover it. 00:28:46.978 [2024-07-25 10:17:31.949645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.979 [2024-07-25 10:17:31.949670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.979 qpair failed and we were unable to recover it. 00:28:46.979 [2024-07-25 10:17:31.949830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.979 [2024-07-25 10:17:31.949858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.979 qpair failed and we were unable to recover it. 00:28:46.979 [2024-07-25 10:17:31.950045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.979 [2024-07-25 10:17:31.950074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.979 qpair failed and we were unable to recover it. 00:28:46.979 [2024-07-25 10:17:31.950244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.979 [2024-07-25 10:17:31.950272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.979 qpair failed and we were unable to recover it. 00:28:46.979 [2024-07-25 10:17:31.950423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.979 [2024-07-25 10:17:31.950455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.979 qpair failed and we were unable to recover it. 00:28:46.979 [2024-07-25 10:17:31.950585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.979 [2024-07-25 10:17:31.950627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.979 qpair failed and we were unable to recover it. 00:28:46.979 [2024-07-25 10:17:31.950814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.979 [2024-07-25 10:17:31.950842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.979 qpair failed and we were unable to recover it. 
00:28:46.979 [2024-07-25 10:17:31.951030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.979 [2024-07-25 10:17:31.951057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.979 qpair failed and we were unable to recover it. 00:28:46.979 [2024-07-25 10:17:31.951308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.979 [2024-07-25 10:17:31.951333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.979 qpair failed and we were unable to recover it. 00:28:46.979 [2024-07-25 10:17:31.951525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.979 [2024-07-25 10:17:31.951553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.979 qpair failed and we were unable to recover it. 00:28:46.979 [2024-07-25 10:17:31.951717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.979 [2024-07-25 10:17:31.951746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.979 qpair failed and we were unable to recover it. 00:28:46.979 [2024-07-25 10:17:31.951945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.979 [2024-07-25 10:17:31.951973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.979 qpair failed and we were unable to recover it. 00:28:46.979 [2024-07-25 10:17:31.952187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.979 [2024-07-25 10:17:31.952212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.979 qpair failed and we were unable to recover it. 00:28:46.979 [2024-07-25 10:17:31.952500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.979 [2024-07-25 10:17:31.952533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.979 qpair failed and we were unable to recover it. 00:28:46.979 [2024-07-25 10:17:31.952825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.979 [2024-07-25 10:17:31.952853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.979 qpair failed and we were unable to recover it. 00:28:46.979 [2024-07-25 10:17:31.953181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.979 [2024-07-25 10:17:31.953209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.979 qpair failed and we were unable to recover it. 00:28:46.979 [2024-07-25 10:17:31.953420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.979 [2024-07-25 10:17:31.953452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.979 qpair failed and we were unable to recover it. 
00:28:46.979 [2024-07-25 10:17:31.953634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.979 [2024-07-25 10:17:31.953661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.979 qpair failed and we were unable to recover it. 00:28:46.979 [2024-07-25 10:17:31.953847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.979 [2024-07-25 10:17:31.953876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.979 qpair failed and we were unable to recover it. 00:28:46.979 [2024-07-25 10:17:31.954037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.979 [2024-07-25 10:17:31.954065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.979 qpair failed and we were unable to recover it. 00:28:46.979 [2024-07-25 10:17:31.954248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.979 [2024-07-25 10:17:31.954274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.979 qpair failed and we were unable to recover it. 00:28:46.979 [2024-07-25 10:17:31.954447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.979 [2024-07-25 10:17:31.954476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.979 qpair failed and we were unable to recover it. 00:28:46.979 [2024-07-25 10:17:31.954612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.979 [2024-07-25 10:17:31.954640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.979 qpair failed and we were unable to recover it. 00:28:46.979 [2024-07-25 10:17:31.954851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.979 [2024-07-25 10:17:31.954878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.979 qpair failed and we were unable to recover it. 00:28:46.979 [2024-07-25 10:17:31.955021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.979 [2024-07-25 10:17:31.955046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.979 qpair failed and we were unable to recover it. 00:28:46.979 [2024-07-25 10:17:31.955260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.979 [2024-07-25 10:17:31.955287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.979 qpair failed and we were unable to recover it. 00:28:46.979 [2024-07-25 10:17:31.955600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.979 [2024-07-25 10:17:31.955628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.979 qpair failed and we were unable to recover it. 
00:28:46.979 [2024-07-25 10:17:31.955944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.979 [2024-07-25 10:17:31.955972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.979 qpair failed and we were unable to recover it. 00:28:46.979 [2024-07-25 10:17:31.956171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.979 [2024-07-25 10:17:31.956196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.979 qpair failed and we were unable to recover it. 00:28:46.979 [2024-07-25 10:17:31.956353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.979 [2024-07-25 10:17:31.956381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.979 qpair failed and we were unable to recover it. 00:28:46.979 [2024-07-25 10:17:31.956594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.979 [2024-07-25 10:17:31.956621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.979 qpair failed and we were unable to recover it. 00:28:46.979 [2024-07-25 10:17:31.956793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.979 [2024-07-25 10:17:31.956820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.979 qpair failed and we were unable to recover it. 00:28:46.979 [2024-07-25 10:17:31.957019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.979 [2024-07-25 10:17:31.957044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.979 qpair failed and we were unable to recover it. 00:28:46.979 [2024-07-25 10:17:31.957311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.979 [2024-07-25 10:17:31.957339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.979 qpair failed and we were unable to recover it. 00:28:46.979 [2024-07-25 10:17:31.957647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.979 [2024-07-25 10:17:31.957676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.980 qpair failed and we were unable to recover it. 00:28:46.980 [2024-07-25 10:17:31.957932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.980 [2024-07-25 10:17:31.957960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.980 qpair failed and we were unable to recover it. 00:28:46.980 [2024-07-25 10:17:31.958164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.980 [2024-07-25 10:17:31.958189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.980 qpair failed and we were unable to recover it. 
00:28:46.980 [2024-07-25 10:17:31.958361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.980 [2024-07-25 10:17:31.958389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.980 qpair failed and we were unable to recover it. 00:28:46.980 [2024-07-25 10:17:31.958616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.980 [2024-07-25 10:17:31.958642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.980 qpair failed and we were unable to recover it. 00:28:46.980 [2024-07-25 10:17:31.958842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.980 [2024-07-25 10:17:31.958870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.980 qpair failed and we were unable to recover it. 00:28:46.980 [2024-07-25 10:17:31.959073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.980 [2024-07-25 10:17:31.959097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.980 qpair failed and we were unable to recover it. 00:28:46.980 [2024-07-25 10:17:31.959246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.980 [2024-07-25 10:17:31.959273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.980 qpair failed and we were unable to recover it. 00:28:46.980 [2024-07-25 10:17:31.959454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.980 [2024-07-25 10:17:31.959498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.980 qpair failed and we were unable to recover it. 00:28:46.980 [2024-07-25 10:17:31.959730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.980 [2024-07-25 10:17:31.959774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.980 qpair failed and we were unable to recover it. 00:28:46.980 [2024-07-25 10:17:31.960007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.980 [2024-07-25 10:17:31.960032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.980 qpair failed and we were unable to recover it. 00:28:46.980 [2024-07-25 10:17:31.960283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.980 [2024-07-25 10:17:31.960310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.980 qpair failed and we were unable to recover it. 00:28:46.980 [2024-07-25 10:17:31.960552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.980 [2024-07-25 10:17:31.960578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.980 qpair failed and we were unable to recover it. 
00:28:46.980 [2024-07-25 10:17:31.960807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.980 [2024-07-25 10:17:31.960834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.980 qpair failed and we were unable to recover it. 00:28:46.980 [2024-07-25 10:17:31.961123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.980 [2024-07-25 10:17:31.961147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.980 qpair failed and we were unable to recover it. 00:28:46.980 [2024-07-25 10:17:31.961333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.980 [2024-07-25 10:17:31.961360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.980 qpair failed and we were unable to recover it. 00:28:46.980 [2024-07-25 10:17:31.961605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.980 [2024-07-25 10:17:31.961630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.980 qpair failed and we were unable to recover it. 00:28:46.980 [2024-07-25 10:17:31.961864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.980 [2024-07-25 10:17:31.961893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.980 qpair failed and we were unable to recover it. 00:28:46.980 [2024-07-25 10:17:31.962068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.980 [2024-07-25 10:17:31.962092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.980 qpair failed and we were unable to recover it. 00:28:46.980 [2024-07-25 10:17:31.962324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.980 [2024-07-25 10:17:31.962351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.980 qpair failed and we were unable to recover it. 00:28:46.980 [2024-07-25 10:17:31.962667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.980 [2024-07-25 10:17:31.962693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.980 qpair failed and we were unable to recover it. 00:28:46.980 [2024-07-25 10:17:31.962919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.980 [2024-07-25 10:17:31.962947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.980 qpair failed and we were unable to recover it. 00:28:46.980 [2024-07-25 10:17:31.963210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.980 [2024-07-25 10:17:31.963250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.980 qpair failed and we were unable to recover it. 
00:28:46.985 [2024-07-25 10:17:32.005723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.985 [2024-07-25 10:17:32.005751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.985 qpair failed and we were unable to recover it. 00:28:46.985 [2024-07-25 10:17:32.005938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.985 [2024-07-25 10:17:32.005962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.985 qpair failed and we were unable to recover it. 00:28:46.985 [2024-07-25 10:17:32.006163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.985 [2024-07-25 10:17:32.006192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.985 qpair failed and we were unable to recover it. 00:28:46.985 [2024-07-25 10:17:32.006409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.985 [2024-07-25 10:17:32.006447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.985 qpair failed and we were unable to recover it. 00:28:46.985 [2024-07-25 10:17:32.006584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.985 [2024-07-25 10:17:32.006612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.985 qpair failed and we were unable to recover it. 00:28:46.985 [2024-07-25 10:17:32.006816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.985 [2024-07-25 10:17:32.006854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.985 qpair failed and we were unable to recover it. 00:28:46.985 [2024-07-25 10:17:32.007068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.985 [2024-07-25 10:17:32.007096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.985 qpair failed and we were unable to recover it. 00:28:46.985 [2024-07-25 10:17:32.007234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.986 [2024-07-25 10:17:32.007263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.986 qpair failed and we were unable to recover it. 00:28:46.986 [2024-07-25 10:17:32.007452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.986 [2024-07-25 10:17:32.007482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.986 qpair failed and we were unable to recover it. 00:28:46.986 [2024-07-25 10:17:32.007656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.986 [2024-07-25 10:17:32.007681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.986 qpair failed and we were unable to recover it. 
00:28:46.986 [2024-07-25 10:17:32.007880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.986 [2024-07-25 10:17:32.007908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.986 qpair failed and we were unable to recover it. 00:28:46.986 [2024-07-25 10:17:32.008133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.986 [2024-07-25 10:17:32.008162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.986 qpair failed and we were unable to recover it. 00:28:46.986 [2024-07-25 10:17:32.008337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.986 [2024-07-25 10:17:32.008365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.986 qpair failed and we were unable to recover it. 00:28:46.986 [2024-07-25 10:17:32.008522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.986 [2024-07-25 10:17:32.008548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.986 qpair failed and we were unable to recover it. 00:28:46.986 [2024-07-25 10:17:32.008727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.986 [2024-07-25 10:17:32.008751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.986 qpair failed and we were unable to recover it. 00:28:46.986 [2024-07-25 10:17:32.008937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.986 [2024-07-25 10:17:32.008965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.986 qpair failed and we were unable to recover it. 00:28:46.986 [2024-07-25 10:17:32.009175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.986 [2024-07-25 10:17:32.009203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.986 qpair failed and we were unable to recover it. 00:28:46.986 [2024-07-25 10:17:32.009371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.986 [2024-07-25 10:17:32.009393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.986 qpair failed and we were unable to recover it. 00:28:46.986 [2024-07-25 10:17:32.009601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.986 [2024-07-25 10:17:32.009627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.986 qpair failed and we were unable to recover it. 00:28:46.986 [2024-07-25 10:17:32.009864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.986 [2024-07-25 10:17:32.009893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.986 qpair failed and we were unable to recover it. 
00:28:46.986 [2024-07-25 10:17:32.010025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.986 [2024-07-25 10:17:32.010053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.986 qpair failed and we were unable to recover it. 00:28:46.986 [2024-07-25 10:17:32.010215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.986 [2024-07-25 10:17:32.010262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.986 qpair failed and we were unable to recover it. 00:28:46.986 [2024-07-25 10:17:32.010442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.986 [2024-07-25 10:17:32.010485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.986 qpair failed and we were unable to recover it. 00:28:46.986 [2024-07-25 10:17:32.010708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.986 [2024-07-25 10:17:32.010732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.986 qpair failed and we were unable to recover it. 00:28:46.986 [2024-07-25 10:17:32.010955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.986 [2024-07-25 10:17:32.010983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.986 qpair failed and we were unable to recover it. 00:28:46.986 [2024-07-25 10:17:32.011191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.986 [2024-07-25 10:17:32.011214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.986 qpair failed and we were unable to recover it. 00:28:46.986 [2024-07-25 10:17:32.011451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.986 [2024-07-25 10:17:32.011480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.986 qpair failed and we were unable to recover it. 00:28:46.986 [2024-07-25 10:17:32.011629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.986 [2024-07-25 10:17:32.011658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.986 qpair failed and we were unable to recover it. 00:28:46.986 [2024-07-25 10:17:32.011830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.986 [2024-07-25 10:17:32.011859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.986 qpair failed and we were unable to recover it. 00:28:46.986 [2024-07-25 10:17:32.012067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.986 [2024-07-25 10:17:32.012090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.986 qpair failed and we were unable to recover it. 
00:28:46.986 [2024-07-25 10:17:32.012305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.986 [2024-07-25 10:17:32.012333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.986 qpair failed and we were unable to recover it. 00:28:46.986 [2024-07-25 10:17:32.012467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.986 [2024-07-25 10:17:32.012496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.986 qpair failed and we were unable to recover it. 00:28:46.986 [2024-07-25 10:17:32.012683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.986 [2024-07-25 10:17:32.012711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.986 qpair failed and we were unable to recover it. 00:28:46.986 [2024-07-25 10:17:32.012883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.986 [2024-07-25 10:17:32.012906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.986 qpair failed and we were unable to recover it. 00:28:46.986 [2024-07-25 10:17:32.013079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.986 [2024-07-25 10:17:32.013103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.986 qpair failed and we were unable to recover it. 00:28:46.986 [2024-07-25 10:17:32.013317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.986 [2024-07-25 10:17:32.013345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.986 qpair failed and we were unable to recover it. 00:28:46.986 [2024-07-25 10:17:32.013546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.986 [2024-07-25 10:17:32.013575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.986 qpair failed and we were unable to recover it. 00:28:46.986 [2024-07-25 10:17:32.013752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.986 [2024-07-25 10:17:32.013775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.986 qpair failed and we were unable to recover it. 00:28:46.986 [2024-07-25 10:17:32.013958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.986 [2024-07-25 10:17:32.013999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.986 qpair failed and we were unable to recover it. 00:28:46.986 [2024-07-25 10:17:32.014177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.986 [2024-07-25 10:17:32.014206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.986 qpair failed and we were unable to recover it. 
00:28:46.986 [2024-07-25 10:17:32.014379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.986 [2024-07-25 10:17:32.014407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.986 qpair failed and we were unable to recover it. 00:28:46.986 [2024-07-25 10:17:32.014594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.986 [2024-07-25 10:17:32.014618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.986 qpair failed and we were unable to recover it. 00:28:46.986 [2024-07-25 10:17:32.014841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.986 [2024-07-25 10:17:32.014869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.986 qpair failed and we were unable to recover it. 00:28:46.986 [2024-07-25 10:17:32.015080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.986 [2024-07-25 10:17:32.015108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.986 qpair failed and we were unable to recover it. 00:28:46.986 [2024-07-25 10:17:32.015259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.986 [2024-07-25 10:17:32.015287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.986 qpair failed and we were unable to recover it. 00:28:46.986 [2024-07-25 10:17:32.015488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.987 [2024-07-25 10:17:32.015512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.987 qpair failed and we were unable to recover it. 00:28:46.987 [2024-07-25 10:17:32.015697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.987 [2024-07-25 10:17:32.015725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.987 qpair failed and we were unable to recover it. 00:28:46.987 [2024-07-25 10:17:32.015925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.987 [2024-07-25 10:17:32.015953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.987 qpair failed and we were unable to recover it. 00:28:46.987 [2024-07-25 10:17:32.016170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.987 [2024-07-25 10:17:32.016199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.987 qpair failed and we were unable to recover it. 00:28:46.987 [2024-07-25 10:17:32.016441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.987 [2024-07-25 10:17:32.016466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.987 qpair failed and we were unable to recover it. 
00:28:46.987 [2024-07-25 10:17:32.016660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.987 [2024-07-25 10:17:32.016688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.987 qpair failed and we were unable to recover it. 00:28:46.987 [2024-07-25 10:17:32.016860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.987 [2024-07-25 10:17:32.016893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.987 qpair failed and we were unable to recover it. 00:28:46.987 [2024-07-25 10:17:32.017074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.987 [2024-07-25 10:17:32.017102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.987 qpair failed and we were unable to recover it. 00:28:46.987 [2024-07-25 10:17:32.017322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.987 [2024-07-25 10:17:32.017350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.987 qpair failed and we were unable to recover it. 00:28:46.987 [2024-07-25 10:17:32.017564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.987 [2024-07-25 10:17:32.017588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.987 qpair failed and we were unable to recover it. 00:28:46.987 [2024-07-25 10:17:32.017791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.987 [2024-07-25 10:17:32.017819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.987 qpair failed and we were unable to recover it. 00:28:46.987 [2024-07-25 10:17:32.018018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.987 [2024-07-25 10:17:32.018047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.987 qpair failed and we were unable to recover it. 00:28:46.987 [2024-07-25 10:17:32.018230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.987 [2024-07-25 10:17:32.018252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.987 qpair failed and we were unable to recover it. 00:28:46.987 [2024-07-25 10:17:32.018445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.987 [2024-07-25 10:17:32.018487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.987 qpair failed and we were unable to recover it. 00:28:46.987 [2024-07-25 10:17:32.018677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.987 [2024-07-25 10:17:32.018717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.987 qpair failed and we were unable to recover it. 
00:28:46.987 [2024-07-25 10:17:32.018854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.987 [2024-07-25 10:17:32.018882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.987 qpair failed and we were unable to recover it. 00:28:46.987 [2024-07-25 10:17:32.019026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.987 [2024-07-25 10:17:32.019050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.987 qpair failed and we were unable to recover it. 00:28:46.987 [2024-07-25 10:17:32.019185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.987 [2024-07-25 10:17:32.019225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.987 qpair failed and we were unable to recover it. 00:28:46.987 [2024-07-25 10:17:32.019416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.987 [2024-07-25 10:17:32.019458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.987 qpair failed and we were unable to recover it. 00:28:46.987 [2024-07-25 10:17:32.019615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.987 [2024-07-25 10:17:32.019646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.987 qpair failed and we were unable to recover it. 00:28:46.987 [2024-07-25 10:17:32.019867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.987 [2024-07-25 10:17:32.019890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.987 qpair failed and we were unable to recover it. 00:28:46.987 [2024-07-25 10:17:32.020122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.987 [2024-07-25 10:17:32.020151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.987 qpair failed and we were unable to recover it. 00:28:46.987 [2024-07-25 10:17:32.020316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.987 [2024-07-25 10:17:32.020344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.987 qpair failed and we were unable to recover it. 00:28:46.987 [2024-07-25 10:17:32.020515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.987 [2024-07-25 10:17:32.020544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.987 qpair failed and we were unable to recover it. 00:28:46.987 [2024-07-25 10:17:32.020740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.987 [2024-07-25 10:17:32.020763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.987 qpair failed and we were unable to recover it. 
00:28:46.987 [2024-07-25 10:17:32.020979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.987 [2024-07-25 10:17:32.021007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.987 qpair failed and we were unable to recover it. 00:28:46.987 [2024-07-25 10:17:32.021182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.987 [2024-07-25 10:17:32.021211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.987 qpair failed and we were unable to recover it. 00:28:46.987 [2024-07-25 10:17:32.021335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.987 [2024-07-25 10:17:32.021362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.987 qpair failed and we were unable to recover it. 00:28:46.987 [2024-07-25 10:17:32.021536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.987 [2024-07-25 10:17:32.021575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.987 qpair failed and we were unable to recover it. 00:28:46.987 [2024-07-25 10:17:32.021779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.987 [2024-07-25 10:17:32.021807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.987 qpair failed and we were unable to recover it. 00:28:46.987 [2024-07-25 10:17:32.022076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.987 [2024-07-25 10:17:32.022104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.987 qpair failed and we were unable to recover it. 00:28:46.987 [2024-07-25 10:17:32.022306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.987 [2024-07-25 10:17:32.022334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.987 qpair failed and we were unable to recover it. 00:28:46.987 [2024-07-25 10:17:32.022490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.987 [2024-07-25 10:17:32.022513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.987 qpair failed and we were unable to recover it. 00:28:46.987 [2024-07-25 10:17:32.022720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.987 [2024-07-25 10:17:32.022753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.987 qpair failed and we were unable to recover it. 00:28:46.987 [2024-07-25 10:17:32.022934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.987 [2024-07-25 10:17:32.022962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.987 qpair failed and we were unable to recover it. 
00:28:46.987 [2024-07-25 10:17:32.023134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.987 [2024-07-25 10:17:32.023161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.987 qpair failed and we were unable to recover it. 00:28:46.987 [2024-07-25 10:17:32.023366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.987 [2024-07-25 10:17:32.023389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.987 qpair failed and we were unable to recover it. 00:28:46.987 [2024-07-25 10:17:32.023627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.987 [2024-07-25 10:17:32.023652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.987 qpair failed and we were unable to recover it. 00:28:46.987 [2024-07-25 10:17:32.023856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.987 [2024-07-25 10:17:32.023885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.988 qpair failed and we were unable to recover it. 00:28:46.988 [2024-07-25 10:17:32.024088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.988 [2024-07-25 10:17:32.024116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.988 qpair failed and we were unable to recover it. 00:28:46.988 [2024-07-25 10:17:32.024288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.988 [2024-07-25 10:17:32.024312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.988 qpair failed and we were unable to recover it. 00:28:46.988 [2024-07-25 10:17:32.024446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.988 [2024-07-25 10:17:32.024473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.988 qpair failed and we were unable to recover it. 00:28:46.988 [2024-07-25 10:17:32.024607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.988 [2024-07-25 10:17:32.024633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.988 qpair failed and we were unable to recover it. 00:28:46.988 [2024-07-25 10:17:32.024807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.988 [2024-07-25 10:17:32.024835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.988 qpair failed and we were unable to recover it. 00:28:46.988 [2024-07-25 10:17:32.025019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.988 [2024-07-25 10:17:32.025042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.988 qpair failed and we were unable to recover it. 
00:28:46.988 [2024-07-25 10:17:32.025238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.988 [2024-07-25 10:17:32.025266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.988 qpair failed and we were unable to recover it. 00:28:46.988 [2024-07-25 10:17:32.025474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.988 [2024-07-25 10:17:32.025503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.988 qpair failed and we were unable to recover it. 00:28:46.988 [2024-07-25 10:17:32.025711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.988 [2024-07-25 10:17:32.025739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.988 qpair failed and we were unable to recover it. 00:28:46.988 [2024-07-25 10:17:32.025934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.988 [2024-07-25 10:17:32.025957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.988 qpair failed and we were unable to recover it. 00:28:46.988 [2024-07-25 10:17:32.026150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.988 [2024-07-25 10:17:32.026177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.988 qpair failed and we were unable to recover it. 00:28:46.988 [2024-07-25 10:17:32.026386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.988 [2024-07-25 10:17:32.026414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.988 qpair failed and we were unable to recover it. 00:28:46.988 [2024-07-25 10:17:32.026596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.988 [2024-07-25 10:17:32.026620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.988 qpair failed and we were unable to recover it. 00:28:46.988 [2024-07-25 10:17:32.026813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.988 [2024-07-25 10:17:32.026836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.988 qpair failed and we were unable to recover it. 00:28:46.988 [2024-07-25 10:17:32.027034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.988 [2024-07-25 10:17:32.027062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.988 qpair failed and we were unable to recover it. 00:28:46.988 [2024-07-25 10:17:32.027281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.988 [2024-07-25 10:17:32.027309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.988 qpair failed and we were unable to recover it. 
00:28:46.988 [2024-07-25 10:17:32.027454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.988 [2024-07-25 10:17:32.027483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.988 qpair failed and we were unable to recover it. 00:28:46.988 [2024-07-25 10:17:32.027666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.988 [2024-07-25 10:17:32.027689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.988 qpair failed and we were unable to recover it. 00:28:46.988 [2024-07-25 10:17:32.027820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.988 [2024-07-25 10:17:32.027862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.988 qpair failed and we were unable to recover it. 00:28:46.988 [2024-07-25 10:17:32.028052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.988 [2024-07-25 10:17:32.028080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.988 qpair failed and we were unable to recover it. 00:28:46.988 [2024-07-25 10:17:32.028233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.988 [2024-07-25 10:17:32.028260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.988 qpair failed and we were unable to recover it. 00:28:46.988 [2024-07-25 10:17:32.028439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.988 [2024-07-25 10:17:32.028463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.988 qpair failed and we were unable to recover it. 00:28:46.988 [2024-07-25 10:17:32.028695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.988 [2024-07-25 10:17:32.028723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.988 qpair failed and we were unable to recover it. 00:28:46.988 [2024-07-25 10:17:32.028922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.988 [2024-07-25 10:17:32.028950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.988 qpair failed and we were unable to recover it. 00:28:46.988 [2024-07-25 10:17:32.029137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.988 [2024-07-25 10:17:32.029165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.988 qpair failed and we were unable to recover it. 00:28:46.988 [2024-07-25 10:17:32.029375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.988 [2024-07-25 10:17:32.029398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.988 qpair failed and we were unable to recover it. 
00:28:46.988 [2024-07-25 10:17:32.029597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.988 [2024-07-25 10:17:32.029622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.988 qpair failed and we were unable to recover it. 00:28:46.988 [2024-07-25 10:17:32.029782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.988 [2024-07-25 10:17:32.029810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.988 qpair failed and we were unable to recover it. 00:28:46.988 [2024-07-25 10:17:32.029938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.988 [2024-07-25 10:17:32.029966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.988 qpair failed and we were unable to recover it. 00:28:46.988 [2024-07-25 10:17:32.030161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.988 [2024-07-25 10:17:32.030185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.988 qpair failed and we were unable to recover it. 00:28:46.988 [2024-07-25 10:17:32.030403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.988 [2024-07-25 10:17:32.030440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.988 qpair failed and we were unable to recover it. 00:28:46.988 [2024-07-25 10:17:32.030642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.988 [2024-07-25 10:17:32.030667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.989 qpair failed and we were unable to recover it. 00:28:46.989 [2024-07-25 10:17:32.030833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.989 [2024-07-25 10:17:32.030862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.989 qpair failed and we were unable to recover it. 00:28:46.989 [2024-07-25 10:17:32.031043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.989 [2024-07-25 10:17:32.031066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.989 qpair failed and we were unable to recover it. 00:28:46.989 [2024-07-25 10:17:32.031257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.989 [2024-07-25 10:17:32.031286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.989 qpair failed and we were unable to recover it. 00:28:46.989 [2024-07-25 10:17:32.031505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.989 [2024-07-25 10:17:32.031530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.989 qpair failed and we were unable to recover it. 
00:28:46.989 [2024-07-25 10:17:32.031746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.989 [2024-07-25 10:17:32.031786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.989 qpair failed and we were unable to recover it. 00:28:46.989 [2024-07-25 10:17:32.032014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.989 [2024-07-25 10:17:32.032037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.989 qpair failed and we were unable to recover it. 00:28:46.989 [2024-07-25 10:17:32.032170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.989 [2024-07-25 10:17:32.032204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.989 qpair failed and we were unable to recover it. 00:28:46.989 [2024-07-25 10:17:32.032380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.989 [2024-07-25 10:17:32.032417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.989 qpair failed and we were unable to recover it. 00:28:46.989 [2024-07-25 10:17:32.032601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.989 [2024-07-25 10:17:32.032625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.989 qpair failed and we were unable to recover it. 00:28:46.989 [2024-07-25 10:17:32.032808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.989 [2024-07-25 10:17:32.032831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.989 qpair failed and we were unable to recover it. 00:28:46.989 [2024-07-25 10:17:32.033065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.989 [2024-07-25 10:17:32.033093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.989 qpair failed and we were unable to recover it. 00:28:46.989 [2024-07-25 10:17:32.033296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.989 [2024-07-25 10:17:32.033325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.989 qpair failed and we were unable to recover it. 00:28:46.989 [2024-07-25 10:17:32.033494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.989 [2024-07-25 10:17:32.033523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.989 qpair failed and we were unable to recover it. 00:28:46.989 [2024-07-25 10:17:32.033695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.989 [2024-07-25 10:17:32.033718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.989 qpair failed and we were unable to recover it. 
00:28:46.989 [2024-07-25 10:17:32.033898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.989 [2024-07-25 10:17:32.033926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.989 qpair failed and we were unable to recover it. 00:28:46.989 [2024-07-25 10:17:32.034129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.989 [2024-07-25 10:17:32.034157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.989 qpair failed and we were unable to recover it. 00:28:46.989 [2024-07-25 10:17:32.034369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.989 [2024-07-25 10:17:32.034397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.989 qpair failed and we were unable to recover it. 00:28:46.989 [2024-07-25 10:17:32.034577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.989 [2024-07-25 10:17:32.034601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.989 qpair failed and we were unable to recover it. 00:28:46.989 [2024-07-25 10:17:32.034811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.989 [2024-07-25 10:17:32.034839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.989 qpair failed and we were unable to recover it. 00:28:46.989 [2024-07-25 10:17:32.035048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.989 [2024-07-25 10:17:32.035076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.989 qpair failed and we were unable to recover it. 00:28:46.989 [2024-07-25 10:17:32.035244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.989 [2024-07-25 10:17:32.035273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.989 qpair failed and we were unable to recover it. 00:28:46.989 [2024-07-25 10:17:32.035478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.989 [2024-07-25 10:17:32.035501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.989 qpair failed and we were unable to recover it. 00:28:46.989 [2024-07-25 10:17:32.035676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.989 [2024-07-25 10:17:32.035704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.989 qpair failed and we were unable to recover it. 00:28:46.989 [2024-07-25 10:17:32.035918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.989 [2024-07-25 10:17:32.035946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.989 qpair failed and we were unable to recover it. 
00:28:46.989 [2024-07-25 10:17:32.036143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.989 [2024-07-25 10:17:32.036171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.989 qpair failed and we were unable to recover it. 00:28:46.989 [2024-07-25 10:17:32.036339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.989 [2024-07-25 10:17:32.036362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.989 qpair failed and we were unable to recover it. 00:28:46.989 [2024-07-25 10:17:32.036541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.989 [2024-07-25 10:17:32.036566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.989 qpair failed and we were unable to recover it. 00:28:46.989 [2024-07-25 10:17:32.036734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.989 [2024-07-25 10:17:32.036762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.989 qpair failed and we were unable to recover it. 00:28:46.989 [2024-07-25 10:17:32.036934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.989 [2024-07-25 10:17:32.036962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.989 qpair failed and we were unable to recover it. 00:28:46.989 [2024-07-25 10:17:32.037135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.989 [2024-07-25 10:17:32.037158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.989 qpair failed and we were unable to recover it. 00:28:46.989 [2024-07-25 10:17:32.037376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.989 [2024-07-25 10:17:32.037408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.989 qpair failed and we were unable to recover it. 00:28:46.989 [2024-07-25 10:17:32.037624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.989 [2024-07-25 10:17:32.037647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.989 qpair failed and we were unable to recover it. 00:28:46.989 [2024-07-25 10:17:32.037863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.989 [2024-07-25 10:17:32.037891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.989 qpair failed and we were unable to recover it. 00:28:46.989 [2024-07-25 10:17:32.038050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.989 [2024-07-25 10:17:32.038073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.989 qpair failed and we were unable to recover it. 
00:28:46.995 [2024-07-25 10:17:32.087142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.995 [2024-07-25 10:17:32.087170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.995 qpair failed and we were unable to recover it. 00:28:46.995 [2024-07-25 10:17:32.087345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.995 [2024-07-25 10:17:32.087373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.995 qpair failed and we were unable to recover it. 00:28:46.995 [2024-07-25 10:17:32.087558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.995 [2024-07-25 10:17:32.087583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.995 qpair failed and we were unable to recover it. 00:28:46.995 [2024-07-25 10:17:32.087743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.995 [2024-07-25 10:17:32.087766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.995 qpair failed and we were unable to recover it. 00:28:46.995 [2024-07-25 10:17:32.087944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.995 [2024-07-25 10:17:32.087972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.995 qpair failed and we were unable to recover it. 00:28:46.995 [2024-07-25 10:17:32.088153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.995 [2024-07-25 10:17:32.088181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.995 qpair failed and we were unable to recover it. 00:28:46.995 [2024-07-25 10:17:32.088379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.995 [2024-07-25 10:17:32.088406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.995 qpair failed and we were unable to recover it. 00:28:46.995 [2024-07-25 10:17:32.088602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.995 [2024-07-25 10:17:32.088627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.995 qpair failed and we were unable to recover it. 00:28:46.995 [2024-07-25 10:17:32.088849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.995 [2024-07-25 10:17:32.088877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.995 qpair failed and we were unable to recover it. 00:28:46.995 [2024-07-25 10:17:32.089084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.995 [2024-07-25 10:17:32.089112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.995 qpair failed and we were unable to recover it. 
00:28:46.995 [2024-07-25 10:17:32.089299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.995 [2024-07-25 10:17:32.089328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.995 qpair failed and we were unable to recover it. 00:28:46.995 [2024-07-25 10:17:32.089540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.995 [2024-07-25 10:17:32.089566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.995 qpair failed and we were unable to recover it. 00:28:46.995 [2024-07-25 10:17:32.089781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.995 [2024-07-25 10:17:32.089810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.995 qpair failed and we were unable to recover it. 00:28:46.995 [2024-07-25 10:17:32.090014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.995 [2024-07-25 10:17:32.090046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.995 qpair failed and we were unable to recover it. 00:28:46.995 [2024-07-25 10:17:32.090247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.995 [2024-07-25 10:17:32.090275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.995 qpair failed and we were unable to recover it. 00:28:46.995 [2024-07-25 10:17:32.090485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.995 [2024-07-25 10:17:32.090510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.995 qpair failed and we were unable to recover it. 00:28:46.995 [2024-07-25 10:17:32.090711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.995 [2024-07-25 10:17:32.090739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.995 qpair failed and we were unable to recover it. 00:28:46.995 [2024-07-25 10:17:32.090912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.995 [2024-07-25 10:17:32.090940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.995 qpair failed and we were unable to recover it. 00:28:46.995 [2024-07-25 10:17:32.091105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.995 [2024-07-25 10:17:32.091133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.995 qpair failed and we were unable to recover it. 00:28:46.995 [2024-07-25 10:17:32.091347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.995 [2024-07-25 10:17:32.091370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.995 qpair failed and we were unable to recover it. 
00:28:46.995 [2024-07-25 10:17:32.091583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.995 [2024-07-25 10:17:32.091607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.995 qpair failed and we were unable to recover it. 00:28:46.995 [2024-07-25 10:17:32.091805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.995 [2024-07-25 10:17:32.091834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.995 qpair failed and we were unable to recover it. 00:28:46.995 [2024-07-25 10:17:32.092017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.995 [2024-07-25 10:17:32.092045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.995 qpair failed and we were unable to recover it. 00:28:46.995 [2024-07-25 10:17:32.092209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.995 [2024-07-25 10:17:32.092232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.995 qpair failed and we were unable to recover it. 00:28:46.995 [2024-07-25 10:17:32.092416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.995 [2024-07-25 10:17:32.092453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.995 qpair failed and we were unable to recover it. 00:28:46.995 [2024-07-25 10:17:32.092633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.995 [2024-07-25 10:17:32.092662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.995 qpair failed and we were unable to recover it. 00:28:46.995 [2024-07-25 10:17:32.092831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.995 [2024-07-25 10:17:32.092859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.995 qpair failed and we were unable to recover it. 00:28:46.995 [2024-07-25 10:17:32.093017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.995 [2024-07-25 10:17:32.093040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.995 qpair failed and we were unable to recover it. 00:28:46.996 [2024-07-25 10:17:32.093218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.996 [2024-07-25 10:17:32.093245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.996 qpair failed and we were unable to recover it. 00:28:46.996 [2024-07-25 10:17:32.093453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.996 [2024-07-25 10:17:32.093483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.996 qpair failed and we were unable to recover it. 
00:28:46.996 [2024-07-25 10:17:32.093632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.996 [2024-07-25 10:17:32.093660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.996 qpair failed and we were unable to recover it. 00:28:46.996 [2024-07-25 10:17:32.093833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.996 [2024-07-25 10:17:32.093856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.996 qpair failed and we were unable to recover it. 00:28:46.996 [2024-07-25 10:17:32.094064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.996 [2024-07-25 10:17:32.094091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.996 qpair failed and we were unable to recover it. 00:28:46.996 [2024-07-25 10:17:32.094293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.996 [2024-07-25 10:17:32.094321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.996 qpair failed and we were unable to recover it. 00:28:46.996 [2024-07-25 10:17:32.094500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.996 [2024-07-25 10:17:32.094529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.996 qpair failed and we were unable to recover it. 00:28:46.996 [2024-07-25 10:17:32.094714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.996 [2024-07-25 10:17:32.094737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.996 qpair failed and we were unable to recover it. 00:28:46.996 [2024-07-25 10:17:32.094950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.996 [2024-07-25 10:17:32.094978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.996 qpair failed and we were unable to recover it. 00:28:46.996 [2024-07-25 10:17:32.095189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.996 [2024-07-25 10:17:32.095217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.996 qpair failed and we were unable to recover it. 00:28:46.996 [2024-07-25 10:17:32.095380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.996 [2024-07-25 10:17:32.095408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.996 qpair failed and we were unable to recover it. 00:28:46.996 [2024-07-25 10:17:32.095579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.996 [2024-07-25 10:17:32.095602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.996 qpair failed and we were unable to recover it. 
00:28:46.996 [2024-07-25 10:17:32.095817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.996 [2024-07-25 10:17:32.095849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.996 qpair failed and we were unable to recover it. 00:28:46.996 [2024-07-25 10:17:32.096033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.996 [2024-07-25 10:17:32.096061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.996 qpair failed and we were unable to recover it. 00:28:46.996 [2024-07-25 10:17:32.096261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.996 [2024-07-25 10:17:32.096289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.996 qpair failed and we were unable to recover it. 00:28:46.996 [2024-07-25 10:17:32.096523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.996 [2024-07-25 10:17:32.096547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.996 qpair failed and we were unable to recover it. 00:28:46.996 [2024-07-25 10:17:32.096771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.996 [2024-07-25 10:17:32.096800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.996 qpair failed and we were unable to recover it. 00:28:46.996 [2024-07-25 10:17:32.097012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.996 [2024-07-25 10:17:32.097040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.996 qpair failed and we were unable to recover it. 00:28:46.996 [2024-07-25 10:17:32.097226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.996 [2024-07-25 10:17:32.097254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.996 qpair failed and we were unable to recover it. 00:28:46.996 [2024-07-25 10:17:32.097470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.996 [2024-07-25 10:17:32.097494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.996 qpair failed and we were unable to recover it. 00:28:46.996 [2024-07-25 10:17:32.097659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.996 [2024-07-25 10:17:32.097687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.996 qpair failed and we were unable to recover it. 00:28:46.996 [2024-07-25 10:17:32.097877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.996 [2024-07-25 10:17:32.097905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.996 qpair failed and we were unable to recover it. 
00:28:46.996 [2024-07-25 10:17:32.098110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.996 [2024-07-25 10:17:32.098138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.996 qpair failed and we were unable to recover it. 00:28:46.996 [2024-07-25 10:17:32.098321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.996 [2024-07-25 10:17:32.098350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.996 qpair failed and we were unable to recover it. 00:28:46.996 [2024-07-25 10:17:32.098550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.996 [2024-07-25 10:17:32.098574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.996 qpair failed and we were unable to recover it. 00:28:46.996 [2024-07-25 10:17:32.098745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.996 [2024-07-25 10:17:32.098773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.996 qpair failed and we were unable to recover it. 00:28:46.996 [2024-07-25 10:17:32.098964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.996 [2024-07-25 10:17:32.098992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.996 qpair failed and we were unable to recover it. 00:28:46.996 [2024-07-25 10:17:32.099202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.996 [2024-07-25 10:17:32.099225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.996 qpair failed and we were unable to recover it. 00:28:46.996 [2024-07-25 10:17:32.099441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.996 [2024-07-25 10:17:32.099483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.996 qpair failed and we were unable to recover it. 00:28:46.996 [2024-07-25 10:17:32.099675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.996 [2024-07-25 10:17:32.099700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.996 qpair failed and we were unable to recover it. 00:28:46.996 [2024-07-25 10:17:32.099899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.996 [2024-07-25 10:17:32.099927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.996 qpair failed and we were unable to recover it. 00:28:46.996 [2024-07-25 10:17:32.100088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.996 [2024-07-25 10:17:32.100111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.996 qpair failed and we were unable to recover it. 
00:28:46.996 [2024-07-25 10:17:32.100329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.996 [2024-07-25 10:17:32.100357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.996 qpair failed and we were unable to recover it. 00:28:46.996 [2024-07-25 10:17:32.100556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.996 [2024-07-25 10:17:32.100581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.996 qpair failed and we were unable to recover it. 00:28:46.996 [2024-07-25 10:17:32.100795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.996 [2024-07-25 10:17:32.100823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.996 qpair failed and we were unable to recover it. 00:28:46.996 [2024-07-25 10:17:32.101039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.996 [2024-07-25 10:17:32.101062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.996 qpair failed and we were unable to recover it. 00:28:46.996 [2024-07-25 10:17:32.101254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.996 [2024-07-25 10:17:32.101305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.996 qpair failed and we were unable to recover it. 00:28:46.996 [2024-07-25 10:17:32.101499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.996 [2024-07-25 10:17:32.101523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.996 qpair failed and we were unable to recover it. 00:28:46.996 [2024-07-25 10:17:32.101721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.997 [2024-07-25 10:17:32.101759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.997 qpair failed and we were unable to recover it. 00:28:46.997 [2024-07-25 10:17:32.101939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.997 [2024-07-25 10:17:32.101976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.997 qpair failed and we were unable to recover it. 00:28:46.997 [2024-07-25 10:17:32.102187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.997 [2024-07-25 10:17:32.102215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.997 qpair failed and we were unable to recover it. 00:28:46.997 [2024-07-25 10:17:32.102362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.997 [2024-07-25 10:17:32.102390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.997 qpair failed and we were unable to recover it. 
00:28:46.997 [2024-07-25 10:17:32.102546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.997 [2024-07-25 10:17:32.102572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.997 qpair failed and we were unable to recover it. 00:28:46.997 [2024-07-25 10:17:32.102727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.997 [2024-07-25 10:17:32.102767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.997 qpair failed and we were unable to recover it. 00:28:46.997 [2024-07-25 10:17:32.102989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.997 [2024-07-25 10:17:32.103017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.997 qpair failed and we were unable to recover it. 00:28:46.997 [2024-07-25 10:17:32.103203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.997 [2024-07-25 10:17:32.103231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.997 qpair failed and we were unable to recover it. 00:28:46.997 [2024-07-25 10:17:32.103417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.997 [2024-07-25 10:17:32.103471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.997 qpair failed and we were unable to recover it. 00:28:46.997 [2024-07-25 10:17:32.103655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.997 [2024-07-25 10:17:32.103680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.997 qpair failed and we were unable to recover it. 00:28:46.997 [2024-07-25 10:17:32.103885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.997 [2024-07-25 10:17:32.103913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.997 qpair failed and we were unable to recover it. 00:28:46.997 [2024-07-25 10:17:32.104130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.997 [2024-07-25 10:17:32.104158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.997 qpair failed and we were unable to recover it. 00:28:46.997 [2024-07-25 10:17:32.104321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.997 [2024-07-25 10:17:32.104349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.997 qpair failed and we were unable to recover it. 00:28:46.997 [2024-07-25 10:17:32.104531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.997 [2024-07-25 10:17:32.104557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.997 qpair failed and we were unable to recover it. 
00:28:46.997 [2024-07-25 10:17:32.104773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.997 [2024-07-25 10:17:32.104801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.997 qpair failed and we were unable to recover it. 00:28:46.997 [2024-07-25 10:17:32.104993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.997 [2024-07-25 10:17:32.105022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.997 qpair failed and we were unable to recover it. 00:28:46.997 [2024-07-25 10:17:32.105204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.997 [2024-07-25 10:17:32.105232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.997 qpair failed and we were unable to recover it. 00:28:46.997 [2024-07-25 10:17:32.105443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.997 [2024-07-25 10:17:32.105470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.997 qpair failed and we were unable to recover it. 00:28:46.997 [2024-07-25 10:17:32.105647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:46.997 [2024-07-25 10:17:32.105675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:46.997 qpair failed and we were unable to recover it. 00:28:47.277 [2024-07-25 10:17:32.105845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.277 [2024-07-25 10:17:32.105874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:47.277 qpair failed and we were unable to recover it. 00:28:47.277 [2024-07-25 10:17:32.106053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.277 [2024-07-25 10:17:32.106082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:47.277 qpair failed and we were unable to recover it. 00:28:47.277 [2024-07-25 10:17:32.106295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.277 [2024-07-25 10:17:32.106319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:47.277 qpair failed and we were unable to recover it. 00:28:47.277 [2024-07-25 10:17:32.106538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.277 [2024-07-25 10:17:32.106567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:47.277 qpair failed and we were unable to recover it. 00:28:47.277 [2024-07-25 10:17:32.106740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.277 [2024-07-25 10:17:32.106769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:47.277 qpair failed and we were unable to recover it. 
00:28:47.277 [2024-07-25 10:17:32.106968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.277 [2024-07-25 10:17:32.106996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:47.277 qpair failed and we were unable to recover it. 00:28:47.277 [2024-07-25 10:17:32.107201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.277 [2024-07-25 10:17:32.107226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:47.277 qpair failed and we were unable to recover it. 00:28:47.277 [2024-07-25 10:17:32.107402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.277 [2024-07-25 10:17:32.107440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:47.277 qpair failed and we were unable to recover it. 00:28:47.277 [2024-07-25 10:17:32.107634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.277 [2024-07-25 10:17:32.107660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:47.277 qpair failed and we were unable to recover it. 00:28:47.277 [2024-07-25 10:17:32.107830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.277 [2024-07-25 10:17:32.107858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:47.277 qpair failed and we were unable to recover it. 00:28:47.277 [2024-07-25 10:17:32.108047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.277 [2024-07-25 10:17:32.108073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:47.277 qpair failed and we were unable to recover it. 00:28:47.277 [2024-07-25 10:17:32.108264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.277 [2024-07-25 10:17:32.108292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:47.277 qpair failed and we were unable to recover it. 00:28:47.277 [2024-07-25 10:17:32.108472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.277 [2024-07-25 10:17:32.108501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:47.277 qpair failed and we were unable to recover it. 00:28:47.277 [2024-07-25 10:17:32.108679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.277 [2024-07-25 10:17:32.108707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:47.277 qpair failed and we were unable to recover it. 00:28:47.277 [2024-07-25 10:17:32.108887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.277 [2024-07-25 10:17:32.108912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:47.277 qpair failed and we were unable to recover it. 
00:28:47.277 [2024-07-25 10:17:32.109120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.277 [2024-07-25 10:17:32.109148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:47.277 qpair failed and we were unable to recover it. 00:28:47.277 [2024-07-25 10:17:32.109332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.277 [2024-07-25 10:17:32.109360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:47.277 qpair failed and we were unable to recover it. 00:28:47.277 [2024-07-25 10:17:32.109543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.277 [2024-07-25 10:17:32.109584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:47.277 qpair failed and we were unable to recover it. 00:28:47.277 [2024-07-25 10:17:32.109749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.277 [2024-07-25 10:17:32.109774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:47.277 qpair failed and we were unable to recover it. 00:28:47.277 [2024-07-25 10:17:32.109985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.278 [2024-07-25 10:17:32.110013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:47.278 qpair failed and we were unable to recover it. 00:28:47.278 [2024-07-25 10:17:32.110225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.278 [2024-07-25 10:17:32.110253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:47.278 qpair failed and we were unable to recover it. 00:28:47.278 [2024-07-25 10:17:32.110446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.278 [2024-07-25 10:17:32.110476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:47.278 qpair failed and we were unable to recover it. 00:28:47.278 [2024-07-25 10:17:32.110675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.278 [2024-07-25 10:17:32.110714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:47.278 qpair failed and we were unable to recover it. 00:28:47.278 [2024-07-25 10:17:32.110884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.278 [2024-07-25 10:17:32.110912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:47.278 qpair failed and we were unable to recover it. 00:28:47.278 [2024-07-25 10:17:32.111085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.278 [2024-07-25 10:17:32.111114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:47.278 qpair failed and we were unable to recover it. 
00:28:47.278 [2024-07-25 10:17:32.111258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.278 [2024-07-25 10:17:32.111286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:47.278 qpair failed and we were unable to recover it. 00:28:47.278 [2024-07-25 10:17:32.111474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.278 [2024-07-25 10:17:32.111497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:47.278 qpair failed and we were unable to recover it. 00:28:47.278 [2024-07-25 10:17:32.111665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.278 [2024-07-25 10:17:32.111693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:47.278 qpair failed and we were unable to recover it. 00:28:47.278 [2024-07-25 10:17:32.111905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.278 [2024-07-25 10:17:32.111934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:47.278 qpair failed and we were unable to recover it. 00:28:47.278 [2024-07-25 10:17:32.112121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.278 [2024-07-25 10:17:32.112149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:47.278 qpair failed and we were unable to recover it. 00:28:47.278 [2024-07-25 10:17:32.112322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.278 [2024-07-25 10:17:32.112345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:47.278 qpair failed and we were unable to recover it. 00:28:47.278 [2024-07-25 10:17:32.112523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.278 [2024-07-25 10:17:32.112552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:47.278 qpair failed and we were unable to recover it. 00:28:47.278 [2024-07-25 10:17:32.112762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.278 [2024-07-25 10:17:32.112791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:47.278 qpair failed and we were unable to recover it. 00:28:47.278 [2024-07-25 10:17:32.112998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.278 [2024-07-25 10:17:32.113026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:47.278 qpair failed and we were unable to recover it. 00:28:47.278 [2024-07-25 10:17:32.113205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.278 [2024-07-25 10:17:32.113228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:47.278 qpair failed and we were unable to recover it. 
00:28:47.278 [2024-07-25 10:17:32.113402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.278 [2024-07-25 10:17:32.113438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:47.278 qpair failed and we were unable to recover it. 00:28:47.278 [2024-07-25 10:17:32.113629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.278 [2024-07-25 10:17:32.113653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:47.278 qpair failed and we were unable to recover it. 00:28:47.278 [2024-07-25 10:17:32.113827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.278 [2024-07-25 10:17:32.113855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:47.278 qpair failed and we were unable to recover it. 00:28:47.278 [2024-07-25 10:17:32.114050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.278 [2024-07-25 10:17:32.114073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:47.278 qpair failed and we were unable to recover it. 00:28:47.278 [2024-07-25 10:17:32.114266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.278 [2024-07-25 10:17:32.114294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:47.278 qpair failed and we were unable to recover it. 00:28:47.278 [2024-07-25 10:17:32.114497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.278 [2024-07-25 10:17:32.114526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:47.278 qpair failed and we were unable to recover it. 00:28:47.278 [2024-07-25 10:17:32.114726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.278 [2024-07-25 10:17:32.114754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:47.278 qpair failed and we were unable to recover it. 00:28:47.278 [2024-07-25 10:17:32.114928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.278 [2024-07-25 10:17:32.114951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:47.278 qpair failed and we were unable to recover it. 00:28:47.278 [2024-07-25 10:17:32.115135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.278 [2024-07-25 10:17:32.115163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:47.278 qpair failed and we were unable to recover it. 00:28:47.278 [2024-07-25 10:17:32.115324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.278 [2024-07-25 10:17:32.115352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:47.278 qpair failed and we were unable to recover it. 
00:28:47.278 [2024-07-25 10:17:32.115502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.278 [2024-07-25 10:17:32.115531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:47.278 qpair failed and we were unable to recover it. 00:28:47.278 [2024-07-25 10:17:32.115729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.278 [2024-07-25 10:17:32.115768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:47.278 qpair failed and we were unable to recover it. 00:28:47.278 [2024-07-25 10:17:32.115905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.278 [2024-07-25 10:17:32.115933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:47.278 qpair failed and we were unable to recover it. 00:28:47.278 [2024-07-25 10:17:32.116107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.278 [2024-07-25 10:17:32.116135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:47.278 qpair failed and we were unable to recover it. 00:28:47.278 [2024-07-25 10:17:32.116292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.278 [2024-07-25 10:17:32.116320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:47.278 qpair failed and we were unable to recover it. 00:28:47.278 [2024-07-25 10:17:32.116508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.278 [2024-07-25 10:17:32.116535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:47.278 qpair failed and we were unable to recover it. 00:28:47.278 [2024-07-25 10:17:32.116759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.278 [2024-07-25 10:17:32.116787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:47.278 qpair failed and we were unable to recover it. 00:28:47.278 [2024-07-25 10:17:32.117000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.278 [2024-07-25 10:17:32.117028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:47.278 qpair failed and we were unable to recover it. 00:28:47.278 [2024-07-25 10:17:32.117191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.278 [2024-07-25 10:17:32.117219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:47.278 qpair failed and we were unable to recover it. 00:28:47.278 [2024-07-25 10:17:32.117446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.278 [2024-07-25 10:17:32.117470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:47.278 qpair failed and we were unable to recover it. 
00:28:47.278 [2024-07-25 10:17:32.117630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.278 [2024-07-25 10:17:32.117658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:47.278 qpair failed and we were unable to recover it. 00:28:47.278 [2024-07-25 10:17:32.117861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.278 [2024-07-25 10:17:32.117890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:47.278 qpair failed and we were unable to recover it. 00:28:47.279 [2024-07-25 10:17:32.118044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.279 [2024-07-25 10:17:32.118072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:47.279 qpair failed and we were unable to recover it. 00:28:47.279 [2024-07-25 10:17:32.118256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.279 [2024-07-25 10:17:32.118285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:47.279 qpair failed and we were unable to recover it. 00:28:47.279 [2024-07-25 10:17:32.118495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.279 [2024-07-25 10:17:32.118524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:47.279 qpair failed and we were unable to recover it. 00:28:47.279 [2024-07-25 10:17:32.118696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.279 [2024-07-25 10:17:32.118724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:47.279 qpair failed and we were unable to recover it. 00:28:47.279 [2024-07-25 10:17:32.118926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.279 [2024-07-25 10:17:32.118954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:47.279 qpair failed and we were unable to recover it. 00:28:47.279 [2024-07-25 10:17:32.119135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.279 [2024-07-25 10:17:32.119158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:47.279 qpair failed and we were unable to recover it. 00:28:47.279 [2024-07-25 10:17:32.119340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.279 [2024-07-25 10:17:32.119368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:47.279 qpair failed and we were unable to recover it. 00:28:47.279 [2024-07-25 10:17:32.119589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.279 [2024-07-25 10:17:32.119613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:47.279 qpair failed and we were unable to recover it. 
00:28:47.279 [2024-07-25 10:17:32.119838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.279 [2024-07-25 10:17:32.119866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:47.279 qpair failed and we were unable to recover it. 00:28:47.279 [2024-07-25 10:17:32.120040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.279 [2024-07-25 10:17:32.120063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:47.279 qpair failed and we were unable to recover it. 00:28:47.279 [2024-07-25 10:17:32.120245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.279 [2024-07-25 10:17:32.120273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:47.279 qpair failed and we were unable to recover it. 00:28:47.279 [2024-07-25 10:17:32.120476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.279 [2024-07-25 10:17:32.120505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:47.279 qpair failed and we were unable to recover it. 00:28:47.279 [2024-07-25 10:17:32.120675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.279 [2024-07-25 10:17:32.120703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:47.279 qpair failed and we were unable to recover it. 00:28:47.279 [2024-07-25 10:17:32.120881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.279 [2024-07-25 10:17:32.120904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:47.279 qpair failed and we were unable to recover it. 00:28:47.279 [2024-07-25 10:17:32.121061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.279 [2024-07-25 10:17:32.121084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:47.279 qpair failed and we were unable to recover it. 00:28:47.279 [2024-07-25 10:17:32.121269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.279 [2024-07-25 10:17:32.121297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:47.279 qpair failed and we were unable to recover it. 00:28:47.279 [2024-07-25 10:17:32.121471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.279 [2024-07-25 10:17:32.121500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:47.279 qpair failed and we were unable to recover it. 00:28:47.279 [2024-07-25 10:17:32.121679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.279 [2024-07-25 10:17:32.121717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:47.279 qpair failed and we were unable to recover it. 
00:28:47.279 [2024-07-25 10:17:32.121894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.279 [2024-07-25 10:17:32.121922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:47.279 qpair failed and we were unable to recover it. 00:28:47.279 [2024-07-25 10:17:32.122098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.279 [2024-07-25 10:17:32.122126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:47.279 qpair failed and we were unable to recover it. 00:28:47.279 [2024-07-25 10:17:32.122300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.279 [2024-07-25 10:17:32.122332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:47.279 qpair failed and we were unable to recover it. 00:28:47.279 [2024-07-25 10:17:32.122514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.279 [2024-07-25 10:17:32.122539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:47.279 qpair failed and we were unable to recover it. 00:28:47.279 [2024-07-25 10:17:32.122734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.279 [2024-07-25 10:17:32.122762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:47.279 qpair failed and we were unable to recover it. 00:28:47.279 [2024-07-25 10:17:32.122906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.279 [2024-07-25 10:17:32.122934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:47.279 qpair failed and we were unable to recover it. 00:28:47.279 [2024-07-25 10:17:32.123119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.279 [2024-07-25 10:17:32.123148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:47.279 qpair failed and we were unable to recover it. 00:28:47.279 [2024-07-25 10:17:32.123320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.279 [2024-07-25 10:17:32.123343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:47.279 qpair failed and we were unable to recover it. 00:28:47.279 [2024-07-25 10:17:32.123527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.279 [2024-07-25 10:17:32.123556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:47.279 qpair failed and we were unable to recover it. 00:28:47.279 [2024-07-25 10:17:32.123738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.279 [2024-07-25 10:17:32.123767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:47.279 qpair failed and we were unable to recover it. 
00:28:47.279 [2024-07-25 10:17:32.123972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.279 [2024-07-25 10:17:32.124001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:47.279 qpair failed and we were unable to recover it. 00:28:47.279 [2024-07-25 10:17:32.124215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.279 [2024-07-25 10:17:32.124238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:47.279 qpair failed and we were unable to recover it. 00:28:47.279 [2024-07-25 10:17:32.124442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.279 [2024-07-25 10:17:32.124471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:47.279 qpair failed and we were unable to recover it. 00:28:47.279 [2024-07-25 10:17:32.124694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.279 [2024-07-25 10:17:32.124733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:47.279 qpair failed and we were unable to recover it. 00:28:47.279 [2024-07-25 10:17:32.124899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.279 [2024-07-25 10:17:32.124927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:47.279 qpair failed and we were unable to recover it. 00:28:47.279 [2024-07-25 10:17:32.125104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.279 [2024-07-25 10:17:32.125127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:47.279 qpair failed and we were unable to recover it. 00:28:47.279 [2024-07-25 10:17:32.125313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.279 [2024-07-25 10:17:32.125341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:47.279 qpair failed and we were unable to recover it. 00:28:47.279 [2024-07-25 10:17:32.125520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.279 [2024-07-25 10:17:32.125549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:47.279 qpair failed and we were unable to recover it. 00:28:47.279 [2024-07-25 10:17:32.125726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.279 [2024-07-25 10:17:32.125754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:47.279 qpair failed and we were unable to recover it. 00:28:47.279 [2024-07-25 10:17:32.125924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.279 [2024-07-25 10:17:32.125947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:47.279 qpair failed and we were unable to recover it. 
00:28:47.280 [2024-07-25 10:17:32.126126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.280 [2024-07-25 10:17:32.126154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:47.280 qpair failed and we were unable to recover it. 00:28:47.280 [2024-07-25 10:17:32.126329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.280 [2024-07-25 10:17:32.126358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:47.280 qpair failed and we were unable to recover it. 00:28:47.280 [2024-07-25 10:17:32.126548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.280 [2024-07-25 10:17:32.126572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:47.280 qpair failed and we were unable to recover it. 00:28:47.280 [2024-07-25 10:17:32.126738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.280 [2024-07-25 10:17:32.126761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:47.280 qpair failed and we were unable to recover it. 00:28:47.280 [2024-07-25 10:17:32.126980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.280 [2024-07-25 10:17:32.127008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:47.280 qpair failed and we were unable to recover it. 00:28:47.280 [2024-07-25 10:17:32.127185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.280 [2024-07-25 10:17:32.127213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:47.280 qpair failed and we were unable to recover it. 00:28:47.280 [2024-07-25 10:17:32.127358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.280 [2024-07-25 10:17:32.127387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:47.280 qpair failed and we were unable to recover it. 00:28:47.280 [2024-07-25 10:17:32.127582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.280 [2024-07-25 10:17:32.127606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:47.280 qpair failed and we were unable to recover it. 00:28:47.280 [2024-07-25 10:17:32.127828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.280 [2024-07-25 10:17:32.127855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:47.280 qpair failed and we were unable to recover it. 00:28:47.280 [2024-07-25 10:17:32.128001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.280 [2024-07-25 10:17:32.128029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:47.280 qpair failed and we were unable to recover it. 
00:28:47.280 [2024-07-25 10:17:32.128242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.280 [2024-07-25 10:17:32.128270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:47.280 qpair failed and we were unable to recover it. 00:28:47.280 [2024-07-25 10:17:32.128444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.280 [2024-07-25 10:17:32.128468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:47.280 qpair failed and we were unable to recover it. 00:28:47.280 [2024-07-25 10:17:32.128647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.280 [2024-07-25 10:17:32.128674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:47.280 qpair failed and we were unable to recover it. 00:28:47.280 [2024-07-25 10:17:32.128872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.280 [2024-07-25 10:17:32.128901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:47.280 qpair failed and we were unable to recover it. 00:28:47.280 [2024-07-25 10:17:32.129070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.280 [2024-07-25 10:17:32.129098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:47.280 qpair failed and we were unable to recover it. 00:28:47.280 [2024-07-25 10:17:32.129263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.280 [2024-07-25 10:17:32.129286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:47.280 qpair failed and we were unable to recover it. 00:28:47.280 [2024-07-25 10:17:32.129511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.280 [2024-07-25 10:17:32.129540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:47.280 qpair failed and we were unable to recover it. 00:28:47.280 [2024-07-25 10:17:32.129744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.280 [2024-07-25 10:17:32.129773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:47.280 qpair failed and we were unable to recover it. 00:28:47.280 [2024-07-25 10:17:32.129972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.280 [2024-07-25 10:17:32.130000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:47.280 qpair failed and we were unable to recover it. 00:28:47.280 [2024-07-25 10:17:32.130168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.280 [2024-07-25 10:17:32.130192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:47.280 qpair failed and we were unable to recover it. 
00:28:47.280 [2024-07-25 10:17:32.130359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.280 [2024-07-25 10:17:32.130387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:47.280 qpair failed and we were unable to recover it. 00:28:47.280 [2024-07-25 10:17:32.130557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.280 [2024-07-25 10:17:32.130581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:47.280 qpair failed and we were unable to recover it. 00:28:47.280 [2024-07-25 10:17:32.130785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.280 [2024-07-25 10:17:32.130813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:47.280 qpair failed and we were unable to recover it. 00:28:47.280 [2024-07-25 10:17:32.131042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.280 [2024-07-25 10:17:32.131064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:47.280 qpair failed and we were unable to recover it. 00:28:47.280 [2024-07-25 10:17:32.131216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.280 [2024-07-25 10:17:32.131244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:47.280 qpair failed and we were unable to recover it. 00:28:47.280 [2024-07-25 10:17:32.131436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.280 [2024-07-25 10:17:32.131465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:47.280 qpair failed and we were unable to recover it. 00:28:47.280 [2024-07-25 10:17:32.131640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.280 [2024-07-25 10:17:32.131668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:47.280 qpair failed and we were unable to recover it. 00:28:47.280 [2024-07-25 10:17:32.131834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.280 [2024-07-25 10:17:32.131857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:47.280 qpair failed and we were unable to recover it. 00:28:47.280 [2024-07-25 10:17:32.132063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.280 [2024-07-25 10:17:32.132090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:47.280 qpair failed and we were unable to recover it. 00:28:47.280 [2024-07-25 10:17:32.132270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.280 [2024-07-25 10:17:32.132297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:47.280 qpair failed and we were unable to recover it. 
00:28:47.280 [2024-07-25 10:17:32.132509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.280 [2024-07-25 10:17:32.132538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:47.280 qpair failed and we were unable to recover it. 00:28:47.280 [2024-07-25 10:17:32.132706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.280 [2024-07-25 10:17:32.132734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:47.280 qpair failed and we were unable to recover it. 00:28:47.280 [2024-07-25 10:17:32.132902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.280 [2024-07-25 10:17:32.132936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:47.280 qpair failed and we were unable to recover it. 00:28:47.280 [2024-07-25 10:17:32.133149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.280 [2024-07-25 10:17:32.133178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:47.280 qpair failed and we were unable to recover it. 00:28:47.280 [2024-07-25 10:17:32.133377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.280 [2024-07-25 10:17:32.133406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:47.280 qpair failed and we were unable to recover it. 00:28:47.280 [2024-07-25 10:17:32.133615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.280 [2024-07-25 10:17:32.133640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:47.280 qpair failed and we were unable to recover it. 00:28:47.280 [2024-07-25 10:17:32.133904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.280 [2024-07-25 10:17:32.133933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:47.280 qpair failed and we were unable to recover it. 00:28:47.280 [2024-07-25 10:17:32.134148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.280 [2024-07-25 10:17:32.134177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:47.281 qpair failed and we were unable to recover it. 00:28:47.281 [2024-07-25 10:17:32.134402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.281 [2024-07-25 10:17:32.134438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:47.281 qpair failed and we were unable to recover it. 00:28:47.281 [2024-07-25 10:17:32.134663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.281 [2024-07-25 10:17:32.134687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:47.281 qpair failed and we were unable to recover it. 
00:28:47.281 [2024-07-25 10:17:32.135008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.281 [2024-07-25 10:17:32.135072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:47.281 qpair failed and we were unable to recover it. 00:28:47.281 [2024-07-25 10:17:32.135375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.281 [2024-07-25 10:17:32.135404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:47.281 qpair failed and we were unable to recover it. 00:28:47.281 [2024-07-25 10:17:32.135596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.281 [2024-07-25 10:17:32.135620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:47.281 qpair failed and we were unable to recover it. 00:28:47.281 [2024-07-25 10:17:32.135873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.281 [2024-07-25 10:17:32.135897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:47.281 qpair failed and we were unable to recover it. 00:28:47.281 [2024-07-25 10:17:32.136217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.281 [2024-07-25 10:17:32.136246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:47.281 qpair failed and we were unable to recover it. 00:28:47.281 [2024-07-25 10:17:32.136532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.281 [2024-07-25 10:17:32.136562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:47.281 qpair failed and we were unable to recover it. 00:28:47.281 [2024-07-25 10:17:32.136770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.281 [2024-07-25 10:17:32.136799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:47.281 qpair failed and we were unable to recover it. 00:28:47.281 [2024-07-25 10:17:32.136981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.281 [2024-07-25 10:17:32.137004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:47.281 qpair failed and we were unable to recover it. 00:28:47.281 [2024-07-25 10:17:32.137187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.281 [2024-07-25 10:17:32.137226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:47.281 qpair failed and we were unable to recover it. 00:28:47.281 [2024-07-25 10:17:32.137492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.281 [2024-07-25 10:17:32.137522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:47.281 qpair failed and we were unable to recover it. 
00:28:47.281 [2024-07-25 10:17:32.137698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.281 [2024-07-25 10:17:32.137731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:47.281 qpair failed and we were unable to recover it. 00:28:47.281 [2024-07-25 10:17:32.137981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.281 [2024-07-25 10:17:32.138004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:47.281 qpair failed and we were unable to recover it. 00:28:47.281 [2024-07-25 10:17:32.138298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.281 [2024-07-25 10:17:32.138326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:47.281 qpair failed and we were unable to recover it. 00:28:47.281 [2024-07-25 10:17:32.138546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.281 [2024-07-25 10:17:32.138576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:47.281 qpair failed and we were unable to recover it. 00:28:47.281 [2024-07-25 10:17:32.138786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.281 [2024-07-25 10:17:32.138815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:47.281 qpair failed and we were unable to recover it. 00:28:47.281 [2024-07-25 10:17:32.139028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.281 [2024-07-25 10:17:32.139052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:47.281 qpair failed and we were unable to recover it. 00:28:47.281 [2024-07-25 10:17:32.139224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.281 [2024-07-25 10:17:32.139253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:47.281 qpair failed and we were unable to recover it. 00:28:47.281 [2024-07-25 10:17:32.139477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.281 [2024-07-25 10:17:32.139507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:47.281 qpair failed and we were unable to recover it. 00:28:47.281 [2024-07-25 10:17:32.139678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.281 [2024-07-25 10:17:32.139707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:47.281 qpair failed and we were unable to recover it. 00:28:47.281 [2024-07-25 10:17:32.139881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.281 [2024-07-25 10:17:32.139904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:47.281 qpair failed and we were unable to recover it. 
00:28:47.281 [2024-07-25 10:17:32.140056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.281 [2024-07-25 10:17:32.140085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:47.281 qpair failed and we were unable to recover it. 00:28:47.281 [2024-07-25 10:17:32.140289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.281 [2024-07-25 10:17:32.140318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:47.281 qpair failed and we were unable to recover it. 00:28:47.281 [2024-07-25 10:17:32.140481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.281 [2024-07-25 10:17:32.140510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:47.281 qpair failed and we were unable to recover it. 00:28:47.281 [2024-07-25 10:17:32.140707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.281 [2024-07-25 10:17:32.140746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:47.281 qpair failed and we were unable to recover it. 00:28:47.281 [2024-07-25 10:17:32.140950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.281 [2024-07-25 10:17:32.140979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:47.281 qpair failed and we were unable to recover it. 00:28:47.281 [2024-07-25 10:17:32.141194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.281 [2024-07-25 10:17:32.141223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:47.281 qpair failed and we were unable to recover it. 00:28:47.281 [2024-07-25 10:17:32.141520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.281 [2024-07-25 10:17:32.141550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:47.281 qpair failed and we were unable to recover it. 00:28:47.281 [2024-07-25 10:17:32.141734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.281 [2024-07-25 10:17:32.141758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:47.281 qpair failed and we were unable to recover it. 00:28:47.281 [2024-07-25 10:17:32.142008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.281 [2024-07-25 10:17:32.142037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:47.281 qpair failed and we were unable to recover it. 00:28:47.281 [2024-07-25 10:17:32.142276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.281 [2024-07-25 10:17:32.142306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:47.281 qpair failed and we were unable to recover it. 
00:28:47.281 [2024-07-25 10:17:32.142556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.281 [2024-07-25 10:17:32.142586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:47.281 qpair failed and we were unable to recover it. 00:28:47.281 [2024-07-25 10:17:32.142868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.281 [2024-07-25 10:17:32.142891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:47.281 qpair failed and we were unable to recover it. 00:28:47.281 [2024-07-25 10:17:32.143071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.281 [2024-07-25 10:17:32.143100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:47.281 qpair failed and we were unable to recover it. 00:28:47.281 [2024-07-25 10:17:32.143277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.281 [2024-07-25 10:17:32.143305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:47.281 qpair failed and we were unable to recover it. 00:28:47.281 [2024-07-25 10:17:32.143475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.281 [2024-07-25 10:17:32.143505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:47.281 qpair failed and we were unable to recover it. 00:28:47.281 [2024-07-25 10:17:32.143656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.281 [2024-07-25 10:17:32.143681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:47.282 qpair failed and we were unable to recover it. 00:28:47.282 [2024-07-25 10:17:32.143867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.282 [2024-07-25 10:17:32.143898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:47.282 qpair failed and we were unable to recover it. 00:28:47.282 [2024-07-25 10:17:32.144080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.282 [2024-07-25 10:17:32.144119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:47.282 qpair failed and we were unable to recover it. 00:28:47.282 [2024-07-25 10:17:32.144378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.282 [2024-07-25 10:17:32.144407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:47.282 qpair failed and we were unable to recover it. 00:28:47.282 [2024-07-25 10:17:32.144644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.282 [2024-07-25 10:17:32.144668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:47.282 qpair failed and we were unable to recover it. 
00:28:47.282 [2024-07-25 10:17:32.144905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.282 [2024-07-25 10:17:32.144934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:47.282 qpair failed and we were unable to recover it. 00:28:47.282 [2024-07-25 10:17:32.145246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.282 [2024-07-25 10:17:32.145275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:47.282 qpair failed and we were unable to recover it. 00:28:47.282 [2024-07-25 10:17:32.145546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.282 [2024-07-25 10:17:32.145571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:47.282 qpair failed and we were unable to recover it. 00:28:47.282 [2024-07-25 10:17:32.145807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.282 [2024-07-25 10:17:32.145830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:47.282 qpair failed and we were unable to recover it. 00:28:47.282 [2024-07-25 10:17:32.146099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.282 [2024-07-25 10:17:32.146127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:47.282 qpair failed and we were unable to recover it. 00:28:47.282 [2024-07-25 10:17:32.146306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.282 [2024-07-25 10:17:32.146335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:47.282 qpair failed and we were unable to recover it. 00:28:47.282 [2024-07-25 10:17:32.146513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.282 [2024-07-25 10:17:32.146542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:47.282 qpair failed and we were unable to recover it. 00:28:47.282 [2024-07-25 10:17:32.146716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.282 [2024-07-25 10:17:32.146754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:47.282 qpair failed and we were unable to recover it. 00:28:47.282 [2024-07-25 10:17:32.146950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.282 [2024-07-25 10:17:32.146979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:47.282 qpair failed and we were unable to recover it. 00:28:47.282 [2024-07-25 10:17:32.147216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.282 [2024-07-25 10:17:32.147245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:47.282 qpair failed and we were unable to recover it. 
00:28:47.282 [2024-07-25 10:17:32.147501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.282 [2024-07-25 10:17:32.147531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:47.282 qpair failed and we were unable to recover it. 00:28:47.282 [2024-07-25 10:17:32.147726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.282 [2024-07-25 10:17:32.147751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:47.282 qpair failed and we were unable to recover it. 00:28:47.282 [2024-07-25 10:17:32.147959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.282 [2024-07-25 10:17:32.147988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:47.282 qpair failed and we were unable to recover it. 00:28:47.282 [2024-07-25 10:17:32.148214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.282 [2024-07-25 10:17:32.148242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:47.282 qpair failed and we were unable to recover it. 00:28:47.282 [2024-07-25 10:17:32.148396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.282 [2024-07-25 10:17:32.148423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:47.282 qpair failed and we were unable to recover it. 00:28:47.282 [2024-07-25 10:17:32.148629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.282 [2024-07-25 10:17:32.148654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:47.282 qpair failed and we were unable to recover it. 00:28:47.282 [2024-07-25 10:17:32.148857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.282 [2024-07-25 10:17:32.148885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:47.282 qpair failed and we were unable to recover it. 00:28:47.282 [2024-07-25 10:17:32.149039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.282 [2024-07-25 10:17:32.149068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:47.282 qpair failed and we were unable to recover it. 00:28:47.282 [2024-07-25 10:17:32.149271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.282 [2024-07-25 10:17:32.149299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:47.282 qpair failed and we were unable to recover it. 00:28:47.282 [2024-07-25 10:17:32.149480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.282 [2024-07-25 10:17:32.149505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:47.282 qpair failed and we were unable to recover it. 
00:28:47.282 [2024-07-25 10:17:32.149738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.282 [2024-07-25 10:17:32.149767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:47.282 qpair failed and we were unable to recover it. 00:28:47.282 [2024-07-25 10:17:32.150024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.282 [2024-07-25 10:17:32.150056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:47.282 qpair failed and we were unable to recover it. 00:28:47.282 [2024-07-25 10:17:32.150243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.282 [2024-07-25 10:17:32.150273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:47.282 qpair failed and we were unable to recover it. 00:28:47.282 [2024-07-25 10:17:32.150472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.282 [2024-07-25 10:17:32.150496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:47.282 qpair failed and we were unable to recover it. 00:28:47.282 [2024-07-25 10:17:32.150736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.282 [2024-07-25 10:17:32.150769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:47.282 qpair failed and we were unable to recover it. 00:28:47.282 [2024-07-25 10:17:32.151048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.282 [2024-07-25 10:17:32.151081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:47.282 qpair failed and we were unable to recover it. 00:28:47.283 [2024-07-25 10:17:32.151275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.283 [2024-07-25 10:17:32.151304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:47.283 qpair failed and we were unable to recover it. 00:28:47.283 [2024-07-25 10:17:32.151548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.283 [2024-07-25 10:17:32.151573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:47.283 qpair failed and we were unable to recover it. 00:28:47.283 [2024-07-25 10:17:32.151771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.283 [2024-07-25 10:17:32.151800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:47.283 qpair failed and we were unable to recover it. 00:28:47.283 [2024-07-25 10:17:32.152076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.283 [2024-07-25 10:17:32.152107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:47.283 qpair failed and we were unable to recover it. 
00:28:47.283 [2024-07-25 10:17:32.152315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.283 [2024-07-25 10:17:32.152344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:47.283 qpair failed and we were unable to recover it. 00:28:47.283 [2024-07-25 10:17:32.152524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.283 [2024-07-25 10:17:32.152548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:47.283 qpair failed and we were unable to recover it. 00:28:47.283 [2024-07-25 10:17:32.152765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.283 [2024-07-25 10:17:32.152793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:47.283 qpair failed and we were unable to recover it. 00:28:47.283 [2024-07-25 10:17:32.153063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.283 [2024-07-25 10:17:32.153100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:47.283 qpair failed and we were unable to recover it. 00:28:47.283 [2024-07-25 10:17:32.153297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.283 [2024-07-25 10:17:32.153326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:47.283 qpair failed and we were unable to recover it. 00:28:47.283 [2024-07-25 10:17:32.153553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.283 [2024-07-25 10:17:32.153576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:47.283 qpair failed and we were unable to recover it. 00:28:47.283 [2024-07-25 10:17:32.153746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.283 [2024-07-25 10:17:32.153774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:47.283 qpair failed and we were unable to recover it. 00:28:47.283 [2024-07-25 10:17:32.153945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.283 [2024-07-25 10:17:32.153973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:47.283 qpair failed and we were unable to recover it. 00:28:47.283 [2024-07-25 10:17:32.154150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.283 [2024-07-25 10:17:32.154178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:47.283 qpair failed and we were unable to recover it. 00:28:47.283 [2024-07-25 10:17:32.154410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.283 [2024-07-25 10:17:32.154447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:47.283 qpair failed and we were unable to recover it. 
00:28:47.283 [2024-07-25 10:17:32.154632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.283 [2024-07-25 10:17:32.154656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:47.283 qpair failed and we were unable to recover it. 00:28:47.283 [2024-07-25 10:17:32.154892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.283 [2024-07-25 10:17:32.154921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:47.283 qpair failed and we were unable to recover it. 00:28:47.283 [2024-07-25 10:17:32.155179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.283 [2024-07-25 10:17:32.155208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:47.283 qpair failed and we were unable to recover it. 00:28:47.283 [2024-07-25 10:17:32.155442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.283 [2024-07-25 10:17:32.155491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:47.283 qpair failed and we were unable to recover it. 00:28:47.283 [2024-07-25 10:17:32.155702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.283 [2024-07-25 10:17:32.155743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:47.283 qpair failed and we were unable to recover it. 00:28:47.283 [2024-07-25 10:17:32.155960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.283 [2024-07-25 10:17:32.155988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:47.283 qpair failed and we were unable to recover it. 00:28:47.283 [2024-07-25 10:17:32.156229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.283 [2024-07-25 10:17:32.156257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:47.283 qpair failed and we were unable to recover it. 00:28:47.283 [2024-07-25 10:17:32.156592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.283 [2024-07-25 10:17:32.156629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:47.283 qpair failed and we were unable to recover it. 00:28:47.283 [2024-07-25 10:17:32.156811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.283 [2024-07-25 10:17:32.156839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:47.283 qpair failed and we were unable to recover it. 00:28:47.283 [2024-07-25 10:17:32.157077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.283 [2024-07-25 10:17:32.157105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:47.283 qpair failed and we were unable to recover it. 
00:28:47.283 [2024-07-25 10:17:32.157400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.283 [2024-07-25 10:17:32.157437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:47.283 qpair failed and we were unable to recover it. 00:28:47.283 [2024-07-25 10:17:32.157630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.283 [2024-07-25 10:17:32.157653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:47.283 qpair failed and we were unable to recover it. 00:28:47.283 [2024-07-25 10:17:32.157898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.283 [2024-07-25 10:17:32.157927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:47.283 qpair failed and we were unable to recover it. 00:28:47.283 [2024-07-25 10:17:32.158209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.283 [2024-07-25 10:17:32.158240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:47.283 qpair failed and we were unable to recover it. 00:28:47.283 [2024-07-25 10:17:32.158520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.283 [2024-07-25 10:17:32.158549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:47.283 qpair failed and we were unable to recover it. 00:28:47.283 [2024-07-25 10:17:32.158782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.283 [2024-07-25 10:17:32.158805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:47.283 qpair failed and we were unable to recover it. 00:28:47.283 [2024-07-25 10:17:32.158972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.283 [2024-07-25 10:17:32.159001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:47.283 qpair failed and we were unable to recover it. 00:28:47.283 [2024-07-25 10:17:32.159163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.283 [2024-07-25 10:17:32.159191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:47.283 qpair failed and we were unable to recover it. 00:28:47.283 [2024-07-25 10:17:32.159468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.283 [2024-07-25 10:17:32.159496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:47.283 qpair failed and we were unable to recover it. 00:28:47.283 [2024-07-25 10:17:32.159705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.283 [2024-07-25 10:17:32.159728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420 00:28:47.283 qpair failed and we were unable to recover it. 
00:28:47.283 [... connect()/qpair-failure triple keeps repeating around the lines below, timestamps 10:17:32.159896 through 10:17:32.161524, tqpair=0x18a3ea0 ...]
00:28:47.284 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 552804 Killed "${NVMF_APP[@]}" "$@"
00:28:47.284 10:17:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2
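The "Killed" job notice is the point of this test case: target_disconnect.sh kills the running nvmf_tgt (pid 552804) while host qpairs are connected, lets the host-side reconnect attempts fail, then calls disconnect_init to bring a fresh target up. A rough sketch of that sequence, with hypothetical variable names standing in for the script's internals:

    # Hypothetical sketch of the kill/restart step, not the literal script.
    kill -9 "$old_nvmfpid"    # bash then reports: Killed "${NVMF_APP[@]}" "$@"
    # Host reconnects now fail with errno 111 until a new target, subsystem
    # and TCP listener exist again:
    disconnect_init 10.0.0.2  # restarts nvmf_tgt and reconfigures it via rpc.py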
00:28:47.284 10:17:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:28:47.284 10:17:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:28:47.284 10:17:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable
00:28:47.284 10:17:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:28:47.284 [... connect()/qpair-failure triple keeps repeating between the trace lines, timestamps 10:17:32.161713 through 10:17:32.163130 ...]
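nvmfappstart -m 0xF0 passes the mask straight through as the target's reactor core mask: each set bit selects one CPU core, so 0xF0 (binary 1111 0000) pins the new nvmf_tgt's reactors to cores 4-7, typically chosen so target and initiator reactors do not share cores. A quick decode of the mask:

    # Decode an SPDK -m core mask: bit N set means a reactor runs on core N.
    mask=0xF0
    for ((core = 0; core < 8; core++)); do
        (( (mask >> core) & 1 )) && echo "reactor on core $core"
    done
    # prints "reactor on core 4" through "reactor on core 7"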
00:28:47.284 [... connect()/qpair-failure triple repeats, timestamps 10:17:32.163313 through 10:17:32.165339, tqpair=0x18a3ea0 ...]
00:28:47.284 10:17:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=553360
00:28:47.284 10:17:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:28:47.284 10:17:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 553360
00:28:47.284 10:17:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # '[' -z 553360 ']'
00:28:47.284 10:17:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:28:47.284 10:17:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100
00:28:47.284 10:17:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:28:47.284 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:28:47.284 10:17:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable
00:28:47.284 10:17:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:28:47.285 [... throughout these trace lines the connect()/qpair-failure triple keeps repeating, timestamps 10:17:32.165526 through 10:17:32.168505, tqpair=0x18a3ea0 ...]
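waitforlisten 553360 is the autotest helper that blocks until the new target answers RPC on /var/tmp/spdk.sock, polling up to max_retries=100 times. A minimal sketch of the same launch-and-poll pattern (SPDK_DIR is a placeholder; this is not the exact autotest_common.sh code):

    # Launch nvmf_tgt and poll its RPC socket until it is ready (sketch).
    "$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF0 &
    nvmfpid=$!
    rpc_addr=/var/tmp/spdk.sock
    for ((i = 0; i < 100; i++)); do
        # rpc_get_methods only succeeds once the app is up and listening
        "$SPDK_DIR/scripts/rpc.py" -s "$rpc_addr" rpc_get_methods &>/dev/null && break
        sleep 0.1
    done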
00:28:47.285 [... connect()/qpair-failure triple repeats, timestamps 10:17:32.168639 through 10:17:32.182293, tqpair=0x18a3ea0 ...]
00:28:47.287 [2024-07-25 10:17:32.182490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.287 [2024-07-25 10:17:32.182515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420
00:28:47.287 qpair failed and we were unable to recover it.
00:28:47.287 [2024-07-25 10:17:32.182728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.287 [2024-07-25 10:17:32.182765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420
00:28:47.287 qpair failed and we were unable to recover it.
00:28:47.287 [... from 10:17:32.182728 onward the failing qpair is 0x7effa0000b90; the triple repeats through 10:17:32.184399 ...]
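The qpair address switching from 0x18a3ea0 to 0x7effa0000b90 shows two distinct qpair objects failing (the second at a typical mmap-range address), so the host is still allocating fresh qpairs and retrying rather than wedged on a single one. When triaging a capture like this, a one-liner condenses the spam into per-qpair failure counts (build.log is a placeholder for the saved console output):

    # Count connect failures per qpair address in a captured log.
    grep -o 'tqpair=0x[0-9a-f]*' build.log | sort | uniq -c | sort -rn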
00:28:47.287 [... connect()/qpair-failure triple repeats, timestamps 10:17:32.184555 through 10:17:32.196856, tqpair=0x7effa0000b90 ...]
00:28:47.288 [2024-07-25 10:17:32.197072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.289 [2024-07-25 10:17:32.197115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:47.289 qpair failed and we were unable to recover it. 00:28:47.289 [2024-07-25 10:17:32.197347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.289 [2024-07-25 10:17:32.197374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:47.289 qpair failed and we were unable to recover it. 00:28:47.289 [2024-07-25 10:17:32.197553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.289 [2024-07-25 10:17:32.197597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:47.289 qpair failed and we were unable to recover it. 00:28:47.289 [2024-07-25 10:17:32.197764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.289 [2024-07-25 10:17:32.197808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:47.289 qpair failed and we were unable to recover it. 00:28:47.289 [2024-07-25 10:17:32.198013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.289 [2024-07-25 10:17:32.198056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:47.289 qpair failed and we were unable to recover it. 00:28:47.289 [2024-07-25 10:17:32.198220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.289 [2024-07-25 10:17:32.198250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:47.289 qpair failed and we were unable to recover it. 00:28:47.289 [2024-07-25 10:17:32.198435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.289 [2024-07-25 10:17:32.198461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:47.289 qpair failed and we were unable to recover it. 00:28:47.289 [2024-07-25 10:17:32.198627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.289 [2024-07-25 10:17:32.198672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:47.289 qpair failed and we were unable to recover it. 00:28:47.289 [2024-07-25 10:17:32.198873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.289 [2024-07-25 10:17:32.198902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:47.289 qpair failed and we were unable to recover it. 00:28:47.289 [2024-07-25 10:17:32.199073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.289 [2024-07-25 10:17:32.199117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:47.289 qpair failed and we were unable to recover it. 
00:28:47.289 [2024-07-25 10:17:32.199262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.289 [2024-07-25 10:17:32.199288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:47.289 qpair failed and we were unable to recover it. 00:28:47.289 [2024-07-25 10:17:32.199494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.289 [2024-07-25 10:17:32.199537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:47.289 qpair failed and we were unable to recover it. 00:28:47.289 [2024-07-25 10:17:32.199723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.289 [2024-07-25 10:17:32.199766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:47.289 qpair failed and we were unable to recover it. 00:28:47.289 [2024-07-25 10:17:32.199958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.289 [2024-07-25 10:17:32.200001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:47.289 qpair failed and we were unable to recover it. 00:28:47.289 [2024-07-25 10:17:32.200214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.289 [2024-07-25 10:17:32.200258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:47.289 qpair failed and we were unable to recover it. 00:28:47.289 [2024-07-25 10:17:32.200403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.289 [2024-07-25 10:17:32.200433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:47.289 qpair failed and we were unable to recover it. 00:28:47.289 [2024-07-25 10:17:32.200566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.289 [2024-07-25 10:17:32.200611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:47.289 qpair failed and we were unable to recover it. 00:28:47.289 [2024-07-25 10:17:32.200810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.289 [2024-07-25 10:17:32.200839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:47.289 qpair failed and we were unable to recover it. 00:28:47.289 [2024-07-25 10:17:32.201079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.289 [2024-07-25 10:17:32.201122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:47.289 qpair failed and we were unable to recover it. 00:28:47.289 [2024-07-25 10:17:32.201344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.289 [2024-07-25 10:17:32.201370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:47.289 qpair failed and we were unable to recover it. 
00:28:47.289 [2024-07-25 10:17:32.201561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.289 [2024-07-25 10:17:32.201588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:47.289 qpair failed and we were unable to recover it. 00:28:47.289 [2024-07-25 10:17:32.201762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.289 [2024-07-25 10:17:32.201805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:47.289 qpair failed and we were unable to recover it. 00:28:47.289 [2024-07-25 10:17:32.201993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.289 [2024-07-25 10:17:32.202037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:47.289 qpair failed and we were unable to recover it. 00:28:47.289 [2024-07-25 10:17:32.202200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.289 [2024-07-25 10:17:32.202242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:47.289 qpair failed and we were unable to recover it. 00:28:47.289 [2024-07-25 10:17:32.202393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.289 [2024-07-25 10:17:32.202419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:47.289 qpair failed and we were unable to recover it. 00:28:47.289 [2024-07-25 10:17:32.202584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.289 [2024-07-25 10:17:32.202613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:47.289 qpair failed and we were unable to recover it. 00:28:47.289 [2024-07-25 10:17:32.202787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.289 [2024-07-25 10:17:32.202831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:47.289 qpair failed and we were unable to recover it. 00:28:47.289 [2024-07-25 10:17:32.203047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.289 [2024-07-25 10:17:32.203090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:47.289 qpair failed and we were unable to recover it. 00:28:47.289 [2024-07-25 10:17:32.203258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.289 [2024-07-25 10:17:32.203285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:47.289 qpair failed and we were unable to recover it. 00:28:47.289 [2024-07-25 10:17:32.203500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.289 [2024-07-25 10:17:32.203545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:47.289 qpair failed and we were unable to recover it. 
00:28:47.289 [2024-07-25 10:17:32.203673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.289 [2024-07-25 10:17:32.203699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:47.289 qpair failed and we were unable to recover it. 00:28:47.289 [2024-07-25 10:17:32.203846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.289 [2024-07-25 10:17:32.203872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:47.289 qpair failed and we were unable to recover it. 00:28:47.289 [2024-07-25 10:17:32.204017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.289 [2024-07-25 10:17:32.204043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:47.289 qpair failed and we were unable to recover it. 00:28:47.289 [2024-07-25 10:17:32.204248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.289 [2024-07-25 10:17:32.204274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:47.289 qpair failed and we were unable to recover it. 00:28:47.289 [2024-07-25 10:17:32.204406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.289 [2024-07-25 10:17:32.204436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:47.289 qpair failed and we were unable to recover it. 00:28:47.289 [2024-07-25 10:17:32.204620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.289 [2024-07-25 10:17:32.204646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:47.289 qpair failed and we were unable to recover it. 00:28:47.289 [2024-07-25 10:17:32.204846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.289 [2024-07-25 10:17:32.204873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:47.289 qpair failed and we were unable to recover it. 00:28:47.289 [2024-07-25 10:17:32.205033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.289 [2024-07-25 10:17:32.205076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:47.289 qpair failed and we were unable to recover it. 00:28:47.290 [2024-07-25 10:17:32.205216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.290 [2024-07-25 10:17:32.205242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:47.290 qpair failed and we were unable to recover it. 00:28:47.290 [2024-07-25 10:17:32.205439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.290 [2024-07-25 10:17:32.205466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:47.290 qpair failed and we were unable to recover it. 
00:28:47.290 [2024-07-25 10:17:32.205640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.290 [2024-07-25 10:17:32.205667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:47.290 qpair failed and we were unable to recover it. 00:28:47.290 [2024-07-25 10:17:32.205855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.290 [2024-07-25 10:17:32.205899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:47.290 qpair failed and we were unable to recover it. 00:28:47.290 [2024-07-25 10:17:32.206102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.290 [2024-07-25 10:17:32.206144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:47.290 qpair failed and we were unable to recover it. 00:28:47.290 [2024-07-25 10:17:32.206313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.290 [2024-07-25 10:17:32.206339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:47.290 qpair failed and we were unable to recover it. 00:28:47.290 [2024-07-25 10:17:32.206496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.290 [2024-07-25 10:17:32.206522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:47.290 qpair failed and we were unable to recover it. 00:28:47.290 [2024-07-25 10:17:32.206705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.290 [2024-07-25 10:17:32.206752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:47.290 qpair failed and we were unable to recover it. 00:28:47.290 [2024-07-25 10:17:32.206940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.290 [2024-07-25 10:17:32.206983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:47.290 qpair failed and we were unable to recover it. 00:28:47.290 [2024-07-25 10:17:32.207165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.290 [2024-07-25 10:17:32.207208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:47.290 qpair failed and we were unable to recover it. 00:28:47.290 [2024-07-25 10:17:32.207416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.290 [2024-07-25 10:17:32.207455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:47.290 qpair failed and we were unable to recover it. 00:28:47.290 [2024-07-25 10:17:32.207613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.290 [2024-07-25 10:17:32.207639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:47.290 qpair failed and we were unable to recover it. 
00:28:47.290 [2024-07-25 10:17:32.207799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.290 [2024-07-25 10:17:32.207842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:47.290 qpair failed and we were unable to recover it. 00:28:47.290 [2024-07-25 10:17:32.207961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.290 [2024-07-25 10:17:32.208004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:47.290 qpair failed and we were unable to recover it. 00:28:47.290 [2024-07-25 10:17:32.208152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.290 [2024-07-25 10:17:32.208197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:47.290 qpair failed and we were unable to recover it. 00:28:47.290 [2024-07-25 10:17:32.208346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.290 [2024-07-25 10:17:32.208372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:47.290 qpair failed and we were unable to recover it. 00:28:47.290 [2024-07-25 10:17:32.208535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.290 [2024-07-25 10:17:32.208585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:47.290 qpair failed and we were unable to recover it. 00:28:47.290 [2024-07-25 10:17:32.208794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.290 [2024-07-25 10:17:32.208823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:47.290 qpair failed and we were unable to recover it. 00:28:47.290 [2024-07-25 10:17:32.209032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.290 [2024-07-25 10:17:32.209076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:47.290 qpair failed and we were unable to recover it. 00:28:47.290 [2024-07-25 10:17:32.209259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.290 [2024-07-25 10:17:32.209285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:47.290 qpair failed and we were unable to recover it. 00:28:47.290 [2024-07-25 10:17:32.209440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.290 [2024-07-25 10:17:32.209485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:47.290 qpair failed and we were unable to recover it. 00:28:47.290 [2024-07-25 10:17:32.209657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.290 [2024-07-25 10:17:32.209703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:47.290 qpair failed and we were unable to recover it. 
00:28:47.290 [2024-07-25 10:17:32.209933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.290 [2024-07-25 10:17:32.209977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:47.290 qpair failed and we were unable to recover it. 00:28:47.290 [2024-07-25 10:17:32.210173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.290 [2024-07-25 10:17:32.210217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:47.290 qpair failed and we were unable to recover it. 00:28:47.290 [2024-07-25 10:17:32.210367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.290 [2024-07-25 10:17:32.210393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:47.290 qpair failed and we were unable to recover it. 00:28:47.290 [2024-07-25 10:17:32.210557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.290 [2024-07-25 10:17:32.210605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:47.290 qpair failed and we were unable to recover it. 00:28:47.290 [2024-07-25 10:17:32.210813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.290 [2024-07-25 10:17:32.210855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:47.290 qpair failed and we were unable to recover it. 00:28:47.290 [2024-07-25 10:17:32.211011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.290 [2024-07-25 10:17:32.211054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:47.290 qpair failed and we were unable to recover it. 00:28:47.290 [2024-07-25 10:17:32.211256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.290 [2024-07-25 10:17:32.211282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:47.290 qpair failed and we were unable to recover it. 00:28:47.290 [2024-07-25 10:17:32.211426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.290 [2024-07-25 10:17:32.211473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:47.290 qpair failed and we were unable to recover it. 00:28:47.290 [2024-07-25 10:17:32.211707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.290 [2024-07-25 10:17:32.211751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:47.290 qpair failed and we were unable to recover it. 00:28:47.290 [2024-07-25 10:17:32.211971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.290 [2024-07-25 10:17:32.212000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:47.290 qpair failed and we were unable to recover it. 
00:28:47.290 [2024-07-25 10:17:32.212215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.290 [2024-07-25 10:17:32.212258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:47.290 qpair failed and we were unable to recover it. 00:28:47.290 [2024-07-25 10:17:32.212449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.290 [2024-07-25 10:17:32.212476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:47.290 qpair failed and we were unable to recover it. 00:28:47.290 [2024-07-25 10:17:32.212677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.290 [2024-07-25 10:17:32.212706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:47.290 qpair failed and we were unable to recover it. 00:28:47.290 [2024-07-25 10:17:32.212928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.290 [2024-07-25 10:17:32.212972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:47.290 qpair failed and we were unable to recover it. 00:28:47.290 [2024-07-25 10:17:32.213116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.290 [2024-07-25 10:17:32.213159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:47.290 qpair failed and we were unable to recover it. 00:28:47.290 [2024-07-25 10:17:32.213346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.291 [2024-07-25 10:17:32.213371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:47.291 qpair failed and we were unable to recover it. 00:28:47.291 [2024-07-25 10:17:32.213536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.291 [2024-07-25 10:17:32.213563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:47.291 qpair failed and we were unable to recover it. 00:28:47.291 [2024-07-25 10:17:32.213750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.291 [2024-07-25 10:17:32.213796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:47.291 qpair failed and we were unable to recover it. 00:28:47.291 [2024-07-25 10:17:32.213990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.291 [2024-07-25 10:17:32.214034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:47.291 qpair failed and we were unable to recover it. 00:28:47.291 [2024-07-25 10:17:32.214252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.291 [2024-07-25 10:17:32.214295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:47.291 qpair failed and we were unable to recover it. 
00:28:47.291 [2024-07-25 10:17:32.214499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.291 [2024-07-25 10:17:32.214544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:47.291 qpair failed and we were unable to recover it. 00:28:47.291 [2024-07-25 10:17:32.214750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.291 [2024-07-25 10:17:32.214793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:47.291 qpair failed and we were unable to recover it. 00:28:47.291 [2024-07-25 10:17:32.215001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.291 [2024-07-25 10:17:32.215045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:47.291 qpair failed and we were unable to recover it. 00:28:47.291 [2024-07-25 10:17:32.215224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.291 [2024-07-25 10:17:32.215250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:47.291 qpair failed and we were unable to recover it. 00:28:47.291 [2024-07-25 10:17:32.215448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.291 [2024-07-25 10:17:32.215484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:47.291 qpair failed and we were unable to recover it. 00:28:47.291 [2024-07-25 10:17:32.215684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.291 [2024-07-25 10:17:32.215715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:47.291 qpair failed and we were unable to recover it. 00:28:47.291 [2024-07-25 10:17:32.215898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.291 [2024-07-25 10:17:32.215941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:47.291 qpair failed and we were unable to recover it. 00:28:47.291 [2024-07-25 10:17:32.216110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.291 [2024-07-25 10:17:32.216153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:47.291 qpair failed and we were unable to recover it. 00:28:47.291 [2024-07-25 10:17:32.216351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.291 [2024-07-25 10:17:32.216377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:47.291 qpair failed and we were unable to recover it. 00:28:47.291 [2024-07-25 10:17:32.216555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.291 [2024-07-25 10:17:32.216581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:47.291 qpair failed and we were unable to recover it. 
00:28:47.291 [2024-07-25 10:17:32.216735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.291 [2024-07-25 10:17:32.216780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:47.291 qpair failed and we were unable to recover it. 00:28:47.291 [2024-07-25 10:17:32.216946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.291 [2024-07-25 10:17:32.216988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:47.291 qpair failed and we were unable to recover it. 00:28:47.291 [2024-07-25 10:17:32.217182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.291 [2024-07-25 10:17:32.217225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:47.291 qpair failed and we were unable to recover it. 00:28:47.291 [2024-07-25 10:17:32.217433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.291 [2024-07-25 10:17:32.217460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:47.291 qpair failed and we were unable to recover it. 00:28:47.291 [2024-07-25 10:17:32.217635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.291 [2024-07-25 10:17:32.217678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:47.291 qpair failed and we were unable to recover it. 00:28:47.291 [2024-07-25 10:17:32.217861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.291 [2024-07-25 10:17:32.217905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:47.291 qpair failed and we were unable to recover it. 00:28:47.291 [2024-07-25 10:17:32.218121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.291 [2024-07-25 10:17:32.218165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:47.291 qpair failed and we were unable to recover it. 00:28:47.291 [2024-07-25 10:17:32.218315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.291 [2024-07-25 10:17:32.218341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:47.291 qpair failed and we were unable to recover it. 00:28:47.291 [2024-07-25 10:17:32.218547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.291 [2024-07-25 10:17:32.218574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:47.291 qpair failed and we were unable to recover it. 00:28:47.291 [2024-07-25 10:17:32.218786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.291 [2024-07-25 10:17:32.218830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:47.291 qpair failed and we were unable to recover it. 
00:28:47.291 [2024-07-25 10:17:32.219037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.291 [2024-07-25 10:17:32.219080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:47.291 qpair failed and we were unable to recover it. 00:28:47.291 [2024-07-25 10:17:32.219229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.291 [2024-07-25 10:17:32.219255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:47.291 qpair failed and we were unable to recover it. 00:28:47.291 [2024-07-25 10:17:32.219433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.291 [2024-07-25 10:17:32.219459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:47.291 qpair failed and we were unable to recover it. 00:28:47.291 [2024-07-25 10:17:32.219641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.291 [2024-07-25 10:17:32.219667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:47.291 qpair failed and we were unable to recover it. 00:28:47.291 [2024-07-25 10:17:32.219836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.291 [2024-07-25 10:17:32.219879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:47.291 qpair failed and we were unable to recover it. 00:28:47.291 [2024-07-25 10:17:32.220043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.291 [2024-07-25 10:17:32.220086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:47.291 qpair failed and we were unable to recover it. 00:28:47.291 [2024-07-25 10:17:32.220264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.291 [2024-07-25 10:17:32.220306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:47.292 qpair failed and we were unable to recover it. 00:28:47.292 [2024-07-25 10:17:32.220457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.292 [2024-07-25 10:17:32.220484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:47.292 qpair failed and we were unable to recover it. 00:28:47.292 [2024-07-25 10:17:32.220668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.292 [2024-07-25 10:17:32.220711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:47.292 qpair failed and we were unable to recover it. 00:28:47.292 [2024-07-25 10:17:32.220887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.292 [2024-07-25 10:17:32.220930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:47.292 qpair failed and we were unable to recover it. 
00:28:47.292 [2024-07-25 10:17:32.221139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.292 [2024-07-25 10:17:32.221181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:47.292 qpair failed and we were unable to recover it. 00:28:47.292 [2024-07-25 10:17:32.221309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.292 [2024-07-25 10:17:32.221334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa0000b90 with addr=10.0.0.2, port=4420 00:28:47.292 qpair failed and we were unable to recover it. 00:28:47.292 [2024-07-25 10:17:32.221512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.292 [2024-07-25 10:17:32.221562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.292 qpair failed and we were unable to recover it. 00:28:47.292 [2024-07-25 10:17:32.221761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.292 [2024-07-25 10:17:32.221796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.292 qpair failed and we were unable to recover it. 00:28:47.292 [2024-07-25 10:17:32.222063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.292 [2024-07-25 10:17:32.222095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.292 qpair failed and we were unable to recover it. 00:28:47.292 [2024-07-25 10:17:32.222299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.292 [2024-07-25 10:17:32.222331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.292 qpair failed and we were unable to recover it. 00:28:47.292 [2024-07-25 10:17:32.222550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.292 [2024-07-25 10:17:32.222580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.292 qpair failed and we were unable to recover it. 00:28:47.292 [2024-07-25 10:17:32.222764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.292 [2024-07-25 10:17:32.222795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.292 qpair failed and we were unable to recover it. 00:28:47.292 [2024-07-25 10:17:32.222982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.292 [2024-07-25 10:17:32.223013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.292 qpair failed and we were unable to recover it. 00:28:47.292 [2024-07-25 10:17:32.223221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.292 [2024-07-25 10:17:32.223252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.292 qpair failed and we were unable to recover it. 
00:28:47.292 [2024-07-25 10:17:32.223470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.292 [2024-07-25 10:17:32.223498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.292 qpair failed and we were unable to recover it. 00:28:47.292 [2024-07-25 10:17:32.223695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.292 [2024-07-25 10:17:32.223745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.292 qpair failed and we were unable to recover it. 00:28:47.292 [2024-07-25 10:17:32.223964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.292 [2024-07-25 10:17:32.223995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.292 qpair failed and we were unable to recover it. 00:28:47.292 [2024-07-25 10:17:32.224150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.292 [2024-07-25 10:17:32.224180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.292 qpair failed and we were unable to recover it. 00:28:47.292 [2024-07-25 10:17:32.224351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.292 [2024-07-25 10:17:32.224394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.292 qpair failed and we were unable to recover it. 00:28:47.292 [2024-07-25 10:17:32.224595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.292 [2024-07-25 10:17:32.224627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.292 qpair failed and we were unable to recover it. 00:28:47.292 [2024-07-25 10:17:32.224800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.292 [2024-07-25 10:17:32.224831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.292 qpair failed and we were unable to recover it. 00:28:47.292 [2024-07-25 10:17:32.225008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.292 [2024-07-25 10:17:32.225038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.292 qpair failed and we were unable to recover it. 00:28:47.292 [2024-07-25 10:17:32.225254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.292 [2024-07-25 10:17:32.225285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.292 qpair failed and we were unable to recover it. 00:28:47.292 [2024-07-25 10:17:32.225456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.292 [2024-07-25 10:17:32.225502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.292 qpair failed and we were unable to recover it. 
00:28:47.292 [2024-07-25 10:17:32.225693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.292 [2024-07-25 10:17:32.225739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.292 qpair failed and we were unable to recover it. 00:28:47.292 [2024-07-25 10:17:32.225938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.292 [2024-07-25 10:17:32.225968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.292 qpair failed and we were unable to recover it. 00:28:47.292 [2024-07-25 10:17:32.226148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.292 [2024-07-25 10:17:32.226181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.292 qpair failed and we were unable to recover it. 00:28:47.292 [2024-07-25 10:17:32.226359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.292 [2024-07-25 10:17:32.226389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.292 qpair failed and we were unable to recover it. 00:28:47.292 [2024-07-25 10:17:32.226603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.292 [2024-07-25 10:17:32.226631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.292 qpair failed and we were unable to recover it. 00:28:47.292 [2024-07-25 10:17:32.226793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.292 [2024-07-25 10:17:32.226824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.292 qpair failed and we were unable to recover it. 00:28:47.292 [2024-07-25 10:17:32.226839] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:28:47.292 [2024-07-25 10:17:32.226950] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:47.292 [2024-07-25 10:17:32.227006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.292 [2024-07-25 10:17:32.227034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.292 qpair failed and we were unable to recover it. 00:28:47.292 [2024-07-25 10:17:32.227242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.292 [2024-07-25 10:17:32.227271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.292 qpair failed and we were unable to recover it. 00:28:47.292 [2024-07-25 10:17:32.227496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.292 [2024-07-25 10:17:32.227524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.292 qpair failed and we were unable to recover it. 
00:28:47.296 EAL: No free 2048 kB hugepages reported on node 1
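The lone EAL line above is DPDK reporting that NUMA node 1 has no free 2 MB hugepages. SPDK's env layer allocates its DMA-able memory from hugepages, so a shortage on one node is worth noticing even when initialization proceeds on the other node. A quick standalone check of the same per-node counter, assuming the usual Linux sysfs layout:

    /* Sketch: print free 2048 kB hugepages per NUMA node, the counter
     * behind "EAL: No free 2048 kB hugepages reported on node 1". */
    #include <stdio.h>

    int main(void)
    {
        for (int node = 0; node < 4; node++) {
            char path[128];
            snprintf(path, sizeof(path),
                     "/sys/devices/system/node/node%d/hugepages/"
                     "hugepages-2048kB/free_hugepages", node);
            FILE *f = fopen(path, "r");
            if (!f)
                continue;               /* node not present on this box */
            long free_pages = 0;
            if (fscanf(f, "%ld", &free_pages) == 1)
                printf("node %d: %ld free 2048 kB hugepages\n",
                       node, free_pages);
            fclose(f);
        }
        return 0;
    }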
00:28:47.298 (reconnect attempts keep failing with the same errno = 111 triplet through 10:17:32.288699)
00:28:47.298 [2024-07-25 10:17:32.288901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.298 [2024-07-25 10:17:32.288930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.298 qpair failed and we were unable to recover it. 00:28:47.298 [2024-07-25 10:17:32.289101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.298 [2024-07-25 10:17:32.289130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.298 qpair failed and we were unable to recover it. 00:28:47.298 [2024-07-25 10:17:32.289299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.298 [2024-07-25 10:17:32.289327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.298 qpair failed and we were unable to recover it. 00:28:47.298 [2024-07-25 10:17:32.289509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.298 [2024-07-25 10:17:32.289539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.298 qpair failed and we were unable to recover it. 00:28:47.298 [2024-07-25 10:17:32.289697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.298 [2024-07-25 10:17:32.289727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.298 qpair failed and we were unable to recover it. 00:28:47.298 [2024-07-25 10:17:32.289892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.298 [2024-07-25 10:17:32.289921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.298 qpair failed and we were unable to recover it. 00:28:47.298 [2024-07-25 10:17:32.290120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.298 [2024-07-25 10:17:32.290149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.298 qpair failed and we were unable to recover it. 00:28:47.298 [2024-07-25 10:17:32.290332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.298 [2024-07-25 10:17:32.290361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.298 qpair failed and we were unable to recover it. 00:28:47.298 [2024-07-25 10:17:32.290522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.298 [2024-07-25 10:17:32.290552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.298 qpair failed and we were unable to recover it. 00:28:47.298 [2024-07-25 10:17:32.290758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.298 [2024-07-25 10:17:32.290787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.298 qpair failed and we were unable to recover it. 
00:28:47.298 [2024-07-25 10:17:32.290990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.298 [2024-07-25 10:17:32.291019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.298 qpair failed and we were unable to recover it. 00:28:47.298 [2024-07-25 10:17:32.291195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.298 [2024-07-25 10:17:32.291223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.298 qpair failed and we were unable to recover it. 00:28:47.298 [2024-07-25 10:17:32.291409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.298 [2024-07-25 10:17:32.291444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.298 qpair failed and we were unable to recover it. 00:28:47.298 [2024-07-25 10:17:32.291654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.298 [2024-07-25 10:17:32.291682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.298 qpair failed and we were unable to recover it. 00:28:47.298 [2024-07-25 10:17:32.291863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.298 [2024-07-25 10:17:32.291892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.298 qpair failed and we were unable to recover it. 00:28:47.298 [2024-07-25 10:17:32.292100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.298 [2024-07-25 10:17:32.292128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.298 qpair failed and we were unable to recover it. 00:28:47.298 [2024-07-25 10:17:32.292312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.298 [2024-07-25 10:17:32.292342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.298 qpair failed and we were unable to recover it. 00:28:47.298 [2024-07-25 10:17:32.292556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.298 [2024-07-25 10:17:32.292586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.298 qpair failed and we were unable to recover it. 00:28:47.298 [2024-07-25 10:17:32.292725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.298 [2024-07-25 10:17:32.292753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.298 qpair failed and we were unable to recover it. 00:28:47.298 [2024-07-25 10:17:32.292969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.298 [2024-07-25 10:17:32.292998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.298 qpair failed and we were unable to recover it. 
00:28:47.298 [2024-07-25 10:17:32.293211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.298 [2024-07-25 10:17:32.293240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.298 qpair failed and we were unable to recover it. 00:28:47.298 [2024-07-25 10:17:32.293445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.298 [2024-07-25 10:17:32.293474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.298 qpair failed and we were unable to recover it. 00:28:47.298 [2024-07-25 10:17:32.293613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.298 [2024-07-25 10:17:32.293641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.298 qpair failed and we were unable to recover it. 00:28:47.298 [2024-07-25 10:17:32.293828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.298 [2024-07-25 10:17:32.293860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.298 qpair failed and we were unable to recover it. 00:28:47.299 [2024-07-25 10:17:32.294069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.299 [2024-07-25 10:17:32.294098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.299 qpair failed and we were unable to recover it. 00:28:47.299 [2024-07-25 10:17:32.294276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.299 [2024-07-25 10:17:32.294305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.299 qpair failed and we were unable to recover it. 00:28:47.299 [2024-07-25 10:17:32.294511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.299 [2024-07-25 10:17:32.294540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.299 qpair failed and we were unable to recover it. 00:28:47.299 [2024-07-25 10:17:32.294720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.299 [2024-07-25 10:17:32.294749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.299 qpair failed and we were unable to recover it. 00:28:47.299 [2024-07-25 10:17:32.294904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.299 [2024-07-25 10:17:32.294934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.299 qpair failed and we were unable to recover it. 00:28:47.299 [2024-07-25 10:17:32.295104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.299 [2024-07-25 10:17:32.295132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.299 qpair failed and we were unable to recover it. 
00:28:47.299 [2024-07-25 10:17:32.295305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.299 [2024-07-25 10:17:32.295333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.299 qpair failed and we were unable to recover it. 00:28:47.299 [2024-07-25 10:17:32.295534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.299 [2024-07-25 10:17:32.295564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.299 qpair failed and we were unable to recover it. 00:28:47.299 [2024-07-25 10:17:32.295744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.299 [2024-07-25 10:17:32.295773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.299 qpair failed and we were unable to recover it. 00:28:47.299 [2024-07-25 10:17:32.295943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.299 [2024-07-25 10:17:32.295972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.299 qpair failed and we were unable to recover it. 00:28:47.299 [2024-07-25 10:17:32.296139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.299 [2024-07-25 10:17:32.296168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.299 qpair failed and we were unable to recover it. 00:28:47.299 [2024-07-25 10:17:32.296339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.299 [2024-07-25 10:17:32.296368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.299 qpair failed and we were unable to recover it. 00:28:47.299 [2024-07-25 10:17:32.296540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.299 [2024-07-25 10:17:32.296570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.299 qpair failed and we were unable to recover it. 00:28:47.299 [2024-07-25 10:17:32.296736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.299 [2024-07-25 10:17:32.296764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.299 qpair failed and we were unable to recover it. 00:28:47.299 [2024-07-25 10:17:32.296936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.299 [2024-07-25 10:17:32.296969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.299 qpair failed and we were unable to recover it. 00:28:47.299 [2024-07-25 10:17:32.297176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.299 [2024-07-25 10:17:32.297205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.299 qpair failed and we were unable to recover it. 
00:28:47.299 [2024-07-25 10:17:32.297368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.299 [2024-07-25 10:17:32.297397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.299 qpair failed and we were unable to recover it. 00:28:47.299 [2024-07-25 10:17:32.297614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.299 [2024-07-25 10:17:32.297643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.299 qpair failed and we were unable to recover it. 00:28:47.299 [2024-07-25 10:17:32.297785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.299 [2024-07-25 10:17:32.297814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.299 qpair failed and we were unable to recover it. 00:28:47.299 [2024-07-25 10:17:32.298025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.299 [2024-07-25 10:17:32.298053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.299 qpair failed and we were unable to recover it. 00:28:47.299 [2024-07-25 10:17:32.298233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.299 [2024-07-25 10:17:32.298263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.299 qpair failed and we were unable to recover it. 00:28:47.299 [2024-07-25 10:17:32.298397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.299 [2024-07-25 10:17:32.298425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.299 qpair failed and we were unable to recover it. 00:28:47.299 [2024-07-25 10:17:32.298622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.299 [2024-07-25 10:17:32.298651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.299 qpair failed and we were unable to recover it. 00:28:47.299 [2024-07-25 10:17:32.298828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.299 [2024-07-25 10:17:32.298857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.299 qpair failed and we were unable to recover it. 00:28:47.299 [2024-07-25 10:17:32.299062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.299 [2024-07-25 10:17:32.299091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.299 qpair failed and we were unable to recover it. 00:28:47.299 [2024-07-25 10:17:32.299268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.299 [2024-07-25 10:17:32.299297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.299 qpair failed and we were unable to recover it. 
00:28:47.299 [2024-07-25 10:17:32.299476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.299 [2024-07-25 10:17:32.299505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.299 qpair failed and we were unable to recover it. 00:28:47.299 [2024-07-25 10:17:32.299682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.299 [2024-07-25 10:17:32.299711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.299 qpair failed and we were unable to recover it. 00:28:47.299 [2024-07-25 10:17:32.299920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.299 [2024-07-25 10:17:32.299950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.299 qpair failed and we were unable to recover it. 00:28:47.299 [2024-07-25 10:17:32.300156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.299 [2024-07-25 10:17:32.300185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.299 qpair failed and we were unable to recover it. 00:28:47.299 [2024-07-25 10:17:32.300343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.299 [2024-07-25 10:17:32.300371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.299 qpair failed and we were unable to recover it. 00:28:47.299 [2024-07-25 10:17:32.300527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.299 [2024-07-25 10:17:32.300556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.299 qpair failed and we were unable to recover it. 00:28:47.299 [2024-07-25 10:17:32.300746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.299 [2024-07-25 10:17:32.300775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.299 qpair failed and we were unable to recover it. 00:28:47.299 [2024-07-25 10:17:32.300989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.299 [2024-07-25 10:17:32.301018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.299 qpair failed and we were unable to recover it. 00:28:47.299 [2024-07-25 10:17:32.301187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.299 [2024-07-25 10:17:32.301216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.299 qpair failed and we were unable to recover it. 00:28:47.299 [2024-07-25 10:17:32.301416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.299 [2024-07-25 10:17:32.301450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.299 qpair failed and we were unable to recover it. 
00:28:47.299 [2024-07-25 10:17:32.301601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.299 [2024-07-25 10:17:32.301630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.299 qpair failed and we were unable to recover it. 00:28:47.299 [2024-07-25 10:17:32.301831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.299 [2024-07-25 10:17:32.301860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.300 qpair failed and we were unable to recover it. 00:28:47.300 [2024-07-25 10:17:32.302025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.300 [2024-07-25 10:17:32.302054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.300 qpair failed and we were unable to recover it. 00:28:47.300 [2024-07-25 10:17:32.302214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.300 [2024-07-25 10:17:32.302244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.300 qpair failed and we were unable to recover it. 00:28:47.300 [2024-07-25 10:17:32.302389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.300 [2024-07-25 10:17:32.302418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.300 qpair failed and we were unable to recover it. 00:28:47.300 [2024-07-25 10:17:32.302600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.300 [2024-07-25 10:17:32.302633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.300 qpair failed and we were unable to recover it. 00:28:47.300 [2024-07-25 10:17:32.302784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.300 [2024-07-25 10:17:32.302812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.300 qpair failed and we were unable to recover it. 00:28:47.300 [2024-07-25 10:17:32.303000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.300 [2024-07-25 10:17:32.303029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.300 qpair failed and we were unable to recover it. 00:28:47.300 [2024-07-25 10:17:32.303198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.300 [2024-07-25 10:17:32.303227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.300 qpair failed and we were unable to recover it. 00:28:47.300 [2024-07-25 10:17:32.303442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.300 [2024-07-25 10:17:32.303471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.300 qpair failed and we were unable to recover it. 
00:28:47.300 [2024-07-25 10:17:32.303658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.300 [2024-07-25 10:17:32.303686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.300 qpair failed and we were unable to recover it. 00:28:47.300 [2024-07-25 10:17:32.303879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.300 [2024-07-25 10:17:32.303908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.300 qpair failed and we were unable to recover it. 00:28:47.300 [2024-07-25 10:17:32.304082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.300 [2024-07-25 10:17:32.304111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.300 qpair failed and we were unable to recover it. 00:28:47.300 [2024-07-25 10:17:32.304309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.300 [2024-07-25 10:17:32.304337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.300 qpair failed and we were unable to recover it. 00:28:47.300 [2024-07-25 10:17:32.304541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.300 [2024-07-25 10:17:32.304571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.300 qpair failed and we were unable to recover it. 00:28:47.300 [2024-07-25 10:17:32.304742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.300 [2024-07-25 10:17:32.304771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.300 qpair failed and we were unable to recover it. 00:28:47.300 [2024-07-25 10:17:32.304978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.300 [2024-07-25 10:17:32.305008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.300 qpair failed and we were unable to recover it. 00:28:47.300 [2024-07-25 10:17:32.305215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.300 [2024-07-25 10:17:32.305244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.300 qpair failed and we were unable to recover it. 00:28:47.300 [2024-07-25 10:17:32.305419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.300 [2024-07-25 10:17:32.305455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.300 qpair failed and we were unable to recover it. 00:28:47.300 [2024-07-25 10:17:32.305607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.300 [2024-07-25 10:17:32.305637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.300 qpair failed and we were unable to recover it. 
00:28:47.300 [2024-07-25 10:17:32.305783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.300 [2024-07-25 10:17:32.305812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.300 qpair failed and we were unable to recover it. 00:28:47.300 [2024-07-25 10:17:32.306024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.300 [2024-07-25 10:17:32.306053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.300 qpair failed and we were unable to recover it. 00:28:47.300 [2024-07-25 10:17:32.306230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.300 [2024-07-25 10:17:32.306259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.300 qpair failed and we were unable to recover it. 00:28:47.300 [2024-07-25 10:17:32.306462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.300 [2024-07-25 10:17:32.306493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.300 qpair failed and we were unable to recover it. 00:28:47.300 [2024-07-25 10:17:32.306639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.300 [2024-07-25 10:17:32.306669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.300 qpair failed and we were unable to recover it. 00:28:47.300 [2024-07-25 10:17:32.306840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.300 [2024-07-25 10:17:32.306870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.300 qpair failed and we were unable to recover it. 00:28:47.300 [2024-07-25 10:17:32.307049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.300 [2024-07-25 10:17:32.307078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.300 qpair failed and we were unable to recover it. 00:28:47.300 [2024-07-25 10:17:32.307278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.300 [2024-07-25 10:17:32.307308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.300 qpair failed and we were unable to recover it. 00:28:47.300 [2024-07-25 10:17:32.307517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.300 [2024-07-25 10:17:32.307548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.300 qpair failed and we were unable to recover it. 00:28:47.300 [2024-07-25 10:17:32.307732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.300 [2024-07-25 10:17:32.307762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.300 qpair failed and we were unable to recover it. 
00:28:47.300 [2024-07-25 10:17:32.307963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.300 [2024-07-25 10:17:32.307992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.300 qpair failed and we were unable to recover it. 00:28:47.300 [2024-07-25 10:17:32.308162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.300 [2024-07-25 10:17:32.308192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.300 qpair failed and we were unable to recover it. 00:28:47.300 [2024-07-25 10:17:32.308411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.300 [2024-07-25 10:17:32.308453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.300 qpair failed and we were unable to recover it. 00:28:47.300 [2024-07-25 10:17:32.308655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.300 [2024-07-25 10:17:32.308684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.301 qpair failed and we were unable to recover it. 00:28:47.301 [2024-07-25 10:17:32.308888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.301 [2024-07-25 10:17:32.308917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.301 qpair failed and we were unable to recover it. 00:28:47.301 [2024-07-25 10:17:32.309086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.301 [2024-07-25 10:17:32.309115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.301 qpair failed and we were unable to recover it. 00:28:47.301 [2024-07-25 10:17:32.309291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.301 [2024-07-25 10:17:32.309321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.301 qpair failed and we were unable to recover it. 00:28:47.301 [2024-07-25 10:17:32.309522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.301 [2024-07-25 10:17:32.309553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.301 qpair failed and we were unable to recover it. 00:28:47.301 [2024-07-25 10:17:32.309732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.301 [2024-07-25 10:17:32.309761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.301 qpair failed and we were unable to recover it. 00:28:47.301 [2024-07-25 10:17:32.309973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.301 [2024-07-25 10:17:32.310002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.301 qpair failed and we were unable to recover it. 
00:28:47.301 [2024-07-25 10:17:32.310188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.301 [2024-07-25 10:17:32.310222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.301 qpair failed and we were unable to recover it. 00:28:47.301 [2024-07-25 10:17:32.310378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.301 [2024-07-25 10:17:32.310407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.301 qpair failed and we were unable to recover it. 00:28:47.301 [2024-07-25 10:17:32.310622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.301 [2024-07-25 10:17:32.310652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.301 qpair failed and we were unable to recover it. 00:28:47.301 [2024-07-25 10:17:32.310825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.301 [2024-07-25 10:17:32.310854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.301 qpair failed and we were unable to recover it. 00:28:47.301 [2024-07-25 10:17:32.311060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.301 [2024-07-25 10:17:32.311090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.301 qpair failed and we were unable to recover it. 00:28:47.301 [2024-07-25 10:17:32.311260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.301 [2024-07-25 10:17:32.311294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.301 qpair failed and we were unable to recover it. 00:28:47.301 [2024-07-25 10:17:32.311448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.301 [2024-07-25 10:17:32.311478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.301 qpair failed and we were unable to recover it. 00:28:47.301 [2024-07-25 10:17:32.311617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.301 [2024-07-25 10:17:32.311646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.301 qpair failed and we were unable to recover it. 00:28:47.301 [2024-07-25 10:17:32.311860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.301 [2024-07-25 10:17:32.311889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.301 qpair failed and we were unable to recover it. 00:28:47.301 [2024-07-25 10:17:32.312014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.301 [2024-07-25 10:17:32.312052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.301 qpair failed and we were unable to recover it. 
00:28:47.301 [2024-07-25 10:17:32.312236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.301 [2024-07-25 10:17:32.312265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.301 qpair failed and we were unable to recover it. 00:28:47.301 [2024-07-25 10:17:32.312452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.301 [2024-07-25 10:17:32.312482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.301 qpair failed and we were unable to recover it. 00:28:47.301 [2024-07-25 10:17:32.312687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.301 [2024-07-25 10:17:32.312716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.301 qpair failed and we were unable to recover it. 00:28:47.301 [2024-07-25 10:17:32.312917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.301 [2024-07-25 10:17:32.312946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.301 qpair failed and we were unable to recover it. 00:28:47.301 [2024-07-25 10:17:32.313143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.301 [2024-07-25 10:17:32.313172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.301 qpair failed and we were unable to recover it. 00:28:47.301 [2024-07-25 10:17:32.313374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.301 [2024-07-25 10:17:32.313402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.301 qpair failed and we were unable to recover it. 00:28:47.301 [2024-07-25 10:17:32.313601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.301 [2024-07-25 10:17:32.313631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.301 qpair failed and we were unable to recover it. 00:28:47.301 [2024-07-25 10:17:32.313772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.301 [2024-07-25 10:17:32.313801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.301 qpair failed and we were unable to recover it. 00:28:47.301 [2024-07-25 10:17:32.313977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.301 [2024-07-25 10:17:32.314006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.301 qpair failed and we were unable to recover it. 00:28:47.301 [2024-07-25 10:17:32.314211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.301 [2024-07-25 10:17:32.314240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.301 qpair failed and we were unable to recover it. 
00:28:47.301 [2024-07-25 10:17:32.314442] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4
00:28:47.302 [2024-07-25 10:17:32.325054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.303 [2024-07-25 10:17:32.325083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:47.303 qpair failed and we were unable to recover it.
00:28:47.303 [2024-07-25 10:17:32.325257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.303 [2024-07-25 10:17:32.325286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.303 qpair failed and we were unable to recover it. 00:28:47.303 [2024-07-25 10:17:32.325544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.303 [2024-07-25 10:17:32.325574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.303 qpair failed and we were unable to recover it. 00:28:47.303 [2024-07-25 10:17:32.325781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.303 [2024-07-25 10:17:32.325810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.303 qpair failed and we were unable to recover it. 00:28:47.303 [2024-07-25 10:17:32.326006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.303 [2024-07-25 10:17:32.326036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.303 qpair failed and we were unable to recover it. 00:28:47.303 [2024-07-25 10:17:32.326239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.303 [2024-07-25 10:17:32.326268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.303 qpair failed and we were unable to recover it. 00:28:47.303 [2024-07-25 10:17:32.326447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.303 [2024-07-25 10:17:32.326477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.303 qpair failed and we were unable to recover it. 00:28:47.303 [2024-07-25 10:17:32.326621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.303 [2024-07-25 10:17:32.326650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.303 qpair failed and we were unable to recover it. 00:28:47.303 [2024-07-25 10:17:32.326857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.303 [2024-07-25 10:17:32.326886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.303 qpair failed and we were unable to recover it. 00:28:47.303 [2024-07-25 10:17:32.327087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.303 [2024-07-25 10:17:32.327116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.303 qpair failed and we were unable to recover it. 00:28:47.303 [2024-07-25 10:17:32.327277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.303 [2024-07-25 10:17:32.327305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.303 qpair failed and we were unable to recover it. 
00:28:47.303 [2024-07-25 10:17:32.327473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.303 [2024-07-25 10:17:32.327503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.303 qpair failed and we were unable to recover it. 00:28:47.303 [2024-07-25 10:17:32.327713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.303 [2024-07-25 10:17:32.327742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.303 qpair failed and we were unable to recover it. 00:28:47.303 [2024-07-25 10:17:32.327927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.303 [2024-07-25 10:17:32.327956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.303 qpair failed and we were unable to recover it. 00:28:47.303 [2024-07-25 10:17:32.328222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.303 [2024-07-25 10:17:32.328251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.303 qpair failed and we were unable to recover it. 00:28:47.303 [2024-07-25 10:17:32.328419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.303 [2024-07-25 10:17:32.328474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.303 qpair failed and we were unable to recover it. 00:28:47.303 [2024-07-25 10:17:32.328650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.303 [2024-07-25 10:17:32.328680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.303 qpair failed and we were unable to recover it. 00:28:47.303 [2024-07-25 10:17:32.328893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.303 [2024-07-25 10:17:32.328922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.303 qpair failed and we were unable to recover it. 00:28:47.303 [2024-07-25 10:17:32.329091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.303 [2024-07-25 10:17:32.329120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.303 qpair failed and we were unable to recover it. 00:28:47.303 [2024-07-25 10:17:32.329321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.303 [2024-07-25 10:17:32.329350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.303 qpair failed and we were unable to recover it. 00:28:47.303 [2024-07-25 10:17:32.329531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.303 [2024-07-25 10:17:32.329561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.303 qpair failed and we were unable to recover it. 
00:28:47.303 [2024-07-25 10:17:32.329733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.303 [2024-07-25 10:17:32.329762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.303 qpair failed and we were unable to recover it. 00:28:47.303 [2024-07-25 10:17:32.329967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.303 [2024-07-25 10:17:32.329996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.303 qpair failed and we were unable to recover it. 00:28:47.303 [2024-07-25 10:17:32.330196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.303 [2024-07-25 10:17:32.330225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.303 qpair failed and we were unable to recover it. 00:28:47.303 [2024-07-25 10:17:32.330400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.303 [2024-07-25 10:17:32.330435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.303 qpair failed and we were unable to recover it. 00:28:47.303 [2024-07-25 10:17:32.330637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.303 [2024-07-25 10:17:32.330666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.303 qpair failed and we were unable to recover it. 00:28:47.303 [2024-07-25 10:17:32.330868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.303 [2024-07-25 10:17:32.330897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.303 qpair failed and we were unable to recover it. 00:28:47.303 [2024-07-25 10:17:32.331113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.303 [2024-07-25 10:17:32.331141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.303 qpair failed and we were unable to recover it. 00:28:47.303 [2024-07-25 10:17:32.331355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.303 [2024-07-25 10:17:32.331384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.303 qpair failed and we were unable to recover it. 00:28:47.303 [2024-07-25 10:17:32.331595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.303 [2024-07-25 10:17:32.331625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.303 qpair failed and we were unable to recover it. 00:28:47.303 [2024-07-25 10:17:32.331794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.303 [2024-07-25 10:17:32.331829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.303 qpair failed and we were unable to recover it. 
00:28:47.303 [2024-07-25 10:17:32.332039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.303 [2024-07-25 10:17:32.332068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.303 qpair failed and we were unable to recover it. 00:28:47.303 [2024-07-25 10:17:32.332251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.303 [2024-07-25 10:17:32.332281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.303 qpair failed and we were unable to recover it. 00:28:47.303 [2024-07-25 10:17:32.332486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.303 [2024-07-25 10:17:32.332516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.303 qpair failed and we were unable to recover it. 00:28:47.303 [2024-07-25 10:17:32.332694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.303 [2024-07-25 10:17:32.332724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.303 qpair failed and we were unable to recover it. 00:28:47.303 [2024-07-25 10:17:32.332903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.303 [2024-07-25 10:17:32.332932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.303 qpair failed and we were unable to recover it. 00:28:47.303 [2024-07-25 10:17:32.333111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.303 [2024-07-25 10:17:32.333141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.303 qpair failed and we were unable to recover it. 00:28:47.303 [2024-07-25 10:17:32.333319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.303 [2024-07-25 10:17:32.333348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.303 qpair failed and we were unable to recover it. 00:28:47.303 [2024-07-25 10:17:32.333550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.304 [2024-07-25 10:17:32.333581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.304 qpair failed and we were unable to recover it. 00:28:47.304 [2024-07-25 10:17:32.333727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.304 [2024-07-25 10:17:32.333756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.304 qpair failed and we were unable to recover it. 00:28:47.304 [2024-07-25 10:17:32.333941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.304 [2024-07-25 10:17:32.333971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.304 qpair failed and we were unable to recover it. 
00:28:47.304 [2024-07-25 10:17:32.334147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.304 [2024-07-25 10:17:32.334176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.304 qpair failed and we were unable to recover it. 00:28:47.304 [2024-07-25 10:17:32.334313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.304 [2024-07-25 10:17:32.334342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.304 qpair failed and we were unable to recover it. 00:28:47.304 [2024-07-25 10:17:32.334545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.304 [2024-07-25 10:17:32.334574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.304 qpair failed and we were unable to recover it. 00:28:47.304 [2024-07-25 10:17:32.334783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.304 [2024-07-25 10:17:32.334813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.304 qpair failed and we were unable to recover it. 00:28:47.304 [2024-07-25 10:17:32.334988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.304 [2024-07-25 10:17:32.335017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.304 qpair failed and we were unable to recover it. 00:28:47.304 [2024-07-25 10:17:32.335232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.304 [2024-07-25 10:17:32.335261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.304 qpair failed and we were unable to recover it. 00:28:47.304 [2024-07-25 10:17:32.335536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.304 [2024-07-25 10:17:32.335565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.304 qpair failed and we were unable to recover it. 00:28:47.304 [2024-07-25 10:17:32.335747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.304 [2024-07-25 10:17:32.335777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.304 qpair failed and we were unable to recover it. 00:28:47.304 [2024-07-25 10:17:32.335982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.304 [2024-07-25 10:17:32.336011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.304 qpair failed and we were unable to recover it. 00:28:47.304 [2024-07-25 10:17:32.336188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.304 [2024-07-25 10:17:32.336217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.304 qpair failed and we were unable to recover it. 
00:28:47.304 [2024-07-25 10:17:32.336363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.304 [2024-07-25 10:17:32.336393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.304 qpair failed and we were unable to recover it. 00:28:47.304 [2024-07-25 10:17:32.336579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.304 [2024-07-25 10:17:32.336609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.304 qpair failed and we were unable to recover it. 00:28:47.304 [2024-07-25 10:17:32.336792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.304 [2024-07-25 10:17:32.336821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.304 qpair failed and we were unable to recover it. 00:28:47.304 [2024-07-25 10:17:32.336973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.304 [2024-07-25 10:17:32.337001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.304 qpair failed and we were unable to recover it. 00:28:47.304 [2024-07-25 10:17:32.337180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.304 [2024-07-25 10:17:32.337210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.304 qpair failed and we were unable to recover it. 00:28:47.304 [2024-07-25 10:17:32.337417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.304 [2024-07-25 10:17:32.337470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.304 qpair failed and we were unable to recover it. 00:28:47.304 [2024-07-25 10:17:32.337688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.304 [2024-07-25 10:17:32.337718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.304 qpair failed and we were unable to recover it. 00:28:47.304 [2024-07-25 10:17:32.337907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.304 [2024-07-25 10:17:32.337937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.304 qpair failed and we were unable to recover it. 00:28:47.304 [2024-07-25 10:17:32.338126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.304 [2024-07-25 10:17:32.338156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.304 qpair failed and we were unable to recover it. 00:28:47.304 [2024-07-25 10:17:32.338324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.304 [2024-07-25 10:17:32.338364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.304 qpair failed and we were unable to recover it. 
00:28:47.304 [2024-07-25 10:17:32.338504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.304 [2024-07-25 10:17:32.338535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.304 qpair failed and we were unable to recover it. 00:28:47.304 [2024-07-25 10:17:32.338701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.304 [2024-07-25 10:17:32.338730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.304 qpair failed and we were unable to recover it. 00:28:47.304 [2024-07-25 10:17:32.338911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.304 [2024-07-25 10:17:32.338940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.304 qpair failed and we were unable to recover it. 00:28:47.304 [2024-07-25 10:17:32.339146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.304 [2024-07-25 10:17:32.339175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.304 qpair failed and we were unable to recover it. 00:28:47.304 [2024-07-25 10:17:32.339354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.304 [2024-07-25 10:17:32.339383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.304 qpair failed and we were unable to recover it. 00:28:47.304 [2024-07-25 10:17:32.339588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.304 [2024-07-25 10:17:32.339617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.304 qpair failed and we were unable to recover it. 00:28:47.304 [2024-07-25 10:17:32.339795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.304 [2024-07-25 10:17:32.339824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.304 qpair failed and we were unable to recover it. 00:28:47.304 [2024-07-25 10:17:32.339971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.304 [2024-07-25 10:17:32.340001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.304 qpair failed and we were unable to recover it. 00:28:47.304 [2024-07-25 10:17:32.340212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.304 [2024-07-25 10:17:32.340240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.304 qpair failed and we were unable to recover it. 00:28:47.304 [2024-07-25 10:17:32.340392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.304 [2024-07-25 10:17:32.340426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.304 qpair failed and we were unable to recover it. 
00:28:47.304 [2024-07-25 10:17:32.340617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.304 [2024-07-25 10:17:32.340647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.304 qpair failed and we were unable to recover it. 00:28:47.304 [2024-07-25 10:17:32.340839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.304 [2024-07-25 10:17:32.340877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.304 qpair failed and we were unable to recover it. 00:28:47.304 [2024-07-25 10:17:32.341063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.304 [2024-07-25 10:17:32.341092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.304 qpair failed and we were unable to recover it. 00:28:47.304 [2024-07-25 10:17:32.341294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.304 [2024-07-25 10:17:32.341323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.304 qpair failed and we were unable to recover it. 00:28:47.304 [2024-07-25 10:17:32.341498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.304 [2024-07-25 10:17:32.341528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.304 qpair failed and we were unable to recover it. 00:28:47.305 [2024-07-25 10:17:32.341736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.305 [2024-07-25 10:17:32.341766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.305 qpair failed and we were unable to recover it. 00:28:47.305 [2024-07-25 10:17:32.341943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.305 [2024-07-25 10:17:32.341972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.305 qpair failed and we were unable to recover it. 00:28:47.305 [2024-07-25 10:17:32.342174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.305 [2024-07-25 10:17:32.342203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.305 qpair failed and we were unable to recover it. 00:28:47.305 [2024-07-25 10:17:32.342376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.305 [2024-07-25 10:17:32.342404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.305 qpair failed and we were unable to recover it. 00:28:47.305 [2024-07-25 10:17:32.342659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.305 [2024-07-25 10:17:32.342716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff98000b90 with addr=10.0.0.2, port=4420 00:28:47.305 qpair failed and we were unable to recover it. 
00:28:47.305 [2024-07-25 10:17:32.342938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.305 [2024-07-25 10:17:32.342984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff98000b90 with addr=10.0.0.2, port=4420 00:28:47.305 qpair failed and we were unable to recover it. 00:28:47.305 [2024-07-25 10:17:32.343225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.305 [2024-07-25 10:17:32.343269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff98000b90 with addr=10.0.0.2, port=4420 00:28:47.305 qpair failed and we were unable to recover it. 00:28:47.305 [2024-07-25 10:17:32.343495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.305 [2024-07-25 10:17:32.343543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7eff98000b90 with addr=10.0.0.2, port=4420 00:28:47.305 qpair failed and we were unable to recover it. 00:28:47.305 [2024-07-25 10:17:32.343753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.305 [2024-07-25 10:17:32.343783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.305 qpair failed and we were unable to recover it. 00:28:47.305 [2024-07-25 10:17:32.343929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.305 [2024-07-25 10:17:32.343958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.305 qpair failed and we were unable to recover it. 00:28:47.305 [2024-07-25 10:17:32.344152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.305 [2024-07-25 10:17:32.344181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.305 qpair failed and we were unable to recover it. 00:28:47.305 [2024-07-25 10:17:32.344352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.305 [2024-07-25 10:17:32.344381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.305 qpair failed and we were unable to recover it. 00:28:47.305 [2024-07-25 10:17:32.344588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.305 [2024-07-25 10:17:32.344617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.305 qpair failed and we were unable to recover it. 00:28:47.305 [2024-07-25 10:17:32.344817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.305 [2024-07-25 10:17:32.344846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.305 qpair failed and we were unable to recover it. 00:28:47.305 [2024-07-25 10:17:32.345020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.305 [2024-07-25 10:17:32.345049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.305 qpair failed and we were unable to recover it. 
00:28:47.305 [2024-07-25 10:17:32.345247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.305 [2024-07-25 10:17:32.345277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.305 qpair failed and we were unable to recover it. 00:28:47.305 [2024-07-25 10:17:32.345459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.305 [2024-07-25 10:17:32.345489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.305 qpair failed and we were unable to recover it. 00:28:47.305 [2024-07-25 10:17:32.345694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.305 [2024-07-25 10:17:32.345723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.305 qpair failed and we were unable to recover it. 00:28:47.305 [2024-07-25 10:17:32.345926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.305 [2024-07-25 10:17:32.345954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.305 qpair failed and we were unable to recover it. 00:28:47.305 [2024-07-25 10:17:32.346153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.305 [2024-07-25 10:17:32.346182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.305 qpair failed and we were unable to recover it. 00:28:47.305 [2024-07-25 10:17:32.346355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.305 [2024-07-25 10:17:32.346384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.305 qpair failed and we were unable to recover it. 00:28:47.305 [2024-07-25 10:17:32.346593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.305 [2024-07-25 10:17:32.346623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.305 qpair failed and we were unable to recover it. 00:28:47.305 [2024-07-25 10:17:32.346825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.305 [2024-07-25 10:17:32.346854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.305 qpair failed and we were unable to recover it. 00:28:47.305 [2024-07-25 10:17:32.347035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.305 [2024-07-25 10:17:32.347063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.305 qpair failed and we were unable to recover it. 00:28:47.305 [2024-07-25 10:17:32.347270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.305 [2024-07-25 10:17:32.347299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.305 qpair failed and we were unable to recover it. 
00:28:47.305 [2024-07-25 10:17:32.347496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.305 [2024-07-25 10:17:32.347526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.305 qpair failed and we were unable to recover it. 00:28:47.305 [2024-07-25 10:17:32.347725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.305 [2024-07-25 10:17:32.347754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.305 qpair failed and we were unable to recover it. 00:28:47.305 [2024-07-25 10:17:32.347893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.305 [2024-07-25 10:17:32.347922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.305 qpair failed and we were unable to recover it. 00:28:47.305 [2024-07-25 10:17:32.348097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.305 [2024-07-25 10:17:32.348126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.305 qpair failed and we were unable to recover it. 00:28:47.305 [2024-07-25 10:17:32.348296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.305 [2024-07-25 10:17:32.348325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.305 qpair failed and we were unable to recover it. 00:28:47.305 [2024-07-25 10:17:32.348468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.305 [2024-07-25 10:17:32.348498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.305 qpair failed and we were unable to recover it. 00:28:47.305 [2024-07-25 10:17:32.348669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.306 [2024-07-25 10:17:32.348698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.306 qpair failed and we were unable to recover it. 00:28:47.306 [2024-07-25 10:17:32.348872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.306 [2024-07-25 10:17:32.348901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.306 qpair failed and we were unable to recover it. 00:28:47.306 [2024-07-25 10:17:32.349071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.306 [2024-07-25 10:17:32.349100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.306 qpair failed and we were unable to recover it. 00:28:47.306 [2024-07-25 10:17:32.349274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.306 [2024-07-25 10:17:32.349307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.306 qpair failed and we were unable to recover it. 
00:28:47.306 [2024-07-25 10:17:32.349494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.306 [2024-07-25 10:17:32.349533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.306 qpair failed and we were unable to recover it. 00:28:47.306 [2024-07-25 10:17:32.349680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.306 [2024-07-25 10:17:32.349709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.306 qpair failed and we were unable to recover it. 00:28:47.306 [2024-07-25 10:17:32.349909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.306 [2024-07-25 10:17:32.349939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.306 qpair failed and we were unable to recover it. 00:28:47.306 [2024-07-25 10:17:32.350139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.306 [2024-07-25 10:17:32.350169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.306 qpair failed and we were unable to recover it. 00:28:47.306 [2024-07-25 10:17:32.350380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.306 [2024-07-25 10:17:32.350409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.306 qpair failed and we were unable to recover it. 00:28:47.306 [2024-07-25 10:17:32.350624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.306 [2024-07-25 10:17:32.350654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.306 qpair failed and we were unable to recover it. 00:28:47.306 [2024-07-25 10:17:32.350830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.306 [2024-07-25 10:17:32.350859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.306 qpair failed and we were unable to recover it. 00:28:47.306 [2024-07-25 10:17:32.351066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.306 [2024-07-25 10:17:32.351095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.306 qpair failed and we were unable to recover it. 00:28:47.306 [2024-07-25 10:17:32.351272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.306 [2024-07-25 10:17:32.351302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.306 qpair failed and we were unable to recover it. 00:28:47.306 [2024-07-25 10:17:32.351484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.306 [2024-07-25 10:17:32.351514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.306 qpair failed and we were unable to recover it. 
00:28:47.306 [2024-07-25 10:17:32.351716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.306 [2024-07-25 10:17:32.351746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.306 qpair failed and we were unable to recover it. 00:28:47.306 [2024-07-25 10:17:32.351923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.306 [2024-07-25 10:17:32.351952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.306 qpair failed and we were unable to recover it. 00:28:47.306 [2024-07-25 10:17:32.352099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.306 [2024-07-25 10:17:32.352128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.306 qpair failed and we were unable to recover it. 00:28:47.306 [2024-07-25 10:17:32.352308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.306 [2024-07-25 10:17:32.352338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.306 qpair failed and we were unable to recover it. 00:28:47.306 [2024-07-25 10:17:32.352483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.306 [2024-07-25 10:17:32.352514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.306 qpair failed and we were unable to recover it. 00:28:47.306 [2024-07-25 10:17:32.352690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.306 [2024-07-25 10:17:32.352719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.306 qpair failed and we were unable to recover it. 00:28:47.306 [2024-07-25 10:17:32.352919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.306 [2024-07-25 10:17:32.352948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.306 qpair failed and we were unable to recover it. 00:28:47.306 [2024-07-25 10:17:32.353148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.306 [2024-07-25 10:17:32.353200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.306 qpair failed and we were unable to recover it. 00:28:47.306 [2024-07-25 10:17:32.353358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.306 [2024-07-25 10:17:32.353386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.306 qpair failed and we were unable to recover it. 00:28:47.306 [2024-07-25 10:17:32.353551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.306 [2024-07-25 10:17:32.353580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.306 qpair failed and we were unable to recover it. 
00:28:47.306 [2024-07-25 10:17:32.353727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.306 [2024-07-25 10:17:32.353756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.306 qpair failed and we were unable to recover it. 00:28:47.306 [2024-07-25 10:17:32.353921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.306 [2024-07-25 10:17:32.353950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.306 qpair failed and we were unable to recover it. 00:28:47.306 [2024-07-25 10:17:32.354132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.306 [2024-07-25 10:17:32.354162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.306 qpair failed and we were unable to recover it. 00:28:47.306 [2024-07-25 10:17:32.354330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.306 [2024-07-25 10:17:32.354359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.306 qpair failed and we were unable to recover it. 00:28:47.306 [2024-07-25 10:17:32.354557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.306 [2024-07-25 10:17:32.354586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.306 qpair failed and we were unable to recover it. 00:28:47.306 [2024-07-25 10:17:32.354790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.306 [2024-07-25 10:17:32.354819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.306 qpair failed and we were unable to recover it. 00:28:47.306 [2024-07-25 10:17:32.354998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.306 [2024-07-25 10:17:32.355027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.306 qpair failed and we were unable to recover it. 00:28:47.306 [2024-07-25 10:17:32.355170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.306 [2024-07-25 10:17:32.355199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.306 qpair failed and we were unable to recover it. 00:28:47.306 [2024-07-25 10:17:32.355400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.306 [2024-07-25 10:17:32.355436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.306 qpair failed and we were unable to recover it. 00:28:47.306 [2024-07-25 10:17:32.355604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.306 [2024-07-25 10:17:32.355633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.306 qpair failed and we were unable to recover it. 
00:28:47.308 [2024-07-25 10:17:32.371729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.308 [2024-07-25 10:17:32.371775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420
00:28:47.308 qpair failed and we were unable to recover it.
00:28:47.309 [2024-07-25 10:17:32.373316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.309 [2024-07-25 10:17:32.373346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18a3ea0 with addr=10.0.0.2, port=4420
00:28:47.309 qpair failed and we were unable to recover it.
00:28:47.309 [2024-07-25 10:17:32.373514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.309 [2024-07-25 10:17:32.373544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:47.309 qpair failed and we were unable to recover it.
00:28:47.312 [2024-07-25 10:17:32.395332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.312 [2024-07-25 10:17:32.395361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.312 qpair failed and we were unable to recover it. 00:28:47.312 [2024-07-25 10:17:32.395550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.312 [2024-07-25 10:17:32.395590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.312 qpair failed and we were unable to recover it. 00:28:47.312 [2024-07-25 10:17:32.395766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.312 [2024-07-25 10:17:32.395795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.312 qpair failed and we were unable to recover it. 00:28:47.312 [2024-07-25 10:17:32.395993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.312 [2024-07-25 10:17:32.396022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.312 qpair failed and we were unable to recover it. 00:28:47.312 [2024-07-25 10:17:32.396225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.312 [2024-07-25 10:17:32.396254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.312 qpair failed and we were unable to recover it. 00:28:47.312 [2024-07-25 10:17:32.396454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.312 [2024-07-25 10:17:32.396483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.312 qpair failed and we were unable to recover it. 00:28:47.312 [2024-07-25 10:17:32.396668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.312 [2024-07-25 10:17:32.396701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.312 qpair failed and we were unable to recover it. 00:28:47.312 [2024-07-25 10:17:32.396840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.312 [2024-07-25 10:17:32.396870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.312 qpair failed and we were unable to recover it. 00:28:47.312 [2024-07-25 10:17:32.397002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.312 [2024-07-25 10:17:32.397041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.312 qpair failed and we were unable to recover it. 00:28:47.312 [2024-07-25 10:17:32.397221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.312 [2024-07-25 10:17:32.397250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.312 qpair failed and we were unable to recover it. 
00:28:47.312 [2024-07-25 10:17:32.397401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.312 [2024-07-25 10:17:32.397435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.312 qpair failed and we were unable to recover it. 00:28:47.312 [2024-07-25 10:17:32.397637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.312 [2024-07-25 10:17:32.397667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.312 qpair failed and we were unable to recover it. 00:28:47.312 [2024-07-25 10:17:32.397866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.312 [2024-07-25 10:17:32.397896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.312 qpair failed and we were unable to recover it. 00:28:47.312 [2024-07-25 10:17:32.398057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.312 [2024-07-25 10:17:32.398086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.312 qpair failed and we were unable to recover it. 00:28:47.312 [2024-07-25 10:17:32.398260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.312 [2024-07-25 10:17:32.398290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.312 qpair failed and we were unable to recover it. 00:28:47.312 [2024-07-25 10:17:32.398499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.312 [2024-07-25 10:17:32.398530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.312 qpair failed and we were unable to recover it. 00:28:47.312 [2024-07-25 10:17:32.398702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.312 [2024-07-25 10:17:32.398731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.312 qpair failed and we were unable to recover it. 00:28:47.312 [2024-07-25 10:17:32.398905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.312 [2024-07-25 10:17:32.398934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.312 qpair failed and we were unable to recover it. 00:28:47.312 [2024-07-25 10:17:32.399146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.312 [2024-07-25 10:17:32.399174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.312 qpair failed and we were unable to recover it. 00:28:47.312 [2024-07-25 10:17:32.399390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.312 [2024-07-25 10:17:32.399418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.312 qpair failed and we were unable to recover it. 
00:28:47.312 [2024-07-25 10:17:32.399567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.312 [2024-07-25 10:17:32.399596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.312 qpair failed and we were unable to recover it. 00:28:47.312 [2024-07-25 10:17:32.399798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.312 [2024-07-25 10:17:32.399827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.312 qpair failed and we were unable to recover it. 00:28:47.312 [2024-07-25 10:17:32.400029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.312 [2024-07-25 10:17:32.400058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.312 qpair failed and we were unable to recover it. 00:28:47.312 [2024-07-25 10:17:32.400232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.312 [2024-07-25 10:17:32.400261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.312 qpair failed and we were unable to recover it. 00:28:47.312 [2024-07-25 10:17:32.400459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.312 [2024-07-25 10:17:32.400489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.312 qpair failed and we were unable to recover it. 00:28:47.312 [2024-07-25 10:17:32.400698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.312 [2024-07-25 10:17:32.400727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.312 qpair failed and we were unable to recover it. 00:28:47.312 [2024-07-25 10:17:32.400885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.312 [2024-07-25 10:17:32.400913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.312 qpair failed and we were unable to recover it. 00:28:47.312 [2024-07-25 10:17:32.401135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.312 [2024-07-25 10:17:32.401164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.312 qpair failed and we were unable to recover it. 00:28:47.312 [2024-07-25 10:17:32.401382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.312 [2024-07-25 10:17:32.401411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.312 qpair failed and we were unable to recover it. 00:28:47.312 [2024-07-25 10:17:32.401573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.312 [2024-07-25 10:17:32.401602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.312 qpair failed and we were unable to recover it. 
00:28:47.312 [2024-07-25 10:17:32.401786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.312 [2024-07-25 10:17:32.401815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.312 qpair failed and we were unable to recover it. 00:28:47.312 [2024-07-25 10:17:32.401986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.312 [2024-07-25 10:17:32.402014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.312 qpair failed and we were unable to recover it. 00:28:47.312 [2024-07-25 10:17:32.402223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.312 [2024-07-25 10:17:32.402251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.312 qpair failed and we were unable to recover it. 00:28:47.312 [2024-07-25 10:17:32.402445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.313 [2024-07-25 10:17:32.402474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.313 qpair failed and we were unable to recover it. 00:28:47.313 [2024-07-25 10:17:32.402650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.313 [2024-07-25 10:17:32.402678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.313 qpair failed and we were unable to recover it. 00:28:47.313 [2024-07-25 10:17:32.402880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.313 [2024-07-25 10:17:32.402909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.313 qpair failed and we were unable to recover it. 00:28:47.313 [2024-07-25 10:17:32.403113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.313 [2024-07-25 10:17:32.403143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.313 qpair failed and we were unable to recover it. 00:28:47.313 [2024-07-25 10:17:32.403337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.313 [2024-07-25 10:17:32.403367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.313 qpair failed and we were unable to recover it. 00:28:47.313 [2024-07-25 10:17:32.403570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.313 [2024-07-25 10:17:32.403600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.313 qpair failed and we were unable to recover it. 00:28:47.313 [2024-07-25 10:17:32.403783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.313 [2024-07-25 10:17:32.403811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.313 qpair failed and we were unable to recover it. 
00:28:47.313 [2024-07-25 10:17:32.404022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.313 [2024-07-25 10:17:32.404050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.313 qpair failed and we were unable to recover it. 00:28:47.313 [2024-07-25 10:17:32.404220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.313 [2024-07-25 10:17:32.404248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.313 qpair failed and we were unable to recover it. 00:28:47.313 [2024-07-25 10:17:32.404420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.313 [2024-07-25 10:17:32.404454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.313 qpair failed and we were unable to recover it. 00:28:47.313 [2024-07-25 10:17:32.404628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.313 [2024-07-25 10:17:32.404657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.313 qpair failed and we were unable to recover it. 00:28:47.313 [2024-07-25 10:17:32.404856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.313 [2024-07-25 10:17:32.404885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.313 qpair failed and we were unable to recover it. 00:28:47.313 [2024-07-25 10:17:32.405057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.313 [2024-07-25 10:17:32.405085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.313 qpair failed and we were unable to recover it. 00:28:47.313 [2024-07-25 10:17:32.405288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.313 [2024-07-25 10:17:32.405321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.313 qpair failed and we were unable to recover it. 00:28:47.313 [2024-07-25 10:17:32.405533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.313 [2024-07-25 10:17:32.405561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.313 qpair failed and we were unable to recover it. 00:28:47.313 [2024-07-25 10:17:32.405718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.313 [2024-07-25 10:17:32.405747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.313 qpair failed and we were unable to recover it. 00:28:47.313 [2024-07-25 10:17:32.405890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.313 [2024-07-25 10:17:32.405919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.313 qpair failed and we were unable to recover it. 
00:28:47.313 [2024-07-25 10:17:32.406094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.313 [2024-07-25 10:17:32.406123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.313 qpair failed and we were unable to recover it. 00:28:47.313 [2024-07-25 10:17:32.406321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.313 [2024-07-25 10:17:32.406350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.313 qpair failed and we were unable to recover it. 00:28:47.313 [2024-07-25 10:17:32.406521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.313 [2024-07-25 10:17:32.406552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.313 qpair failed and we were unable to recover it. 00:28:47.313 [2024-07-25 10:17:32.406731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.313 [2024-07-25 10:17:32.406760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.313 qpair failed and we were unable to recover it. 00:28:47.313 [2024-07-25 10:17:32.406939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.313 [2024-07-25 10:17:32.406967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.313 qpair failed and we were unable to recover it. 00:28:47.313 [2024-07-25 10:17:32.407165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.313 [2024-07-25 10:17:32.407195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.313 qpair failed and we were unable to recover it. 00:28:47.313 [2024-07-25 10:17:32.407404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.313 [2024-07-25 10:17:32.407442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.313 qpair failed and we were unable to recover it. 00:28:47.313 [2024-07-25 10:17:32.407665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.313 [2024-07-25 10:17:32.407695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.313 qpair failed and we were unable to recover it. 00:28:47.313 [2024-07-25 10:17:32.407868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.313 [2024-07-25 10:17:32.407896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.313 qpair failed and we were unable to recover it. 00:28:47.313 [2024-07-25 10:17:32.408095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.313 [2024-07-25 10:17:32.408125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.313 qpair failed and we were unable to recover it. 
00:28:47.313 [2024-07-25 10:17:32.408310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.313 [2024-07-25 10:17:32.408341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.313 qpair failed and we were unable to recover it. 00:28:47.313 [2024-07-25 10:17:32.408533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.313 [2024-07-25 10:17:32.408564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.313 qpair failed and we were unable to recover it. 00:28:47.313 [2024-07-25 10:17:32.408773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.313 [2024-07-25 10:17:32.408803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.313 qpair failed and we were unable to recover it. 00:28:47.313 [2024-07-25 10:17:32.408978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.313 [2024-07-25 10:17:32.409008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.313 qpair failed and we were unable to recover it. 00:28:47.313 [2024-07-25 10:17:32.409185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.313 [2024-07-25 10:17:32.409214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.313 qpair failed and we were unable to recover it. 00:28:47.313 [2024-07-25 10:17:32.409422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.313 [2024-07-25 10:17:32.409465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.313 qpair failed and we were unable to recover it. 00:28:47.313 [2024-07-25 10:17:32.409644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.313 [2024-07-25 10:17:32.409673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.313 qpair failed and we were unable to recover it. 00:28:47.313 [2024-07-25 10:17:32.409882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.313 [2024-07-25 10:17:32.409911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.313 qpair failed and we were unable to recover it. 00:28:47.313 [2024-07-25 10:17:32.410080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.313 [2024-07-25 10:17:32.410109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.313 qpair failed and we were unable to recover it. 00:28:47.313 [2024-07-25 10:17:32.410310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.313 [2024-07-25 10:17:32.410339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.313 qpair failed and we were unable to recover it. 
00:28:47.313 [2024-07-25 10:17:32.410516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.313 [2024-07-25 10:17:32.410546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.314 qpair failed and we were unable to recover it. 00:28:47.314 [2024-07-25 10:17:32.410727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.314 [2024-07-25 10:17:32.410755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.314 qpair failed and we were unable to recover it. 00:28:47.314 [2024-07-25 10:17:32.410958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.314 [2024-07-25 10:17:32.410986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.314 qpair failed and we were unable to recover it. 00:28:47.314 [2024-07-25 10:17:32.411194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.314 [2024-07-25 10:17:32.411223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.314 qpair failed and we were unable to recover it. 00:28:47.314 [2024-07-25 10:17:32.411411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.314 [2024-07-25 10:17:32.411447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.314 qpair failed and we were unable to recover it. 00:28:47.314 [2024-07-25 10:17:32.411619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.314 [2024-07-25 10:17:32.411648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.314 qpair failed and we were unable to recover it. 00:28:47.314 [2024-07-25 10:17:32.411848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.314 [2024-07-25 10:17:32.411878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.314 qpair failed and we were unable to recover it. 00:28:47.314 [2024-07-25 10:17:32.412059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.314 [2024-07-25 10:17:32.412088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.314 qpair failed and we were unable to recover it. 00:28:47.314 [2024-07-25 10:17:32.412264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.314 [2024-07-25 10:17:32.412292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.314 qpair failed and we were unable to recover it. 00:28:47.314 [2024-07-25 10:17:32.412455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.314 [2024-07-25 10:17:32.412484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.314 qpair failed and we were unable to recover it. 
00:28:47.314 [2024-07-25 10:17:32.412690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.314 [2024-07-25 10:17:32.412720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.314 qpair failed and we were unable to recover it. 00:28:47.314 [2024-07-25 10:17:32.412928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.314 [2024-07-25 10:17:32.412957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.314 qpair failed and we were unable to recover it. 00:28:47.314 [2024-07-25 10:17:32.413135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.314 [2024-07-25 10:17:32.413164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.314 qpair failed and we were unable to recover it. 00:28:47.314 [2024-07-25 10:17:32.413334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.314 [2024-07-25 10:17:32.413363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.314 qpair failed and we were unable to recover it. 00:28:47.314 [2024-07-25 10:17:32.413528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.314 [2024-07-25 10:17:32.413556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.314 qpair failed and we were unable to recover it. 00:28:47.314 [2024-07-25 10:17:32.413725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.314 [2024-07-25 10:17:32.413754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.314 qpair failed and we were unable to recover it. 00:28:47.314 [2024-07-25 10:17:32.413915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.314 [2024-07-25 10:17:32.413952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.314 qpair failed and we were unable to recover it. 00:28:47.314 [2024-07-25 10:17:32.414106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.314 [2024-07-25 10:17:32.414135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.314 qpair failed and we were unable to recover it. 00:28:47.314 [2024-07-25 10:17:32.414325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.314 [2024-07-25 10:17:32.414354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.314 qpair failed and we were unable to recover it. 00:28:47.314 [2024-07-25 10:17:32.414567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.314 [2024-07-25 10:17:32.414597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.314 qpair failed and we were unable to recover it. 
00:28:47.314 [2024-07-25 10:17:32.414798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.314 [2024-07-25 10:17:32.414827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.314 qpair failed and we were unable to recover it. 00:28:47.314 [2024-07-25 10:17:32.414995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.314 [2024-07-25 10:17:32.415024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.314 qpair failed and we were unable to recover it. 00:28:47.314 [2024-07-25 10:17:32.415197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.314 [2024-07-25 10:17:32.415225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.314 qpair failed and we were unable to recover it. 00:28:47.314 [2024-07-25 10:17:32.415418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.314 [2024-07-25 10:17:32.415452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.314 qpair failed and we were unable to recover it. 00:28:47.314 [2024-07-25 10:17:32.415638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.314 [2024-07-25 10:17:32.415667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.314 qpair failed and we were unable to recover it. 00:28:47.314 [2024-07-25 10:17:32.415830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.314 [2024-07-25 10:17:32.415859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.314 qpair failed and we were unable to recover it. 00:28:47.314 [2024-07-25 10:17:32.416030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.314 [2024-07-25 10:17:32.416059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.314 qpair failed and we were unable to recover it. 00:28:47.314 [2024-07-25 10:17:32.416265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.314 [2024-07-25 10:17:32.416293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.314 qpair failed and we were unable to recover it. 00:28:47.314 [2024-07-25 10:17:32.416467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.314 [2024-07-25 10:17:32.416495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.314 qpair failed and we were unable to recover it. 00:28:47.314 [2024-07-25 10:17:32.416696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.314 [2024-07-25 10:17:32.416725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.314 qpair failed and we were unable to recover it. 
00:28:47.314 [2024-07-25 10:17:32.416930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.314 [2024-07-25 10:17:32.416958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.314 qpair failed and we were unable to recover it. 00:28:47.314 [2024-07-25 10:17:32.417162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.314 [2024-07-25 10:17:32.417191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.314 qpair failed and we were unable to recover it. 00:28:47.314 [2024-07-25 10:17:32.417381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.314 [2024-07-25 10:17:32.417410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.314 qpair failed and we were unable to recover it. 00:28:47.315 [2024-07-25 10:17:32.417600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.315 [2024-07-25 10:17:32.417628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.315 qpair failed and we were unable to recover it. 00:28:47.315 [2024-07-25 10:17:32.417799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.315 [2024-07-25 10:17:32.417828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.315 qpair failed and we were unable to recover it. 00:28:47.315 [2024-07-25 10:17:32.418000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.315 [2024-07-25 10:17:32.418029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.315 qpair failed and we were unable to recover it. 00:28:47.315 [2024-07-25 10:17:32.418170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.315 [2024-07-25 10:17:32.418198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.315 qpair failed and we were unable to recover it. 00:28:47.315 [2024-07-25 10:17:32.418343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.315 [2024-07-25 10:17:32.418371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.315 qpair failed and we were unable to recover it. 00:28:47.315 [2024-07-25 10:17:32.418581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.315 [2024-07-25 10:17:32.418610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.315 qpair failed and we were unable to recover it. 00:28:47.315 [2024-07-25 10:17:32.418753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.315 [2024-07-25 10:17:32.418783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.315 qpair failed and we were unable to recover it. 
00:28:47.315 [2024-07-25 10:17:32.418984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.315 [2024-07-25 10:17:32.419013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.315 qpair failed and we were unable to recover it. 00:28:47.315 [2024-07-25 10:17:32.419199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.315 [2024-07-25 10:17:32.419228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.315 qpair failed and we were unable to recover it. 00:28:47.315 [2024-07-25 10:17:32.419397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.315 [2024-07-25 10:17:32.419426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.315 qpair failed and we were unable to recover it. 00:28:47.315 [2024-07-25 10:17:32.419637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.315 [2024-07-25 10:17:32.419666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.315 qpair failed and we were unable to recover it. 00:28:47.315 [2024-07-25 10:17:32.419866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.315 [2024-07-25 10:17:32.419895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.315 qpair failed and we were unable to recover it. 00:28:47.315 [2024-07-25 10:17:32.420035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.315 [2024-07-25 10:17:32.420062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.315 qpair failed and we were unable to recover it. 00:28:47.315 [2024-07-25 10:17:32.420267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.315 [2024-07-25 10:17:32.420296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.315 qpair failed and we were unable to recover it. 00:28:47.315 [2024-07-25 10:17:32.420488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.315 [2024-07-25 10:17:32.420518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.315 qpair failed and we were unable to recover it. 00:28:47.315 [2024-07-25 10:17:32.420691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.315 [2024-07-25 10:17:32.420720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.315 qpair failed and we were unable to recover it. 00:28:47.315 [2024-07-25 10:17:32.420932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.315 [2024-07-25 10:17:32.420961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.315 qpair failed and we were unable to recover it. 
00:28:47.315 [2024-07-25 10:17:32.421132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.315 [2024-07-25 10:17:32.421161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.315 qpair failed and we were unable to recover it. 00:28:47.315 [2024-07-25 10:17:32.421363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.315 [2024-07-25 10:17:32.421391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.315 qpair failed and we were unable to recover it. 00:28:47.315 [2024-07-25 10:17:32.421597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.315 [2024-07-25 10:17:32.421626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.315 qpair failed and we were unable to recover it. 00:28:47.315 [2024-07-25 10:17:32.421770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.315 [2024-07-25 10:17:32.421798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.315 qpair failed and we were unable to recover it. 00:28:47.315 [2024-07-25 10:17:32.422006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.315 [2024-07-25 10:17:32.422035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.315 qpair failed and we were unable to recover it. 00:28:47.315 [2024-07-25 10:17:32.422237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.315 [2024-07-25 10:17:32.422266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.315 qpair failed and we were unable to recover it. 00:28:47.315 [2024-07-25 10:17:32.422417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.315 [2024-07-25 10:17:32.422456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.605 qpair failed and we were unable to recover it. 00:28:47.605 [2024-07-25 10:17:32.422662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.605 [2024-07-25 10:17:32.422691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.605 qpair failed and we were unable to recover it. 00:28:47.605 [2024-07-25 10:17:32.422831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.605 [2024-07-25 10:17:32.422857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.605 qpair failed and we were unable to recover it. 00:28:47.605 [2024-07-25 10:17:32.423061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.605 [2024-07-25 10:17:32.423090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.605 qpair failed and we were unable to recover it. 
00:28:47.605 [2024-07-25 10:17:32.423294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.605 [2024-07-25 10:17:32.423323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.605 qpair failed and we were unable to recover it. 00:28:47.605 [2024-07-25 10:17:32.423533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.605 [2024-07-25 10:17:32.423562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.605 qpair failed and we were unable to recover it. 00:28:47.605 [2024-07-25 10:17:32.423763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.605 [2024-07-25 10:17:32.423792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.605 qpair failed and we were unable to recover it. 00:28:47.605 [2024-07-25 10:17:32.423992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.605 [2024-07-25 10:17:32.424021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.605 qpair failed and we were unable to recover it. 00:28:47.605 [2024-07-25 10:17:32.424233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.605 [2024-07-25 10:17:32.424262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.605 qpair failed and we were unable to recover it. 00:28:47.605 [2024-07-25 10:17:32.424422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.605 [2024-07-25 10:17:32.424472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.605 qpair failed and we were unable to recover it. 00:28:47.605 [2024-07-25 10:17:32.424636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.605 [2024-07-25 10:17:32.424664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.605 qpair failed and we were unable to recover it. 00:28:47.605 [2024-07-25 10:17:32.424872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.605 [2024-07-25 10:17:32.424901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.605 qpair failed and we were unable to recover it. 00:28:47.605 [2024-07-25 10:17:32.425102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.605 [2024-07-25 10:17:32.425132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.605 qpair failed and we were unable to recover it. 00:28:47.605 [2024-07-25 10:17:32.425333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.605 [2024-07-25 10:17:32.425362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.605 qpair failed and we were unable to recover it. 
00:28:47.605 [2024-07-25 10:17:32.425520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.605 [2024-07-25 10:17:32.425549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.605 qpair failed and we were unable to recover it. 00:28:47.605 [2024-07-25 10:17:32.425727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.605 [2024-07-25 10:17:32.425756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.605 qpair failed and we were unable to recover it. 00:28:47.605 [2024-07-25 10:17:32.425966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.605 [2024-07-25 10:17:32.425995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.605 qpair failed and we were unable to recover it. 00:28:47.605 [2024-07-25 10:17:32.426142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.605 [2024-07-25 10:17:32.426171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.605 qpair failed and we were unable to recover it. 00:28:47.605 [2024-07-25 10:17:32.426346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.605 [2024-07-25 10:17:32.426375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.605 qpair failed and we were unable to recover it. 00:28:47.605 [2024-07-25 10:17:32.426545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.605 [2024-07-25 10:17:32.426573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.605 qpair failed and we were unable to recover it. 00:28:47.605 [2024-07-25 10:17:32.426777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.605 [2024-07-25 10:17:32.426805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.605 qpair failed and we were unable to recover it. 00:28:47.605 [2024-07-25 10:17:32.427014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.605 [2024-07-25 10:17:32.427043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.605 qpair failed and we were unable to recover it. 00:28:47.605 [2024-07-25 10:17:32.427255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.605 [2024-07-25 10:17:32.427283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.605 qpair failed and we were unable to recover it. 00:28:47.605 [2024-07-25 10:17:32.427456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.605 [2024-07-25 10:17:32.427485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.605 qpair failed and we were unable to recover it. 
00:28:47.605 [2024-07-25 10:17:32.427643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.605 [2024-07-25 10:17:32.427672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.605 qpair failed and we were unable to recover it. 00:28:47.605 [2024-07-25 10:17:32.427878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.605 [2024-07-25 10:17:32.427907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.605 qpair failed and we were unable to recover it. 00:28:47.605 [2024-07-25 10:17:32.428082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.605 [2024-07-25 10:17:32.428111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.605 qpair failed and we were unable to recover it. 00:28:47.605 [2024-07-25 10:17:32.428283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.605 [2024-07-25 10:17:32.428312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.605 qpair failed and we were unable to recover it. 00:28:47.605 [2024-07-25 10:17:32.428486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.605 [2024-07-25 10:17:32.428514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.605 qpair failed and we were unable to recover it. 00:28:47.605 [2024-07-25 10:17:32.428716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.605 [2024-07-25 10:17:32.428745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.605 qpair failed and we were unable to recover it. 00:28:47.605 [2024-07-25 10:17:32.428921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.605 [2024-07-25 10:17:32.428950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.605 qpair failed and we were unable to recover it. 00:28:47.605 [2024-07-25 10:17:32.429118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.605 [2024-07-25 10:17:32.429148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.605 qpair failed and we were unable to recover it. 00:28:47.605 [2024-07-25 10:17:32.429282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.605 [2024-07-25 10:17:32.429319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.605 qpair failed and we were unable to recover it. 00:28:47.605 [2024-07-25 10:17:32.429494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.605 [2024-07-25 10:17:32.429524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.605 qpair failed and we were unable to recover it. 
00:28:47.605 [2024-07-25 10:17:32.429698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.605 [2024-07-25 10:17:32.429727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.605 qpair failed and we were unable to recover it. 00:28:47.605 [2024-07-25 10:17:32.429913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.605 [2024-07-25 10:17:32.429942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.605 qpair failed and we were unable to recover it. 00:28:47.605 [2024-07-25 10:17:32.430073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.606 [2024-07-25 10:17:32.430101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.606 qpair failed and we were unable to recover it. 00:28:47.606 [2024-07-25 10:17:32.430281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.606 [2024-07-25 10:17:32.430309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.606 qpair failed and we were unable to recover it. 00:28:47.606 [2024-07-25 10:17:32.430480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.606 [2024-07-25 10:17:32.430509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.606 qpair failed and we were unable to recover it. 00:28:47.606 [2024-07-25 10:17:32.430713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.606 [2024-07-25 10:17:32.430742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.606 qpair failed and we were unable to recover it. 00:28:47.606 [2024-07-25 10:17:32.430911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.606 [2024-07-25 10:17:32.430946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.606 qpair failed and we were unable to recover it. 00:28:47.606 [2024-07-25 10:17:32.431210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.606 [2024-07-25 10:17:32.431240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.606 qpair failed and we were unable to recover it. 00:28:47.606 [2024-07-25 10:17:32.431455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.606 [2024-07-25 10:17:32.431485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.606 qpair failed and we were unable to recover it. 00:28:47.606 [2024-07-25 10:17:32.431670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.606 [2024-07-25 10:17:32.431699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.606 qpair failed and we were unable to recover it. 
00:28:47.606 [2024-07-25 10:17:32.431873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.606 [2024-07-25 10:17:32.431902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.606 qpair failed and we were unable to recover it. 00:28:47.606 [2024-07-25 10:17:32.432075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.606 [2024-07-25 10:17:32.432104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.606 qpair failed and we were unable to recover it. 00:28:47.606 [2024-07-25 10:17:32.432310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.606 [2024-07-25 10:17:32.432338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.606 qpair failed and we were unable to recover it. 00:28:47.606 [2024-07-25 10:17:32.432509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.606 [2024-07-25 10:17:32.432538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.606 qpair failed and we were unable to recover it. 00:28:47.606 [2024-07-25 10:17:32.432738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.606 [2024-07-25 10:17:32.432767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.606 qpair failed and we were unable to recover it. 00:28:47.606 [2024-07-25 10:17:32.432939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.606 [2024-07-25 10:17:32.432968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.606 qpair failed and we were unable to recover it. 00:28:47.606 [2024-07-25 10:17:32.433138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.606 [2024-07-25 10:17:32.433167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.606 qpair failed and we were unable to recover it. 00:28:47.606 [2024-07-25 10:17:32.433448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.606 [2024-07-25 10:17:32.433477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.606 qpair failed and we were unable to recover it. 00:28:47.606 [2024-07-25 10:17:32.433687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.606 [2024-07-25 10:17:32.433716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.606 qpair failed and we were unable to recover it. 00:28:47.606 [2024-07-25 10:17:32.433919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.606 [2024-07-25 10:17:32.433948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.606 qpair failed and we were unable to recover it. 
00:28:47.606 [2024-07-25 10:17:32.434125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.606 [2024-07-25 10:17:32.434154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.606 qpair failed and we were unable to recover it. 00:28:47.606 [2024-07-25 10:17:32.434305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.606 [2024-07-25 10:17:32.434334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.606 qpair failed and we were unable to recover it. 00:28:47.606 [2024-07-25 10:17:32.434536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.606 [2024-07-25 10:17:32.434567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.606 qpair failed and we were unable to recover it. 00:28:47.606 [2024-07-25 10:17:32.434744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.606 [2024-07-25 10:17:32.434773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.606 qpair failed and we were unable to recover it. 00:28:47.606 [2024-07-25 10:17:32.435039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.606 [2024-07-25 10:17:32.435068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.606 qpair failed and we were unable to recover it. 00:28:47.606 [2024-07-25 10:17:32.435233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.606 [2024-07-25 10:17:32.435262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.606 qpair failed and we were unable to recover it. 00:28:47.606 [2024-07-25 10:17:32.435401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.606 [2024-07-25 10:17:32.435435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.606 qpair failed and we were unable to recover it. 00:28:47.606 [2024-07-25 10:17:32.435603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.606 [2024-07-25 10:17:32.435632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.606 qpair failed and we were unable to recover it. 00:28:47.606 [2024-07-25 10:17:32.435804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.606 [2024-07-25 10:17:32.435833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.606 qpair failed and we were unable to recover it. 00:28:47.606 [2024-07-25 10:17:32.436034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.606 [2024-07-25 10:17:32.436063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.606 qpair failed and we were unable to recover it. 
00:28:47.606 [2024-07-25 10:17:32.436227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.606 [2024-07-25 10:17:32.436256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.606 qpair failed and we were unable to recover it. 00:28:47.606 [2024-07-25 10:17:32.436457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.606 [2024-07-25 10:17:32.436486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.606 qpair failed and we were unable to recover it. 00:28:47.606 [2024-07-25 10:17:32.436695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.606 [2024-07-25 10:17:32.436724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.606 qpair failed and we were unable to recover it. 00:28:47.606 [2024-07-25 10:17:32.436909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.606 [2024-07-25 10:17:32.436939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.606 qpair failed and we were unable to recover it. 00:28:47.606 [2024-07-25 10:17:32.437108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.606 [2024-07-25 10:17:32.437137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.606 qpair failed and we were unable to recover it. 00:28:47.606 [2024-07-25 10:17:32.437353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.606 [2024-07-25 10:17:32.437382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.606 qpair failed and we were unable to recover it. 00:28:47.606 [2024-07-25 10:17:32.437543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.606 [2024-07-25 10:17:32.437573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.606 qpair failed and we were unable to recover it. 00:28:47.606 [2024-07-25 10:17:32.437717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.606 [2024-07-25 10:17:32.437746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.606 qpair failed and we were unable to recover it. 00:28:47.606 [2024-07-25 10:17:32.437923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.606 [2024-07-25 10:17:32.437952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.606 qpair failed and we were unable to recover it. 00:28:47.606 [2024-07-25 10:17:32.438156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.607 [2024-07-25 10:17:32.438185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.607 qpair failed and we were unable to recover it. 
00:28:47.607 [2024-07-25 10:17:32.438392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.607 [2024-07-25 10:17:32.438421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.607 qpair failed and we were unable to recover it. 00:28:47.607 [2024-07-25 10:17:32.438622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.607 [2024-07-25 10:17:32.438651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.607 qpair failed and we were unable to recover it. 00:28:47.607 [2024-07-25 10:17:32.438824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.607 [2024-07-25 10:17:32.438853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.607 qpair failed and we were unable to recover it. 00:28:47.607 [2024-07-25 10:17:32.439024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.607 [2024-07-25 10:17:32.439053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.607 qpair failed and we were unable to recover it. 00:28:47.607 [2024-07-25 10:17:32.439253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.607 [2024-07-25 10:17:32.439282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.607 qpair failed and we were unable to recover it. 00:28:47.607 [2024-07-25 10:17:32.439504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.607 [2024-07-25 10:17:32.439534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.607 qpair failed and we were unable to recover it. 00:28:47.607 [2024-07-25 10:17:32.439709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.607 [2024-07-25 10:17:32.439743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.607 qpair failed and we were unable to recover it. 00:28:47.607 [2024-07-25 10:17:32.439909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.607 [2024-07-25 10:17:32.439938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.607 qpair failed and we were unable to recover it. 00:28:47.607 [2024-07-25 10:17:32.440100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.607 [2024-07-25 10:17:32.440128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.607 qpair failed and we were unable to recover it. 00:28:47.607 [2024-07-25 10:17:32.440302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.607 [2024-07-25 10:17:32.440331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.607 qpair failed and we were unable to recover it. 
00:28:47.607 [2024-07-25 10:17:32.440535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.607 [2024-07-25 10:17:32.440564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.607 qpair failed and we were unable to recover it. 00:28:47.607 [2024-07-25 10:17:32.440745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.607 [2024-07-25 10:17:32.440774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.607 qpair failed and we were unable to recover it. 00:28:47.607 [2024-07-25 10:17:32.440946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.607 [2024-07-25 10:17:32.440974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.607 qpair failed and we were unable to recover it. 00:28:47.607 [2024-07-25 10:17:32.441177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.607 [2024-07-25 10:17:32.441206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.607 qpair failed and we were unable to recover it. 00:28:47.607 [2024-07-25 10:17:32.441377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.607 [2024-07-25 10:17:32.441406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.607 qpair failed and we were unable to recover it. 00:28:47.607 [2024-07-25 10:17:32.441664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.607 [2024-07-25 10:17:32.441693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.607 qpair failed and we were unable to recover it. 00:28:47.607 [2024-07-25 10:17:32.441904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.607 [2024-07-25 10:17:32.441932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.607 qpair failed and we were unable to recover it. 00:28:47.607 [2024-07-25 10:17:32.442115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.607 [2024-07-25 10:17:32.442144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.607 qpair failed and we were unable to recover it. 00:28:47.607 [2024-07-25 10:17:32.442348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.607 [2024-07-25 10:17:32.442377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.607 qpair failed and we were unable to recover it. 00:28:47.607 [2024-07-25 10:17:32.442565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.607 [2024-07-25 10:17:32.442595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.607 qpair failed and we were unable to recover it. 
00:28:47.607 [2024-07-25 10:17:32.442821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.607 [2024-07-25 10:17:32.442850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.607 qpair failed and we were unable to recover it. 00:28:47.607 [2024-07-25 10:17:32.443055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.607 [2024-07-25 10:17:32.443084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.607 qpair failed and we were unable to recover it. 00:28:47.607 [2024-07-25 10:17:32.443258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.607 [2024-07-25 10:17:32.443287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.607 qpair failed and we were unable to recover it. 00:28:47.607 [2024-07-25 10:17:32.443470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.607 [2024-07-25 10:17:32.443501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.607 qpair failed and we were unable to recover it. 00:28:47.607 [2024-07-25 10:17:32.443643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.607 [2024-07-25 10:17:32.443672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.607 qpair failed and we were unable to recover it. 00:28:47.607 [2024-07-25 10:17:32.443861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.607 [2024-07-25 10:17:32.443890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.607 qpair failed and we were unable to recover it. 00:28:47.607 [2024-07-25 10:17:32.444058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.607 [2024-07-25 10:17:32.444086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.607 qpair failed and we were unable to recover it. 00:28:47.607 [2024-07-25 10:17:32.444289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.607 [2024-07-25 10:17:32.444317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.607 qpair failed and we were unable to recover it.
00:28:47.607 [2024-07-25 10:17:32.444459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.607 [2024-07-25 10:17:32.444486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.607 qpair failed and we were unable to recover it.
00:28:47.607 [2024-07-25 10:17:32.444425] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:28:47.607 [2024-07-25 10:17:32.444473] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:28:47.607 [2024-07-25 10:17:32.444490] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:28:47.607 [2024-07-25 10:17:32.444504] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:28:47.607 [2024-07-25 10:17:32.444516] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:28:47.607 [2024-07-25 10:17:32.444600] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5
00:28:47.607 [2024-07-25 10:17:32.444657] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6
00:28:47.607 [2024-07-25 10:17:32.444686] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7
00:28:47.607 [2024-07-25 10:17:32.444690] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4
00:28:47.607 [2024-07-25 10:17:32.444727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.607 [2024-07-25 10:17:32.444755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.607 qpair failed and we were unable to recover it.
00:28:47.607 [2024-07-25 10:17:32.444960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.607 [2024-07-25 10:17:32.444987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.607 qpair failed and we were unable to recover it. 00:28:47.607 [2024-07-25 10:17:32.445187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.607 [2024-07-25 10:17:32.445216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.607 qpair failed and we were unable to recover it. 00:28:47.607 [2024-07-25 10:17:32.445424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.607 [2024-07-25 10:17:32.445457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.607 qpair failed and we were unable to recover it. 00:28:47.607 [2024-07-25 10:17:32.445655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.607 [2024-07-25 10:17:32.445683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.608 qpair failed and we were unable to recover it. 00:28:47.608 [2024-07-25 10:17:32.445883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.608 [2024-07-25 10:17:32.445911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.608 qpair failed and we were unable to recover it. 00:28:47.608 [2024-07-25 10:17:32.446109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.608 [2024-07-25 10:17:32.446137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.608 qpair failed and we were unable to recover it. 00:28:47.608 [2024-07-25 10:17:32.446334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.608 [2024-07-25 10:17:32.446362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.608 qpair failed and we were unable to recover it.
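The app.c NOTICE block above arrived interleaved with the connect errors in the raw console because the nvmf target was still starting while the initiator retried; it documents how to inspect the tracepoints enabled by Group Mask 0xFFFF. Collected into runnable form below: the spdk_trace invocation is verbatim from the NOTICE lines, and only the copy destination is an arbitrary choice:

# Snapshot events while the nvmf app is running (command quoted in the NOTICE above):
spdk_trace -s nvmf -i 0
# Or keep the shared-memory trace file for offline analysis/debug:
cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0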
00:28:47.608 [2024-07-25 10:17:32.446536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.608 [2024-07-25 10:17:32.446565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.608 qpair failed and we were unable to recover it. 00:28:47.608 [2024-07-25 10:17:32.446741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.608 [2024-07-25 10:17:32.446770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.608 qpair failed and we were unable to recover it. 00:28:47.608 [2024-07-25 10:17:32.446979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.608 [2024-07-25 10:17:32.447007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.608 qpair failed and we were unable to recover it. 00:28:47.608 [2024-07-25 10:17:32.447211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.608 [2024-07-25 10:17:32.447240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.608 qpair failed and we were unable to recover it. 00:28:47.608 [2024-07-25 10:17:32.447411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.608 [2024-07-25 10:17:32.447446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.608 qpair failed and we were unable to recover it. 00:28:47.608 [2024-07-25 10:17:32.447636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.608 [2024-07-25 10:17:32.447664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.608 qpair failed and we were unable to recover it. 00:28:47.608 [2024-07-25 10:17:32.447876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.608 [2024-07-25 10:17:32.447910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.608 qpair failed and we were unable to recover it. 00:28:47.608 [2024-07-25 10:17:32.448064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.608 [2024-07-25 10:17:32.448093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.608 qpair failed and we were unable to recover it. 00:28:47.608 [2024-07-25 10:17:32.448295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.608 [2024-07-25 10:17:32.448324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.608 qpair failed and we were unable to recover it. 00:28:47.608 [2024-07-25 10:17:32.448505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.608 [2024-07-25 10:17:32.448535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.608 qpair failed and we were unable to recover it. 
00:28:47.608 [2024-07-25 10:17:32.448715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.608 [2024-07-25 10:17:32.448743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.608 qpair failed and we were unable to recover it. 00:28:47.608 [2024-07-25 10:17:32.448892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.608 [2024-07-25 10:17:32.448920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.608 qpair failed and we were unable to recover it. 00:28:47.608 [2024-07-25 10:17:32.449123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.608 [2024-07-25 10:17:32.449152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.608 qpair failed and we were unable to recover it. 00:28:47.608 [2024-07-25 10:17:32.449326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.608 [2024-07-25 10:17:32.449355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.608 qpair failed and we were unable to recover it. 00:28:47.608 [2024-07-25 10:17:32.449525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.608 [2024-07-25 10:17:32.449554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.608 qpair failed and we were unable to recover it. 00:28:47.608 [2024-07-25 10:17:32.449726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.608 [2024-07-25 10:17:32.449753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.608 qpair failed and we were unable to recover it. 00:28:47.608 [2024-07-25 10:17:32.449923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.608 [2024-07-25 10:17:32.449953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.608 qpair failed and we were unable to recover it. 00:28:47.608 [2024-07-25 10:17:32.450156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.608 [2024-07-25 10:17:32.450185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.608 qpair failed and we were unable to recover it. 00:28:47.608 [2024-07-25 10:17:32.450395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.608 [2024-07-25 10:17:32.450423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.608 qpair failed and we were unable to recover it. 00:28:47.608 [2024-07-25 10:17:32.450617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.608 [2024-07-25 10:17:32.450646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.608 qpair failed and we were unable to recover it. 
00:28:47.608 [2024-07-25 10:17:32.450820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.608 [2024-07-25 10:17:32.450849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.608 qpair failed and we were unable to recover it. 00:28:47.608 [2024-07-25 10:17:32.451059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.608 [2024-07-25 10:17:32.451087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.608 qpair failed and we were unable to recover it. 00:28:47.608 [2024-07-25 10:17:32.451296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.608 [2024-07-25 10:17:32.451324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.608 qpair failed and we were unable to recover it. 00:28:47.608 [2024-07-25 10:17:32.451532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.608 [2024-07-25 10:17:32.451561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.608 qpair failed and we were unable to recover it. 00:28:47.608 [2024-07-25 10:17:32.451738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.608 [2024-07-25 10:17:32.451768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.608 qpair failed and we were unable to recover it. 00:28:47.608 [2024-07-25 10:17:32.451936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.608 [2024-07-25 10:17:32.451965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.608 qpair failed and we were unable to recover it. 00:28:47.608 [2024-07-25 10:17:32.452169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.608 [2024-07-25 10:17:32.452199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.608 qpair failed and we were unable to recover it. 00:28:47.608 [2024-07-25 10:17:32.452386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.608 [2024-07-25 10:17:32.452414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.608 qpair failed and we were unable to recover it. 00:28:47.608 [2024-07-25 10:17:32.452600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.608 [2024-07-25 10:17:32.452629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.608 qpair failed and we were unable to recover it. 00:28:47.608 [2024-07-25 10:17:32.452833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.608 [2024-07-25 10:17:32.452862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.608 qpair failed and we were unable to recover it. 
00:28:47.608 [2024-07-25 10:17:32.453004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.608 [2024-07-25 10:17:32.453032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.608 qpair failed and we were unable to recover it. 00:28:47.608 [2024-07-25 10:17:32.453235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.608 [2024-07-25 10:17:32.453264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.608 qpair failed and we were unable to recover it. 00:28:47.608 [2024-07-25 10:17:32.453468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.608 [2024-07-25 10:17:32.453497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.608 qpair failed and we were unable to recover it. 00:28:47.608 [2024-07-25 10:17:32.453713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.608 [2024-07-25 10:17:32.453742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.608 qpair failed and we were unable to recover it. 00:28:47.608 [2024-07-25 10:17:32.453931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.609 [2024-07-25 10:17:32.453971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.609 qpair failed and we were unable to recover it. 00:28:47.609 [2024-07-25 10:17:32.454168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.609 [2024-07-25 10:17:32.454196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.609 qpair failed and we were unable to recover it. 00:28:47.609 [2024-07-25 10:17:32.454367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.609 [2024-07-25 10:17:32.454397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.609 qpair failed and we were unable to recover it. 00:28:47.609 [2024-07-25 10:17:32.454586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.609 [2024-07-25 10:17:32.454614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.609 qpair failed and we were unable to recover it. 00:28:47.609 [2024-07-25 10:17:32.454740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.609 [2024-07-25 10:17:32.454768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.609 qpair failed and we were unable to recover it. 00:28:47.609 [2024-07-25 10:17:32.454900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.609 [2024-07-25 10:17:32.454929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.609 qpair failed and we were unable to recover it. 
00:28:47.609 [2024-07-25 10:17:32.455094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.609 [2024-07-25 10:17:32.455122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.609 qpair failed and we were unable to recover it. 00:28:47.609 [2024-07-25 10:17:32.455330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.609 [2024-07-25 10:17:32.455358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.609 qpair failed and we were unable to recover it. 00:28:47.609 [2024-07-25 10:17:32.455487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.609 [2024-07-25 10:17:32.455516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.609 qpair failed and we were unable to recover it. 00:28:47.609 [2024-07-25 10:17:32.455722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.609 [2024-07-25 10:17:32.455751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.609 qpair failed and we were unable to recover it. 00:28:47.609 [2024-07-25 10:17:32.455961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.609 [2024-07-25 10:17:32.455990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.609 qpair failed and we were unable to recover it. 00:28:47.609 [2024-07-25 10:17:32.456191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.609 [2024-07-25 10:17:32.456219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.609 qpair failed and we were unable to recover it. 00:28:47.609 [2024-07-25 10:17:32.456413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.609 [2024-07-25 10:17:32.456472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.609 qpair failed and we were unable to recover it. 00:28:47.609 [2024-07-25 10:17:32.456646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.609 [2024-07-25 10:17:32.456675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.609 qpair failed and we were unable to recover it. 00:28:47.609 [2024-07-25 10:17:32.456824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.609 [2024-07-25 10:17:32.456852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.609 qpair failed and we were unable to recover it. 00:28:47.609 [2024-07-25 10:17:32.457054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.609 [2024-07-25 10:17:32.457082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.609 qpair failed and we were unable to recover it. 
00:28:47.609 [2024-07-25 10:17:32.457283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.609 [2024-07-25 10:17:32.457312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.609 qpair failed and we were unable to recover it. 00:28:47.609 [2024-07-25 10:17:32.457514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.609 [2024-07-25 10:17:32.457544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.609 qpair failed and we were unable to recover it. 00:28:47.609 [2024-07-25 10:17:32.457751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.609 [2024-07-25 10:17:32.457780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.609 qpair failed and we were unable to recover it. 00:28:47.609 [2024-07-25 10:17:32.457942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.609 [2024-07-25 10:17:32.457971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.609 qpair failed and we were unable to recover it. 00:28:47.609 [2024-07-25 10:17:32.458173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.609 [2024-07-25 10:17:32.458202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.609 qpair failed and we were unable to recover it. 00:28:47.609 [2024-07-25 10:17:32.458399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.609 [2024-07-25 10:17:32.458433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.609 qpair failed and we were unable to recover it. 00:28:47.609 [2024-07-25 10:17:32.458639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.609 [2024-07-25 10:17:32.458668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.609 qpair failed and we were unable to recover it. 00:28:47.609 [2024-07-25 10:17:32.458840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.609 [2024-07-25 10:17:32.458868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.609 qpair failed and we were unable to recover it. 00:28:47.609 [2024-07-25 10:17:32.459073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.609 [2024-07-25 10:17:32.459101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.609 qpair failed and we were unable to recover it. 00:28:47.609 [2024-07-25 10:17:32.459252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.609 [2024-07-25 10:17:32.459281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.609 qpair failed and we were unable to recover it. 
00:28:47.609 [2024-07-25 10:17:32.459469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.609 [2024-07-25 10:17:32.459499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.609 qpair failed and we were unable to recover it. 00:28:47.609 [2024-07-25 10:17:32.459678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.609 [2024-07-25 10:17:32.459707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.609 qpair failed and we were unable to recover it. 00:28:47.609 [2024-07-25 10:17:32.459893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.609 [2024-07-25 10:17:32.459921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.609 qpair failed and we were unable to recover it. 00:28:47.609 [2024-07-25 10:17:32.460098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.609 [2024-07-25 10:17:32.460126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.609 qpair failed and we were unable to recover it. 00:28:47.609 [2024-07-25 10:17:32.460406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.610 [2024-07-25 10:17:32.460441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.610 qpair failed and we were unable to recover it. 00:28:47.610 [2024-07-25 10:17:32.460642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.610 [2024-07-25 10:17:32.460671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.610 qpair failed and we were unable to recover it. 00:28:47.610 [2024-07-25 10:17:32.460855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.610 [2024-07-25 10:17:32.460885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.610 qpair failed and we were unable to recover it. 00:28:47.610 [2024-07-25 10:17:32.461074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.610 [2024-07-25 10:17:32.461102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.610 qpair failed and we were unable to recover it. 00:28:47.610 [2024-07-25 10:17:32.461285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.610 [2024-07-25 10:17:32.461313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.610 qpair failed and we were unable to recover it. 00:28:47.610 [2024-07-25 10:17:32.461485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.610 [2024-07-25 10:17:32.461514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.610 qpair failed and we were unable to recover it. 
00:28:47.610 [2024-07-25 10:17:32.461692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.610 [2024-07-25 10:17:32.461721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:47.610 qpair failed and we were unable to recover it.
[... the three-line pattern above repeats for every reconnect attempt from 10:17:32.461 through 10:17:32.507; each connect() is refused with errno = 111, mostly against tqpair=0x7effa8000b90, with a short stretch against tqpair=0x7effa0000b90 between 10:17:32.467963 and 10:17:32.470467 ...]
00:28:47.615 [2024-07-25 10:17:32.507644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.615 [2024-07-25 10:17:32.507674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:47.615 qpair failed and we were unable to recover it.
00:28:47.615 [2024-07-25 10:17:32.507875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.615 [2024-07-25 10:17:32.507904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.615 qpair failed and we were unable to recover it. 00:28:47.615 [2024-07-25 10:17:32.508087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.615 [2024-07-25 10:17:32.508117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.615 qpair failed and we were unable to recover it. 00:28:47.615 [2024-07-25 10:17:32.508307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.615 [2024-07-25 10:17:32.508348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.615 qpair failed and we were unable to recover it. 00:28:47.615 [2024-07-25 10:17:32.508519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.615 [2024-07-25 10:17:32.508549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.615 qpair failed and we were unable to recover it. 00:28:47.615 [2024-07-25 10:17:32.508755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.615 [2024-07-25 10:17:32.508785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.615 qpair failed and we were unable to recover it. 00:28:47.616 [2024-07-25 10:17:32.508963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.616 [2024-07-25 10:17:32.508992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.616 qpair failed and we were unable to recover it. 00:28:47.616 [2024-07-25 10:17:32.509187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.616 [2024-07-25 10:17:32.509238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.616 qpair failed and we were unable to recover it. 00:28:47.616 [2024-07-25 10:17:32.509442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.616 [2024-07-25 10:17:32.509482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.616 qpair failed and we were unable to recover it. 00:28:47.616 [2024-07-25 10:17:32.509647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.616 [2024-07-25 10:17:32.509677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.616 qpair failed and we were unable to recover it. 00:28:47.616 [2024-07-25 10:17:32.509831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.616 [2024-07-25 10:17:32.509860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.616 qpair failed and we were unable to recover it. 
00:28:47.616 [2024-07-25 10:17:32.510084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.616 [2024-07-25 10:17:32.510131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.616 qpair failed and we were unable to recover it. 00:28:47.616 [2024-07-25 10:17:32.510299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.616 [2024-07-25 10:17:32.510328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.616 qpair failed and we were unable to recover it. 00:28:47.616 [2024-07-25 10:17:32.510529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.616 [2024-07-25 10:17:32.510558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.616 qpair failed and we were unable to recover it. 00:28:47.616 [2024-07-25 10:17:32.510729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.616 [2024-07-25 10:17:32.510758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.616 qpair failed and we were unable to recover it. 00:28:47.616 [2024-07-25 10:17:32.510962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.616 [2024-07-25 10:17:32.511010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.616 qpair failed and we were unable to recover it. 00:28:47.616 [2024-07-25 10:17:32.511175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.616 [2024-07-25 10:17:32.511204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.616 qpair failed and we were unable to recover it. 00:28:47.616 [2024-07-25 10:17:32.511418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.616 [2024-07-25 10:17:32.511453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.616 qpair failed and we were unable to recover it. 00:28:47.616 [2024-07-25 10:17:32.511641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.616 [2024-07-25 10:17:32.511670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.616 qpair failed and we were unable to recover it. 00:28:47.616 [2024-07-25 10:17:32.511844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.616 [2024-07-25 10:17:32.511891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.616 qpair failed and we were unable to recover it. 00:28:47.616 [2024-07-25 10:17:32.512058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.616 [2024-07-25 10:17:32.512087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.616 qpair failed and we were unable to recover it. 
00:28:47.616 [2024-07-25 10:17:32.512293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.616 [2024-07-25 10:17:32.512322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.616 qpair failed and we were unable to recover it. 00:28:47.616 [2024-07-25 10:17:32.512531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.616 [2024-07-25 10:17:32.512561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.616 qpair failed and we were unable to recover it. 00:28:47.616 [2024-07-25 10:17:32.512689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.616 [2024-07-25 10:17:32.512718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.616 qpair failed and we were unable to recover it. 00:28:47.616 [2024-07-25 10:17:32.512895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.616 [2024-07-25 10:17:32.512924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.616 qpair failed and we were unable to recover it. 00:28:47.616 [2024-07-25 10:17:32.513115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.616 [2024-07-25 10:17:32.513144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.616 qpair failed and we were unable to recover it. 00:28:47.616 [2024-07-25 10:17:32.513318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.616 [2024-07-25 10:17:32.513347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.616 qpair failed and we were unable to recover it. 00:28:47.616 [2024-07-25 10:17:32.513530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.616 [2024-07-25 10:17:32.513560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.616 qpair failed and we were unable to recover it. 00:28:47.616 [2024-07-25 10:17:32.513733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.616 [2024-07-25 10:17:32.513762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.616 qpair failed and we were unable to recover it. 00:28:47.616 [2024-07-25 10:17:32.513930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.616 [2024-07-25 10:17:32.513959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.616 qpair failed and we were unable to recover it. 00:28:47.616 [2024-07-25 10:17:32.514159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.616 [2024-07-25 10:17:32.514188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.616 qpair failed and we were unable to recover it. 
00:28:47.616 [2024-07-25 10:17:32.514372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.616 [2024-07-25 10:17:32.514401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.616 qpair failed and we were unable to recover it. 00:28:47.616 [2024-07-25 10:17:32.514573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.616 [2024-07-25 10:17:32.514602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.616 qpair failed and we were unable to recover it. 00:28:47.616 [2024-07-25 10:17:32.514779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.616 [2024-07-25 10:17:32.514809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.616 qpair failed and we were unable to recover it. 00:28:47.616 [2024-07-25 10:17:32.515026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.616 [2024-07-25 10:17:32.515056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.616 qpair failed and we were unable to recover it. 00:28:47.616 [2024-07-25 10:17:32.515264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.616 [2024-07-25 10:17:32.515294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.616 qpair failed and we were unable to recover it. 00:28:47.616 [2024-07-25 10:17:32.515499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.616 [2024-07-25 10:17:32.515529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.616 qpair failed and we were unable to recover it. 00:28:47.616 [2024-07-25 10:17:32.515704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.616 [2024-07-25 10:17:32.515733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.616 qpair failed and we were unable to recover it. 00:28:47.616 [2024-07-25 10:17:32.515942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.616 [2024-07-25 10:17:32.515971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.616 qpair failed and we were unable to recover it. 00:28:47.616 [2024-07-25 10:17:32.516099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.616 [2024-07-25 10:17:32.516147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.616 qpair failed and we were unable to recover it. 00:28:47.616 [2024-07-25 10:17:32.516331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.616 [2024-07-25 10:17:32.516360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.616 qpair failed and we were unable to recover it. 
00:28:47.616 [2024-07-25 10:17:32.516546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.616 [2024-07-25 10:17:32.516576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.616 qpair failed and we were unable to recover it. 00:28:47.616 [2024-07-25 10:17:32.516785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.616 [2024-07-25 10:17:32.516814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.616 qpair failed and we were unable to recover it. 00:28:47.616 [2024-07-25 10:17:32.516989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.617 [2024-07-25 10:17:32.517037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.617 qpair failed and we were unable to recover it. 00:28:47.617 [2024-07-25 10:17:32.517245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.617 [2024-07-25 10:17:32.517274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.617 qpair failed and we were unable to recover it. 00:28:47.617 [2024-07-25 10:17:32.517450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.617 [2024-07-25 10:17:32.517480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.617 qpair failed and we were unable to recover it. 00:28:47.617 [2024-07-25 10:17:32.517653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.617 [2024-07-25 10:17:32.517683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.617 qpair failed and we were unable to recover it. 00:28:47.617 [2024-07-25 10:17:32.517856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.617 [2024-07-25 10:17:32.517908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.617 qpair failed and we were unable to recover it. 00:28:47.617 [2024-07-25 10:17:32.518121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.617 [2024-07-25 10:17:32.518151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.617 qpair failed and we were unable to recover it. 00:28:47.617 [2024-07-25 10:17:32.518325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.617 [2024-07-25 10:17:32.518355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.617 qpair failed and we were unable to recover it. 00:28:47.617 [2024-07-25 10:17:32.518560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.617 [2024-07-25 10:17:32.518589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.617 qpair failed and we were unable to recover it. 
00:28:47.617 [2024-07-25 10:17:32.518775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.617 [2024-07-25 10:17:32.518824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.617 qpair failed and we were unable to recover it. 00:28:47.617 [2024-07-25 10:17:32.519023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.617 [2024-07-25 10:17:32.519052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.617 qpair failed and we were unable to recover it. 00:28:47.617 [2024-07-25 10:17:32.519236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.617 [2024-07-25 10:17:32.519265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.617 qpair failed and we were unable to recover it. 00:28:47.617 [2024-07-25 10:17:32.519458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.617 [2024-07-25 10:17:32.519493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.617 qpair failed and we were unable to recover it. 00:28:47.617 [2024-07-25 10:17:32.519705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.617 [2024-07-25 10:17:32.519752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.617 qpair failed and we were unable to recover it. 00:28:47.617 [2024-07-25 10:17:32.519922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.617 [2024-07-25 10:17:32.519952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.617 qpair failed and we were unable to recover it. 00:28:47.617 [2024-07-25 10:17:32.520125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.617 [2024-07-25 10:17:32.520154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.617 qpair failed and we were unable to recover it. 00:28:47.617 [2024-07-25 10:17:32.520348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.617 [2024-07-25 10:17:32.520377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.617 qpair failed and we were unable to recover it. 00:28:47.617 [2024-07-25 10:17:32.520542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.617 [2024-07-25 10:17:32.520572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.617 qpair failed and we were unable to recover it. 00:28:47.617 [2024-07-25 10:17:32.520746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.617 [2024-07-25 10:17:32.520776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.617 qpair failed and we were unable to recover it. 
00:28:47.617 [2024-07-25 10:17:32.520983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.617 [2024-07-25 10:17:32.521012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.617 qpair failed and we were unable to recover it. 00:28:47.617 [2024-07-25 10:17:32.521191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.617 [2024-07-25 10:17:32.521221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.617 qpair failed and we were unable to recover it. 00:28:47.617 [2024-07-25 10:17:32.521387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.617 [2024-07-25 10:17:32.521416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.617 qpair failed and we were unable to recover it. 00:28:47.617 [2024-07-25 10:17:32.521596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.617 [2024-07-25 10:17:32.521626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.617 qpair failed and we were unable to recover it. 00:28:47.617 [2024-07-25 10:17:32.521802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.617 [2024-07-25 10:17:32.521831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.617 qpair failed and we were unable to recover it. 00:28:47.617 [2024-07-25 10:17:32.522006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.617 [2024-07-25 10:17:32.522035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.617 qpair failed and we were unable to recover it. 00:28:47.617 [2024-07-25 10:17:32.522246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.617 [2024-07-25 10:17:32.522296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.617 qpair failed and we were unable to recover it. 00:28:47.617 [2024-07-25 10:17:32.522499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.617 [2024-07-25 10:17:32.522529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.617 qpair failed and we were unable to recover it. 00:28:47.617 [2024-07-25 10:17:32.522700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.617 [2024-07-25 10:17:32.522729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.617 qpair failed and we were unable to recover it. 00:28:47.617 [2024-07-25 10:17:32.522927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.617 [2024-07-25 10:17:32.522956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.617 qpair failed and we were unable to recover it. 
00:28:47.617 [2024-07-25 10:17:32.523159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.617 [2024-07-25 10:17:32.523206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.617 qpair failed and we were unable to recover it. 00:28:47.617 [2024-07-25 10:17:32.523378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.617 [2024-07-25 10:17:32.523406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.617 qpair failed and we were unable to recover it. 00:28:47.617 [2024-07-25 10:17:32.523634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.617 [2024-07-25 10:17:32.523664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.617 qpair failed and we were unable to recover it. 00:28:47.617 [2024-07-25 10:17:32.523867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.617 [2024-07-25 10:17:32.523897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.617 qpair failed and we were unable to recover it. 00:28:47.617 [2024-07-25 10:17:32.524104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.617 [2024-07-25 10:17:32.524150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.617 qpair failed and we were unable to recover it. 00:28:47.617 [2024-07-25 10:17:32.524346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.617 [2024-07-25 10:17:32.524376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.617 qpair failed and we were unable to recover it. 00:28:47.617 [2024-07-25 10:17:32.524590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.617 [2024-07-25 10:17:32.524620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.617 qpair failed and we were unable to recover it. 00:28:47.617 [2024-07-25 10:17:32.524757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.617 [2024-07-25 10:17:32.524786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.617 qpair failed and we were unable to recover it. 00:28:47.617 [2024-07-25 10:17:32.525002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.617 [2024-07-25 10:17:32.525049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.617 qpair failed and we were unable to recover it. 00:28:47.617 [2024-07-25 10:17:32.525236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.617 [2024-07-25 10:17:32.525265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.617 qpair failed and we were unable to recover it. 
00:28:47.618 [2024-07-25 10:17:32.525454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.618 [2024-07-25 10:17:32.525484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.618 qpair failed and we were unable to recover it. 00:28:47.618 [2024-07-25 10:17:32.525632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.618 [2024-07-25 10:17:32.525661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.618 qpair failed and we were unable to recover it. 00:28:47.618 [2024-07-25 10:17:32.525837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.618 [2024-07-25 10:17:32.525884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.618 qpair failed and we were unable to recover it. 00:28:47.618 [2024-07-25 10:17:32.526056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.618 [2024-07-25 10:17:32.526085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.618 qpair failed and we were unable to recover it. 00:28:47.618 [2024-07-25 10:17:32.526262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.618 [2024-07-25 10:17:32.526291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.618 qpair failed and we were unable to recover it. 00:28:47.618 [2024-07-25 10:17:32.526492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.618 [2024-07-25 10:17:32.526522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.618 qpair failed and we were unable to recover it. 00:28:47.618 [2024-07-25 10:17:32.526699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.618 [2024-07-25 10:17:32.526753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.618 qpair failed and we were unable to recover it. 00:28:47.618 [2024-07-25 10:17:32.526900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.618 [2024-07-25 10:17:32.526929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.618 qpair failed and we were unable to recover it. 00:28:47.618 [2024-07-25 10:17:32.527112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.618 [2024-07-25 10:17:32.527141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.618 qpair failed and we were unable to recover it. 00:28:47.618 [2024-07-25 10:17:32.527280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.618 [2024-07-25 10:17:32.527309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.618 qpair failed and we were unable to recover it. 
00:28:47.618 [2024-07-25 10:17:32.527510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.618 [2024-07-25 10:17:32.527540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.618 qpair failed and we were unable to recover it. 00:28:47.618 [2024-07-25 10:17:32.527688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.618 [2024-07-25 10:17:32.527717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.618 qpair failed and we were unable to recover it. 00:28:47.618 [2024-07-25 10:17:32.527887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.618 [2024-07-25 10:17:32.527917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.618 qpair failed and we were unable to recover it. 00:28:47.618 [2024-07-25 10:17:32.528062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.618 [2024-07-25 10:17:32.528092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.618 qpair failed and we were unable to recover it. 00:28:47.618 [2024-07-25 10:17:32.528258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.618 [2024-07-25 10:17:32.528287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.618 qpair failed and we were unable to recover it. 00:28:47.618 [2024-07-25 10:17:32.528460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.618 [2024-07-25 10:17:32.528489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.618 qpair failed and we were unable to recover it. 00:28:47.618 [2024-07-25 10:17:32.528661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.618 [2024-07-25 10:17:32.528690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.618 qpair failed and we were unable to recover it. 00:28:47.618 [2024-07-25 10:17:32.528892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.618 [2024-07-25 10:17:32.528921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.618 qpair failed and we were unable to recover it. 00:28:47.618 [2024-07-25 10:17:32.529146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.618 [2024-07-25 10:17:32.529175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.618 qpair failed and we were unable to recover it. 00:28:47.618 [2024-07-25 10:17:32.529341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.618 [2024-07-25 10:17:32.529371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.618 qpair failed and we were unable to recover it. 
00:28:47.618 [2024-07-25 10:17:32.529525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.618 [2024-07-25 10:17:32.529555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.618 qpair failed and we were unable to recover it. 00:28:47.618 [2024-07-25 10:17:32.529696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.618 [2024-07-25 10:17:32.529725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.618 qpair failed and we were unable to recover it. 00:28:47.618 [2024-07-25 10:17:32.529916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.618 [2024-07-25 10:17:32.529971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.618 qpair failed and we were unable to recover it. 00:28:47.618 [2024-07-25 10:17:32.530108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.618 [2024-07-25 10:17:32.530138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.618 qpair failed and we were unable to recover it. 00:28:47.618 [2024-07-25 10:17:32.530340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.618 [2024-07-25 10:17:32.530370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.618 qpair failed and we were unable to recover it. 00:28:47.618 [2024-07-25 10:17:32.530557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.618 [2024-07-25 10:17:32.530588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.618 qpair failed and we were unable to recover it. 00:28:47.618 [2024-07-25 10:17:32.530794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.618 [2024-07-25 10:17:32.530841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.618 qpair failed and we were unable to recover it. 00:28:47.618 [2024-07-25 10:17:32.531003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.618 [2024-07-25 10:17:32.531033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.618 qpair failed and we were unable to recover it. 00:28:47.618 [2024-07-25 10:17:32.531203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.618 [2024-07-25 10:17:32.531233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.618 qpair failed and we were unable to recover it. 00:28:47.618 [2024-07-25 10:17:32.531408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.618 [2024-07-25 10:17:32.531466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.618 qpair failed and we were unable to recover it. 
00:28:47.618 [2024-07-25 10:17:32.531636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.618 [2024-07-25 10:17:32.531665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.618 qpair failed and we were unable to recover it. 00:28:47.618 [2024-07-25 10:17:32.531846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.618 [2024-07-25 10:17:32.531875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.618 qpair failed and we were unable to recover it. 00:28:47.618 [2024-07-25 10:17:32.532012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.618 [2024-07-25 10:17:32.532051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.618 qpair failed and we were unable to recover it. 00:28:47.619 [2024-07-25 10:17:32.532247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.619 [2024-07-25 10:17:32.532280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.619 qpair failed and we were unable to recover it. 00:28:47.619 [2024-07-25 10:17:32.532492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.619 [2024-07-25 10:17:32.532522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.619 qpair failed and we were unable to recover it. 00:28:47.619 [2024-07-25 10:17:32.532701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.619 [2024-07-25 10:17:32.532731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.619 qpair failed and we were unable to recover it. 00:28:47.619 [2024-07-25 10:17:32.532907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.619 [2024-07-25 10:17:32.532936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.619 qpair failed and we were unable to recover it. 00:28:47.619 [2024-07-25 10:17:32.533147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.619 [2024-07-25 10:17:32.533176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.619 qpair failed and we were unable to recover it. 00:28:47.619 [2024-07-25 10:17:32.533309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.619 [2024-07-25 10:17:32.533338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.619 qpair failed and we were unable to recover it. 00:28:47.619 [2024-07-25 10:17:32.533540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.619 [2024-07-25 10:17:32.533570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.619 qpair failed and we were unable to recover it. 
00:28:47.619 [2024-07-25 10:17:32.533722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.619 [2024-07-25 10:17:32.533752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.619 qpair failed and we were unable to recover it. 00:28:47.619 [2024-07-25 10:17:32.533948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.619 [2024-07-25 10:17:32.533977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.619 qpair failed and we were unable to recover it. 00:28:47.619 [2024-07-25 10:17:32.534151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.619 [2024-07-25 10:17:32.534199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.619 qpair failed and we were unable to recover it. 00:28:47.619 [2024-07-25 10:17:32.534385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.619 [2024-07-25 10:17:32.534415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.619 qpair failed and we were unable to recover it. 00:28:47.619 [2024-07-25 10:17:32.534585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.619 [2024-07-25 10:17:32.534626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.619 qpair failed and we were unable to recover it. 00:28:47.619 [2024-07-25 10:17:32.534825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.619 [2024-07-25 10:17:32.534855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.619 qpair failed and we were unable to recover it. 00:28:47.619 [2024-07-25 10:17:32.535021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.619 [2024-07-25 10:17:32.535067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.619 qpair failed and we were unable to recover it. 00:28:47.619 [2024-07-25 10:17:32.535212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.619 [2024-07-25 10:17:32.535241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.619 qpair failed and we were unable to recover it. 00:28:47.619 [2024-07-25 10:17:32.535459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.619 [2024-07-25 10:17:32.535489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.619 qpair failed and we were unable to recover it. 00:28:47.619 [2024-07-25 10:17:32.535622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.619 [2024-07-25 10:17:32.535652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.619 qpair failed and we were unable to recover it. 
00:28:47.619 [2024-07-25 10:17:32.535865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.619 [2024-07-25 10:17:32.535911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.619 qpair failed and we were unable to recover it. 00:28:47.619 [2024-07-25 10:17:32.536113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.619 [2024-07-25 10:17:32.536143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.619 qpair failed and we were unable to recover it. 00:28:47.619 [2024-07-25 10:17:32.536340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.619 [2024-07-25 10:17:32.536369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.619 qpair failed and we were unable to recover it. 00:28:47.619 [2024-07-25 10:17:32.536567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.619 [2024-07-25 10:17:32.536597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.619 qpair failed and we were unable to recover it. 00:28:47.619 [2024-07-25 10:17:32.536750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.619 [2024-07-25 10:17:32.536796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.619 qpair failed and we were unable to recover it. 00:28:47.619 [2024-07-25 10:17:32.537002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.619 [2024-07-25 10:17:32.537031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.619 qpair failed and we were unable to recover it. 00:28:47.619 [2024-07-25 10:17:32.537215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.619 [2024-07-25 10:17:32.537245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.619 qpair failed and we were unable to recover it. 00:28:47.619 [2024-07-25 10:17:32.537425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.619 [2024-07-25 10:17:32.537459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.619 qpair failed and we were unable to recover it. 00:28:47.619 [2024-07-25 10:17:32.537634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.619 [2024-07-25 10:17:32.537663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.619 qpair failed and we were unable to recover it. 00:28:47.619 [2024-07-25 10:17:32.537821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.619 [2024-07-25 10:17:32.537850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.619 qpair failed and we were unable to recover it. 
00:28:47.619 [2024-07-25 10:17:32.538028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.619 [2024-07-25 10:17:32.538058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.619 qpair failed and we were unable to recover it. 00:28:47.619 [2024-07-25 10:17:32.538225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.619 [2024-07-25 10:17:32.538254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.619 qpair failed and we were unable to recover it. 00:28:47.619 [2024-07-25 10:17:32.538474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.619 [2024-07-25 10:17:32.538509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.619 qpair failed and we were unable to recover it. 00:28:47.619 [2024-07-25 10:17:32.538707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.619 [2024-07-25 10:17:32.538736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.619 qpair failed and we were unable to recover it. 00:28:47.619 [2024-07-25 10:17:32.538935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.619 [2024-07-25 10:17:32.538964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.619 qpair failed and we were unable to recover it. 00:28:47.619 [2024-07-25 10:17:32.539166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.619 [2024-07-25 10:17:32.539196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.619 qpair failed and we were unable to recover it. 00:28:47.619 [2024-07-25 10:17:32.539337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.619 [2024-07-25 10:17:32.539366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.619 qpair failed and we were unable to recover it. 00:28:47.619 [2024-07-25 10:17:32.539536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.619 [2024-07-25 10:17:32.539566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.619 qpair failed and we were unable to recover it. 00:28:47.619 [2024-07-25 10:17:32.539764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.619 [2024-07-25 10:17:32.539793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.619 qpair failed and we were unable to recover it. 00:28:47.619 [2024-07-25 10:17:32.539963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.619 [2024-07-25 10:17:32.539992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.620 qpair failed and we were unable to recover it. 
00:28:47.620 [2024-07-25 10:17:32.540204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.620 [2024-07-25 10:17:32.540251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.620 qpair failed and we were unable to recover it. 00:28:47.620 [2024-07-25 10:17:32.540400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.620 [2024-07-25 10:17:32.540435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.620 qpair failed and we were unable to recover it. 00:28:47.620 [2024-07-25 10:17:32.540626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.620 [2024-07-25 10:17:32.540656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.620 qpair failed and we were unable to recover it. 00:28:47.620 [2024-07-25 10:17:32.540855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.620 [2024-07-25 10:17:32.540888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.620 qpair failed and we were unable to recover it. 00:28:47.620 [2024-07-25 10:17:32.541090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.620 [2024-07-25 10:17:32.541137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.620 qpair failed and we were unable to recover it. 00:28:47.620 [2024-07-25 10:17:32.541282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.620 [2024-07-25 10:17:32.541311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.620 qpair failed and we were unable to recover it. 00:28:47.620 [2024-07-25 10:17:32.541523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.620 [2024-07-25 10:17:32.541553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.620 qpair failed and we were unable to recover it. 00:28:47.620 [2024-07-25 10:17:32.541752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.620 [2024-07-25 10:17:32.541781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.620 qpair failed and we were unable to recover it. 00:28:47.620 [2024-07-25 10:17:32.541989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.620 [2024-07-25 10:17:32.542036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.620 qpair failed and we were unable to recover it. 00:28:47.620 [2024-07-25 10:17:32.542216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.620 [2024-07-25 10:17:32.542245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.620 qpair failed and we were unable to recover it. 
00:28:47.620 [2024-07-25 10:17:32.542383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.620 [2024-07-25 10:17:32.542412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.620 qpair failed and we were unable to recover it. 00:28:47.620 [2024-07-25 10:17:32.542585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.620 [2024-07-25 10:17:32.542615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.620 qpair failed and we were unable to recover it. 00:28:47.620 [2024-07-25 10:17:32.542805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.620 [2024-07-25 10:17:32.542834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.620 qpair failed and we were unable to recover it. 00:28:47.620 [2024-07-25 10:17:32.543039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.620 [2024-07-25 10:17:32.543068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.620 qpair failed and we were unable to recover it. 00:28:47.620 [2024-07-25 10:17:32.543267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.620 [2024-07-25 10:17:32.543297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.620 qpair failed and we were unable to recover it. 00:28:47.620 [2024-07-25 10:17:32.543445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.620 [2024-07-25 10:17:32.543475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.620 qpair failed and we were unable to recover it. 00:28:47.620 [2024-07-25 10:17:32.543641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.620 [2024-07-25 10:17:32.543670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.620 qpair failed and we were unable to recover it. 00:28:47.620 [2024-07-25 10:17:32.543876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.620 [2024-07-25 10:17:32.543905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.620 qpair failed and we were unable to recover it. 00:28:47.620 [2024-07-25 10:17:32.544110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.620 [2024-07-25 10:17:32.544139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.620 qpair failed and we were unable to recover it. 00:28:47.620 [2024-07-25 10:17:32.544319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.620 [2024-07-25 10:17:32.544348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.620 qpair failed and we were unable to recover it. 
00:28:47.620 [2024-07-25 10:17:32.544522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.620 [2024-07-25 10:17:32.544552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.620 qpair failed and we were unable to recover it. 00:28:47.620 [2024-07-25 10:17:32.544726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.620 [2024-07-25 10:17:32.544756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.620 qpair failed and we were unable to recover it. 00:28:47.620 [2024-07-25 10:17:32.544896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.620 [2024-07-25 10:17:32.544925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.620 qpair failed and we were unable to recover it. 00:28:47.620 [2024-07-25 10:17:32.545078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.620 [2024-07-25 10:17:32.545107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.620 qpair failed and we were unable to recover it. 00:28:47.620 [2024-07-25 10:17:32.545281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.620 [2024-07-25 10:17:32.545310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.620 qpair failed and we were unable to recover it. 00:28:47.620 [2024-07-25 10:17:32.545489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.620 [2024-07-25 10:17:32.545519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.620 qpair failed and we were unable to recover it. 00:28:47.620 [2024-07-25 10:17:32.545729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.620 [2024-07-25 10:17:32.545758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.620 qpair failed and we were unable to recover it. 00:28:47.620 [2024-07-25 10:17:32.545940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.620 [2024-07-25 10:17:32.545969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.620 qpair failed and we were unable to recover it. 00:28:47.620 [2024-07-25 10:17:32.546168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.620 [2024-07-25 10:17:32.546197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.620 qpair failed and we were unable to recover it. 00:28:47.620 [2024-07-25 10:17:32.546368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.620 [2024-07-25 10:17:32.546398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.620 qpair failed and we were unable to recover it. 
00:28:47.620 [2024-07-25 10:17:32.546597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.620 [2024-07-25 10:17:32.546627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.620 qpair failed and we were unable to recover it. 00:28:47.620 [2024-07-25 10:17:32.546798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.620 [2024-07-25 10:17:32.546827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.620 qpair failed and we were unable to recover it. 00:28:47.620 [2024-07-25 10:17:32.547028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.620 [2024-07-25 10:17:32.547075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.620 qpair failed and we were unable to recover it. 00:28:47.620 [2024-07-25 10:17:32.547255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.620 [2024-07-25 10:17:32.547285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.620 qpair failed and we were unable to recover it. 00:28:47.620 [2024-07-25 10:17:32.547465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.620 [2024-07-25 10:17:32.547513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.620 qpair failed and we were unable to recover it. 00:28:47.620 [2024-07-25 10:17:32.547724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.620 [2024-07-25 10:17:32.547753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.620 qpair failed and we were unable to recover it. 00:28:47.620 [2024-07-25 10:17:32.547967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.620 [2024-07-25 10:17:32.548014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.620 qpair failed and we were unable to recover it. 00:28:47.620 [2024-07-25 10:17:32.548192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.621 [2024-07-25 10:17:32.548222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.621 qpair failed and we were unable to recover it. 00:28:47.621 [2024-07-25 10:17:32.548436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.621 [2024-07-25 10:17:32.548466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.621 qpair failed and we were unable to recover it. 00:28:47.621 [2024-07-25 10:17:32.548628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.621 [2024-07-25 10:17:32.548658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.621 qpair failed and we were unable to recover it. 
00:28:47.621 [2024-07-25 10:17:32.548821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.621 [2024-07-25 10:17:32.548867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.621 qpair failed and we were unable to recover it. 00:28:47.621 [2024-07-25 10:17:32.549041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.621 [2024-07-25 10:17:32.549070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.621 qpair failed and we were unable to recover it. 00:28:47.621 [2024-07-25 10:17:32.549279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.621 [2024-07-25 10:17:32.549309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.621 qpair failed and we were unable to recover it. 00:28:47.621 [2024-07-25 10:17:32.549498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.621 [2024-07-25 10:17:32.549543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.621 qpair failed and we were unable to recover it. 00:28:47.621 [2024-07-25 10:17:32.549744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.621 [2024-07-25 10:17:32.549791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.621 qpair failed and we were unable to recover it. 00:28:47.621 [2024-07-25 10:17:32.549995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.621 [2024-07-25 10:17:32.550024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.621 qpair failed and we were unable to recover it. 00:28:47.621 [2024-07-25 10:17:32.550197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.621 [2024-07-25 10:17:32.550227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.621 qpair failed and we were unable to recover it. 00:28:47.621 [2024-07-25 10:17:32.550393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.621 [2024-07-25 10:17:32.550422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.621 qpair failed and we were unable to recover it. 00:28:47.621 [2024-07-25 10:17:32.550604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.621 [2024-07-25 10:17:32.550633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.621 qpair failed and we were unable to recover it. 00:28:47.621 [2024-07-25 10:17:32.550807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.621 [2024-07-25 10:17:32.550837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.621 qpair failed and we were unable to recover it. 
00:28:47.621 [2024-07-25 10:17:32.551017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.621 [2024-07-25 10:17:32.551047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.621 qpair failed and we were unable to recover it. 00:28:47.621 [2024-07-25 10:17:32.551233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.621 [2024-07-25 10:17:32.551262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.621 qpair failed and we were unable to recover it. 00:28:47.621 [2024-07-25 10:17:32.551453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.621 [2024-07-25 10:17:32.551483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.621 qpair failed and we were unable to recover it. 00:28:47.621 [2024-07-25 10:17:32.551654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.621 [2024-07-25 10:17:32.551684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.621 qpair failed and we were unable to recover it. 00:28:47.621 [2024-07-25 10:17:32.551869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.621 [2024-07-25 10:17:32.551899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.621 qpair failed and we were unable to recover it. 00:28:47.621 [2024-07-25 10:17:32.552070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.621 [2024-07-25 10:17:32.552098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.621 qpair failed and we were unable to recover it. 00:28:47.621 [2024-07-25 10:17:32.552265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.621 [2024-07-25 10:17:32.552294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.621 qpair failed and we were unable to recover it. 00:28:47.621 [2024-07-25 10:17:32.552500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.621 [2024-07-25 10:17:32.552530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.621 qpair failed and we were unable to recover it. 00:28:47.621 [2024-07-25 10:17:32.552691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.621 [2024-07-25 10:17:32.552720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.621 qpair failed and we were unable to recover it. 00:28:47.621 [2024-07-25 10:17:32.552857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.621 [2024-07-25 10:17:32.552886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.621 qpair failed and we were unable to recover it. 
00:28:47.621 [2024-07-25 10:17:32.553052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.621 [2024-07-25 10:17:32.553081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.621 qpair failed and we were unable to recover it. 00:28:47.621 [2024-07-25 10:17:32.553285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.621 [2024-07-25 10:17:32.553314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.621 qpair failed and we were unable to recover it. 00:28:47.621 [2024-07-25 10:17:32.553514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.621 [2024-07-25 10:17:32.553544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.621 qpair failed and we were unable to recover it. 00:28:47.621 [2024-07-25 10:17:32.553743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.621 [2024-07-25 10:17:32.553773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.621 qpair failed and we were unable to recover it. 00:28:47.621 [2024-07-25 10:17:32.553972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.621 [2024-07-25 10:17:32.554002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.621 qpair failed and we were unable to recover it. 00:28:47.621 [2024-07-25 10:17:32.554169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.621 [2024-07-25 10:17:32.554198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.621 qpair failed and we were unable to recover it. 00:28:47.621 [2024-07-25 10:17:32.554387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.621 [2024-07-25 10:17:32.554416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.621 qpair failed and we were unable to recover it. 00:28:47.621 [2024-07-25 10:17:32.554571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.621 [2024-07-25 10:17:32.554600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.621 qpair failed and we were unable to recover it. 00:28:47.621 [2024-07-25 10:17:32.554801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.621 [2024-07-25 10:17:32.554848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.621 qpair failed and we were unable to recover it. 00:28:47.621 [2024-07-25 10:17:32.555060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.621 [2024-07-25 10:17:32.555089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.621 qpair failed and we were unable to recover it. 
00:28:47.621 [2024-07-25 10:17:32.555284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.621 [2024-07-25 10:17:32.555313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.621 qpair failed and we were unable to recover it. 00:28:47.621 [2024-07-25 10:17:32.555487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.621 [2024-07-25 10:17:32.555517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.621 qpair failed and we were unable to recover it. 00:28:47.621 [2024-07-25 10:17:32.555695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.621 [2024-07-25 10:17:32.555743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.621 qpair failed and we were unable to recover it. 00:28:47.621 [2024-07-25 10:17:32.555910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.621 [2024-07-25 10:17:32.555939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.621 qpair failed and we were unable to recover it. 00:28:47.621 [2024-07-25 10:17:32.556085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.621 [2024-07-25 10:17:32.556115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.621 qpair failed and we were unable to recover it. 00:28:47.621 [2024-07-25 10:17:32.556287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.622 [2024-07-25 10:17:32.556316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.622 qpair failed and we were unable to recover it. 00:28:47.622 [2024-07-25 10:17:32.556490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.622 [2024-07-25 10:17:32.556540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.622 qpair failed and we were unable to recover it. 00:28:47.622 [2024-07-25 10:17:32.556725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.622 [2024-07-25 10:17:32.556754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.622 qpair failed and we were unable to recover it. 00:28:47.622 [2024-07-25 10:17:32.556952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.622 [2024-07-25 10:17:32.556981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.622 qpair failed and we were unable to recover it. 00:28:47.622 [2024-07-25 10:17:32.557164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.622 [2024-07-25 10:17:32.557193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.622 qpair failed and we were unable to recover it. 
00:28:47.622 [2024-07-25 10:17:32.557381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.622 [2024-07-25 10:17:32.557421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.622 qpair failed and we were unable to recover it. 00:28:47.622 [2024-07-25 10:17:32.557628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.622 [2024-07-25 10:17:32.557657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.622 qpair failed and we were unable to recover it. 00:28:47.622 [2024-07-25 10:17:32.557856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.622 [2024-07-25 10:17:32.557885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.622 qpair failed and we were unable to recover it. 00:28:47.622 [2024-07-25 10:17:32.558020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.622 [2024-07-25 10:17:32.558053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.622 qpair failed and we were unable to recover it. 00:28:47.622 [2024-07-25 10:17:32.558255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.622 [2024-07-25 10:17:32.558284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.622 qpair failed and we were unable to recover it. 00:28:47.622 [2024-07-25 10:17:32.558469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.622 [2024-07-25 10:17:32.558515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.622 qpair failed and we were unable to recover it. 00:28:47.622 [2024-07-25 10:17:32.558684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.622 [2024-07-25 10:17:32.558713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.622 qpair failed and we were unable to recover it. 00:28:47.622 [2024-07-25 10:17:32.558927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.622 [2024-07-25 10:17:32.558956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.622 qpair failed and we were unable to recover it. 00:28:47.622 [2024-07-25 10:17:32.559162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.622 [2024-07-25 10:17:32.559209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.622 qpair failed and we were unable to recover it. 00:28:47.622 [2024-07-25 10:17:32.559406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.622 [2024-07-25 10:17:32.559452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.622 qpair failed and we were unable to recover it. 
00:28:47.622 [2024-07-25 10:17:32.559656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.622 [2024-07-25 10:17:32.559686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.622 qpair failed and we were unable to recover it. 00:28:47.622 [2024-07-25 10:17:32.559885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.622 [2024-07-25 10:17:32.559915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.622 qpair failed and we were unable to recover it. 00:28:47.622 [2024-07-25 10:17:32.560105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.622 [2024-07-25 10:17:32.560152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.622 qpair failed and we were unable to recover it. 00:28:47.622 [2024-07-25 10:17:32.560365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.622 [2024-07-25 10:17:32.560394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.622 qpair failed and we were unable to recover it. 00:28:47.622 [2024-07-25 10:17:32.560607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.622 [2024-07-25 10:17:32.560637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.622 qpair failed and we were unable to recover it. 00:28:47.622 [2024-07-25 10:17:32.560803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.622 [2024-07-25 10:17:32.560831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.622 qpair failed and we were unable to recover it. 00:28:47.622 [2024-07-25 10:17:32.560987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.622 [2024-07-25 10:17:32.561034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.622 qpair failed and we were unable to recover it. 00:28:47.622 [2024-07-25 10:17:32.561214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.622 [2024-07-25 10:17:32.561243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.622 qpair failed and we were unable to recover it. 00:28:47.622 [2024-07-25 10:17:32.561414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.622 [2024-07-25 10:17:32.561449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.622 qpair failed and we were unable to recover it. 00:28:47.622 [2024-07-25 10:17:32.561624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.622 [2024-07-25 10:17:32.561653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.622 qpair failed and we were unable to recover it. 
00:28:47.622 [2024-07-25 10:17:32.561809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.622 [2024-07-25 10:17:32.561857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.622 qpair failed and we were unable to recover it. 00:28:47.622 [2024-07-25 10:17:32.562032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.622 [2024-07-25 10:17:32.562061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.622 qpair failed and we were unable to recover it. 00:28:47.622 [2024-07-25 10:17:32.562256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.622 [2024-07-25 10:17:32.562285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.622 qpair failed and we were unable to recover it. 00:28:47.622 [2024-07-25 10:17:32.562485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.622 [2024-07-25 10:17:32.562514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.622 qpair failed and we were unable to recover it. 00:28:47.622 [2024-07-25 10:17:32.562687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.622 [2024-07-25 10:17:32.562721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.622 qpair failed and we were unable to recover it. 00:28:47.622 [2024-07-25 10:17:32.562941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.622 [2024-07-25 10:17:32.562971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.622 qpair failed and we were unable to recover it. 00:28:47.622 [2024-07-25 10:17:32.563168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.622 [2024-07-25 10:17:32.563196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.622 qpair failed and we were unable to recover it. 00:28:47.622 [2024-07-25 10:17:32.563393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.622 [2024-07-25 10:17:32.563422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.622 qpair failed and we were unable to recover it. 00:28:47.622 [2024-07-25 10:17:32.563588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.622 [2024-07-25 10:17:32.563618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.622 qpair failed and we were unable to recover it. 00:28:47.622 [2024-07-25 10:17:32.563796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.622 [2024-07-25 10:17:32.563825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.622 qpair failed and we were unable to recover it. 
00:28:47.622 [2024-07-25 10:17:32.564030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.622 [2024-07-25 10:17:32.564060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.622 qpair failed and we were unable to recover it. 00:28:47.622 [2024-07-25 10:17:32.564260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.622 [2024-07-25 10:17:32.564290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.622 qpair failed and we were unable to recover it. 00:28:47.622 [2024-07-25 10:17:32.564492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.622 [2024-07-25 10:17:32.564522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.622 qpair failed and we were unable to recover it. 00:28:47.623 [2024-07-25 10:17:32.564725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.623 [2024-07-25 10:17:32.564754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.623 qpair failed and we were unable to recover it. 00:28:47.623 [2024-07-25 10:17:32.564952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.623 [2024-07-25 10:17:32.564981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.623 qpair failed and we were unable to recover it. 00:28:47.623 [2024-07-25 10:17:32.565191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.623 [2024-07-25 10:17:32.565221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.623 qpair failed and we were unable to recover it. 00:28:47.623 [2024-07-25 10:17:32.565382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.623 [2024-07-25 10:17:32.565411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.623 qpair failed and we were unable to recover it. 00:28:47.623 [2024-07-25 10:17:32.565632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.623 [2024-07-25 10:17:32.565661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.623 qpair failed and we were unable to recover it. 00:28:47.623 [2024-07-25 10:17:32.565821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.623 [2024-07-25 10:17:32.565851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.623 qpair failed and we were unable to recover it. 00:28:47.623 [2024-07-25 10:17:32.566056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.623 [2024-07-25 10:17:32.566085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.623 qpair failed and we were unable to recover it. 
00:28:47.623 [2024-07-25 10:17:32.566269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.623 [2024-07-25 10:17:32.566298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.623 qpair failed and we were unable to recover it. 00:28:47.623 [2024-07-25 10:17:32.566514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.623 [2024-07-25 10:17:32.566545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.623 qpair failed and we were unable to recover it. 00:28:47.623 [2024-07-25 10:17:32.566723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.623 [2024-07-25 10:17:32.566752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.623 qpair failed and we were unable to recover it. 00:28:47.623 [2024-07-25 10:17:32.566922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.623 [2024-07-25 10:17:32.566956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.623 qpair failed and we were unable to recover it. 00:28:47.623 [2024-07-25 10:17:32.567170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.623 [2024-07-25 10:17:32.567217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.623 qpair failed and we were unable to recover it. 00:28:47.623 [2024-07-25 10:17:32.567399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.623 [2024-07-25 10:17:32.567433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.623 qpair failed and we were unable to recover it. 00:28:47.623 [2024-07-25 10:17:32.567644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.623 [2024-07-25 10:17:32.567674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.623 qpair failed and we were unable to recover it. 00:28:47.623 [2024-07-25 10:17:32.567848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.623 [2024-07-25 10:17:32.567878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.623 qpair failed and we were unable to recover it. 00:28:47.623 [2024-07-25 10:17:32.568036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.623 [2024-07-25 10:17:32.568082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.623 qpair failed and we were unable to recover it. 00:28:47.623 [2024-07-25 10:17:32.568265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.623 [2024-07-25 10:17:32.568294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.623 qpair failed and we were unable to recover it. 
00:28:47.623 [2024-07-25 10:17:32.568469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.623 [2024-07-25 10:17:32.568499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.623 qpair failed and we were unable to recover it. 00:28:47.623 [2024-07-25 10:17:32.568677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.623 [2024-07-25 10:17:32.568707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.623 qpair failed and we were unable to recover it. 00:28:47.623 [2024-07-25 10:17:32.568854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.623 [2024-07-25 10:17:32.568900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.623 qpair failed and we were unable to recover it. 00:28:47.623 [2024-07-25 10:17:32.569074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.623 [2024-07-25 10:17:32.569104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.623 qpair failed and we were unable to recover it. 00:28:47.623 [2024-07-25 10:17:32.569321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.623 [2024-07-25 10:17:32.569350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.623 qpair failed and we were unable to recover it. 00:28:47.623 [2024-07-25 10:17:32.569510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.623 [2024-07-25 10:17:32.569539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.623 qpair failed and we were unable to recover it. 00:28:47.623 [2024-07-25 10:17:32.569730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.623 [2024-07-25 10:17:32.569778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.623 qpair failed and we were unable to recover it. 00:28:47.623 [2024-07-25 10:17:32.569990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.623 [2024-07-25 10:17:32.570020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.623 qpair failed and we were unable to recover it. 00:28:47.623 [2024-07-25 10:17:32.570210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.623 [2024-07-25 10:17:32.570239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.623 qpair failed and we were unable to recover it. 00:28:47.623 [2024-07-25 10:17:32.570444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.623 [2024-07-25 10:17:32.570474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.623 qpair failed and we were unable to recover it. 
00:28:47.623 [2024-07-25 10:17:32.570643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.623 [2024-07-25 10:17:32.570672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.623 qpair failed and we were unable to recover it. 00:28:47.623 [2024-07-25 10:17:32.570846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.623 [2024-07-25 10:17:32.570875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.623 qpair failed and we were unable to recover it. 00:28:47.623 [2024-07-25 10:17:32.571077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.623 [2024-07-25 10:17:32.571106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.623 qpair failed and we were unable to recover it. 00:28:47.623 [2024-07-25 10:17:32.571308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.623 [2024-07-25 10:17:32.571337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.623 qpair failed and we were unable to recover it. 00:28:47.623 [2024-07-25 10:17:32.571541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.623 [2024-07-25 10:17:32.571571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.623 qpair failed and we were unable to recover it. 00:28:47.624 [2024-07-25 10:17:32.571782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.624 [2024-07-25 10:17:32.571812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.624 qpair failed and we were unable to recover it. 00:28:47.624 [2024-07-25 10:17:32.571991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.624 [2024-07-25 10:17:32.572021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.624 qpair failed and we were unable to recover it. 00:28:47.624 [2024-07-25 10:17:32.572232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.624 [2024-07-25 10:17:32.572261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.624 qpair failed and we were unable to recover it. 00:28:47.624 [2024-07-25 10:17:32.572448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.624 [2024-07-25 10:17:32.572488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.624 qpair failed and we were unable to recover it. 00:28:47.624 [2024-07-25 10:17:32.572649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.624 [2024-07-25 10:17:32.572679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.624 qpair failed and we were unable to recover it. 
00:28:47.624 [2024-07-25 10:17:32.572903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.624 [2024-07-25 10:17:32.572932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.624 qpair failed and we were unable to recover it. 00:28:47.624 [2024-07-25 10:17:32.573103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.624 [2024-07-25 10:17:32.573132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.624 qpair failed and we were unable to recover it. 00:28:47.624 [2024-07-25 10:17:32.573279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.624 [2024-07-25 10:17:32.573308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.624 qpair failed and we were unable to recover it. 00:28:47.624 [2024-07-25 10:17:32.573513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.624 [2024-07-25 10:17:32.573543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.624 qpair failed and we were unable to recover it. 00:28:47.624 [2024-07-25 10:17:32.573724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.624 [2024-07-25 10:17:32.573754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.624 qpair failed and we were unable to recover it. 00:28:47.624 [2024-07-25 10:17:32.573930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.624 [2024-07-25 10:17:32.573959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.624 qpair failed and we were unable to recover it. 00:28:47.624 [2024-07-25 10:17:32.574142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.624 [2024-07-25 10:17:32.574171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.624 qpair failed and we were unable to recover it. 00:28:47.624 [2024-07-25 10:17:32.574307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.624 [2024-07-25 10:17:32.574336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.624 qpair failed and we were unable to recover it. 00:28:47.624 [2024-07-25 10:17:32.574543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.624 [2024-07-25 10:17:32.574573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.624 qpair failed and we were unable to recover it. 00:28:47.624 [2024-07-25 10:17:32.574747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.624 [2024-07-25 10:17:32.574777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.624 qpair failed and we were unable to recover it. 
00:28:47.624 [2024-07-25 10:17:32.574920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.624 [2024-07-25 10:17:32.574948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.624 qpair failed and we were unable to recover it. 00:28:47.624 [2024-07-25 10:17:32.575148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.624 [2024-07-25 10:17:32.575178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.624 qpair failed and we were unable to recover it. 00:28:47.624 [2024-07-25 10:17:32.575366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.624 [2024-07-25 10:17:32.575396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.624 qpair failed and we were unable to recover it. 00:28:47.624 [2024-07-25 10:17:32.575572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.624 [2024-07-25 10:17:32.575606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.624 qpair failed and we were unable to recover it. 00:28:47.624 [2024-07-25 10:17:32.575812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.624 [2024-07-25 10:17:32.575841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.624 qpair failed and we were unable to recover it. 00:28:47.624 [2024-07-25 10:17:32.576003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.624 [2024-07-25 10:17:32.576033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.624 qpair failed and we were unable to recover it. 00:28:47.624 [2024-07-25 10:17:32.576207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.624 [2024-07-25 10:17:32.576236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.624 qpair failed and we were unable to recover it. 00:28:47.624 [2024-07-25 10:17:32.576401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.624 [2024-07-25 10:17:32.576434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.624 qpair failed and we were unable to recover it. 00:28:47.624 [2024-07-25 10:17:32.576607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.624 [2024-07-25 10:17:32.576635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.624 qpair failed and we were unable to recover it. 00:28:47.624 [2024-07-25 10:17:32.576850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:47.624 [2024-07-25 10:17:32.576878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420 00:28:47.624 qpair failed and we were unable to recover it. 
00:28:47.624 [2024-07-25 10:17:32.577036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.624 [2024-07-25 10:17:32.577065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:47.624 qpair failed and we were unable to recover it.
00:28:47.624 [2024-07-25 10:17:32.577283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.624 [2024-07-25 10:17:32.577312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:47.624 qpair failed and we were unable to recover it.
00:28:47.624 [2024-07-25 10:17:32.577481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.624 [2024-07-25 10:17:32.577512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:47.624 qpair failed and we were unable to recover it.
00:28:47.624 [2024-07-25 10:17:32.577714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.624 [2024-07-25 10:17:32.577743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:47.624 qpair failed and we were unable to recover it.
00:28:47.624 [2024-07-25 10:17:32.577923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.624 [2024-07-25 10:17:32.577951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:47.624 qpair failed and we were unable to recover it.
00:28:47.624 [2024-07-25 10:17:32.578124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.624 [2024-07-25 10:17:32.578152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:47.624 qpair failed and we were unable to recover it.
00:28:47.624 [2024-07-25 10:17:32.578326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.624 [2024-07-25 10:17:32.578355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:47.624 qpair failed and we were unable to recover it.
00:28:47.624 [2024-07-25 10:17:32.578559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.624 [2024-07-25 10:17:32.578589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:47.624 qpair failed and we were unable to recover it.
00:28:47.624 [2024-07-25 10:17:32.578772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.624 [2024-07-25 10:17:32.578802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:47.624 qpair failed and we were unable to recover it.
00:28:47.624 [2024-07-25 10:17:32.578983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.624 [2024-07-25 10:17:32.579011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:47.624 qpair failed and we were unable to recover it.
00:28:47.624 [2024-07-25 10:17:32.579186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.624 [2024-07-25 10:17:32.579214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:47.624 qpair failed and we were unable to recover it.
00:28:47.624 [2024-07-25 10:17:32.579382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.624 [2024-07-25 10:17:32.579411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:47.624 qpair failed and we were unable to recover it.
00:28:47.624 [2024-07-25 10:17:32.579621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.625 [2024-07-25 10:17:32.579650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:47.625 qpair failed and we were unable to recover it.
00:28:47.625 [2024-07-25 10:17:32.579822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.625 [2024-07-25 10:17:32.579852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:47.625 qpair failed and we were unable to recover it.
00:28:47.625 [2024-07-25 10:17:32.580041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.625 [2024-07-25 10:17:32.580070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:47.625 qpair failed and we were unable to recover it.
00:28:47.625 [2024-07-25 10:17:32.580208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.625 [2024-07-25 10:17:32.580236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:47.625 qpair failed and we were unable to recover it.
00:28:47.625 [2024-07-25 10:17:32.580410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.625 [2024-07-25 10:17:32.580446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:47.625 qpair failed and we were unable to recover it.
00:28:47.625 [2024-07-25 10:17:32.580623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.625 [2024-07-25 10:17:32.580652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:47.625 qpair failed and we were unable to recover it.
00:28:47.625 [2024-07-25 10:17:32.580857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.625 [2024-07-25 10:17:32.580886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:47.625 qpair failed and we were unable to recover it.
00:28:47.625 [2024-07-25 10:17:32.581082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.625 [2024-07-25 10:17:32.581111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:47.625 qpair failed and we were unable to recover it.
00:28:47.625 [2024-07-25 10:17:32.581316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.625 [2024-07-25 10:17:32.581344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:47.625 qpair failed and we were unable to recover it.
00:28:47.625 [2024-07-25 10:17:32.581552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.625 [2024-07-25 10:17:32.581581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:47.625 qpair failed and we were unable to recover it.
00:28:47.625 [2024-07-25 10:17:32.581792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.625 [2024-07-25 10:17:32.581821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:47.625 qpair failed and we were unable to recover it.
00:28:47.625 [2024-07-25 10:17:32.581978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.625 [2024-07-25 10:17:32.582006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:47.625 qpair failed and we were unable to recover it.
00:28:47.625 [2024-07-25 10:17:32.582193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.625 [2024-07-25 10:17:32.582222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:47.625 qpair failed and we were unable to recover it.
00:28:47.625 [2024-07-25 10:17:32.582402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.625 [2024-07-25 10:17:32.582438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:47.625 qpair failed and we were unable to recover it.
00:28:47.625 [2024-07-25 10:17:32.582642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.625 [2024-07-25 10:17:32.582671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:47.625 qpair failed and we were unable to recover it.
00:28:47.625 [2024-07-25 10:17:32.582872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.625 [2024-07-25 10:17:32.582901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:47.625 qpair failed and we were unable to recover it.
00:28:47.625 [2024-07-25 10:17:32.583067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.625 [2024-07-25 10:17:32.583095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:47.625 qpair failed and we were unable to recover it.
00:28:47.625 [2024-07-25 10:17:32.583295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.625 [2024-07-25 10:17:32.583325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:47.625 qpair failed and we were unable to recover it.
00:28:47.625 [2024-07-25 10:17:32.583477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.625 [2024-07-25 10:17:32.583507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:47.625 qpair failed and we were unable to recover it.
00:28:47.625 [2024-07-25 10:17:32.583679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.625 [2024-07-25 10:17:32.583708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:47.625 qpair failed and we were unable to recover it.
00:28:47.625 [2024-07-25 10:17:32.583882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.625 [2024-07-25 10:17:32.583910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:47.625 qpair failed and we were unable to recover it.
00:28:47.625 [2024-07-25 10:17:32.584077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.625 [2024-07-25 10:17:32.584110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:47.625 qpair failed and we were unable to recover it.
00:28:47.625 [2024-07-25 10:17:32.584312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.625 [2024-07-25 10:17:32.584341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:47.625 qpair failed and we were unable to recover it.
00:28:47.625 [2024-07-25 10:17:32.584501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.625 [2024-07-25 10:17:32.584531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:47.625 qpair failed and we were unable to recover it.
00:28:47.625 [2024-07-25 10:17:32.584699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.625 [2024-07-25 10:17:32.584729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:47.625 qpair failed and we were unable to recover it.
00:28:47.625 [2024-07-25 10:17:32.584898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.625 [2024-07-25 10:17:32.584926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:47.625 qpair failed and we were unable to recover it.
00:28:47.625 [2024-07-25 10:17:32.585111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.625 [2024-07-25 10:17:32.585145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:47.625 qpair failed and we were unable to recover it.
00:28:47.625 [2024-07-25 10:17:32.585313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.625 [2024-07-25 10:17:32.585343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:47.625 qpair failed and we were unable to recover it.
00:28:47.625 [2024-07-25 10:17:32.585518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.625 [2024-07-25 10:17:32.585547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:47.625 qpair failed and we were unable to recover it.
00:28:47.625 [2024-07-25 10:17:32.585716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.625 [2024-07-25 10:17:32.585746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:47.625 qpair failed and we were unable to recover it.
00:28:47.625 [2024-07-25 10:17:32.585923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.625 [2024-07-25 10:17:32.585951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:47.625 qpair failed and we were unable to recover it.
00:28:47.625 [2024-07-25 10:17:32.586120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.625 [2024-07-25 10:17:32.586148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:47.625 qpair failed and we were unable to recover it.
00:28:47.625 [2024-07-25 10:17:32.586328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.625 [2024-07-25 10:17:32.586358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:47.625 qpair failed and we were unable to recover it.
00:28:47.625 [2024-07-25 10:17:32.586516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.625 [2024-07-25 10:17:32.586546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:47.625 qpair failed and we were unable to recover it.
00:28:47.625 [2024-07-25 10:17:32.586751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.625 [2024-07-25 10:17:32.586781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:47.625 qpair failed and we were unable to recover it.
00:28:47.625 [2024-07-25 10:17:32.586957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.625 [2024-07-25 10:17:32.586985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:47.625 qpair failed and we were unable to recover it.
00:28:47.625 [2024-07-25 10:17:32.587185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.625 [2024-07-25 10:17:32.587214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:47.625 qpair failed and we were unable to recover it.
00:28:47.625 [2024-07-25 10:17:32.587397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.625 [2024-07-25 10:17:32.587426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:47.625 qpair failed and we were unable to recover it.
00:28:47.626 [2024-07-25 10:17:32.587606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.626 [2024-07-25 10:17:32.587636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:47.626 qpair failed and we were unable to recover it.
00:28:47.626 [2024-07-25 10:17:32.587800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.626 [2024-07-25 10:17:32.587829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:47.626 qpair failed and we were unable to recover it.
00:28:47.626 10:17:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:28:47.626 [2024-07-25 10:17:32.587986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.626 [2024-07-25 10:17:32.588016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:47.626 10:17:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # return 0
00:28:47.626 qpair failed and we were unable to recover it.
00:28:47.626 [2024-07-25 10:17:32.588178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.626 10:17:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:28:47.626 [2024-07-25 10:17:32.588207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:47.626 qpair failed and we were unable to recover it.
00:28:47.626 10:17:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable
00:28:47.626 [2024-07-25 10:17:32.588372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.626 [2024-07-25 10:17:32.588402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:47.626 qpair failed and we were unable to recover it.
00:28:47.626 10:17:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:28:47.626 [2024-07-25 10:17:32.588568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.626 [2024-07-25 10:17:32.588599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:47.626 qpair failed and we were unable to recover it.
00:28:47.626 [2024-07-25 10:17:32.588784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.626 [2024-07-25 10:17:32.588812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:47.626 qpair failed and we were unable to recover it.
00:28:47.626 [2024-07-25 10:17:32.588975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.626 [2024-07-25 10:17:32.589004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:47.626 qpair failed and we were unable to recover it.
00:28:47.626 [2024-07-25 10:17:32.589193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.626 [2024-07-25 10:17:32.589240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:47.626 qpair failed and we were unable to recover it.
00:28:47.626 [2024-07-25 10:17:32.589454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.626 [2024-07-25 10:17:32.589492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:47.626 qpair failed and we were unable to recover it.
00:28:47.626 [2024-07-25 10:17:32.589653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.626 [2024-07-25 10:17:32.589689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:47.626 qpair failed and we were unable to recover it.
00:28:47.626 [2024-07-25 10:17:32.589860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.626 [2024-07-25 10:17:32.589890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:47.626 qpair failed and we were unable to recover it.
00:28:47.626 [2024-07-25 10:17:32.590094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.626 [2024-07-25 10:17:32.590140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:47.626 qpair failed and we were unable to recover it.
00:28:47.626 [2024-07-25 10:17:32.590311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.626 [2024-07-25 10:17:32.590339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:47.626 qpair failed and we were unable to recover it.
00:28:47.626 [2024-07-25 10:17:32.590528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.626 [2024-07-25 10:17:32.590556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:47.626 qpair failed and we were unable to recover it.
00:28:47.626 [2024-07-25 10:17:32.590709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.626 [2024-07-25 10:17:32.590738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:47.626 qpair failed and we were unable to recover it.
00:28:47.626 [2024-07-25 10:17:32.590960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.626 [2024-07-25 10:17:32.591007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:47.626 qpair failed and we were unable to recover it.
00:28:47.626 [2024-07-25 10:17:32.591193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.626 [2024-07-25 10:17:32.591222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:47.626 qpair failed and we were unable to recover it.
00:28:47.626 [2024-07-25 10:17:32.591434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.626 [2024-07-25 10:17:32.591464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:47.626 qpair failed and we were unable to recover it.
00:28:47.626 [2024-07-25 10:17:32.591641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.626 [2024-07-25 10:17:32.591680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:47.626 qpair failed and we were unable to recover it.
00:28:47.626 [2024-07-25 10:17:32.591893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.626 [2024-07-25 10:17:32.591941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:47.626 qpair failed and we were unable to recover it.
00:28:47.626 [2024-07-25 10:17:32.592098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.626 [2024-07-25 10:17:32.592131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:47.626 qpair failed and we were unable to recover it.
00:28:47.626 [2024-07-25 10:17:32.592321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.626 [2024-07-25 10:17:32.592349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:47.626 qpair failed and we were unable to recover it.
00:28:47.626 [2024-07-25 10:17:32.592557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.626 [2024-07-25 10:17:32.592586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:47.626 qpair failed and we were unable to recover it.
00:28:47.626 [2024-07-25 10:17:32.592771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.626 [2024-07-25 10:17:32.592817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:47.626 qpair failed and we were unable to recover it.
00:28:47.626 [2024-07-25 10:17:32.593007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.626 [2024-07-25 10:17:32.593036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:47.626 qpair failed and we were unable to recover it.
00:28:47.626 [2024-07-25 10:17:32.593211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.626 [2024-07-25 10:17:32.593240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:47.626 qpair failed and we were unable to recover it.
00:28:47.626 [2024-07-25 10:17:32.593411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.626 [2024-07-25 10:17:32.593446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:47.626 qpair failed and we were unable to recover it.
00:28:47.626 [2024-07-25 10:17:32.593612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.626 [2024-07-25 10:17:32.593642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:47.626 qpair failed and we were unable to recover it.
00:28:47.626 [2024-07-25 10:17:32.593799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.626 [2024-07-25 10:17:32.593828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:47.626 qpair failed and we were unable to recover it.
00:28:47.626 [2024-07-25 10:17:32.594051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.626 [2024-07-25 10:17:32.594081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:47.626 qpair failed and we were unable to recover it.
00:28:47.626 [2024-07-25 10:17:32.594251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.626 [2024-07-25 10:17:32.594280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:47.626 qpair failed and we were unable to recover it.
00:28:47.626 [2024-07-25 10:17:32.594523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.626 [2024-07-25 10:17:32.594553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:47.626 qpair failed and we were unable to recover it.
00:28:47.626 [2024-07-25 10:17:32.594751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.626 [2024-07-25 10:17:32.594781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:47.626 qpair failed and we were unable to recover it.
00:28:47.626 [2024-07-25 10:17:32.594950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.626 [2024-07-25 10:17:32.594979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:47.626 qpair failed and we were unable to recover it.
00:28:47.626 [2024-07-25 10:17:32.595152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.626 [2024-07-25 10:17:32.595181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:47.626 qpair failed and we were unable to recover it.
00:28:47.627 [2024-07-25 10:17:32.595382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.627 [2024-07-25 10:17:32.595412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:47.627 qpair failed and we were unable to recover it.
00:28:47.627 [2024-07-25 10:17:32.595572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.627 [2024-07-25 10:17:32.595602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:47.627 qpair failed and we were unable to recover it.
00:28:47.627 [2024-07-25 10:17:32.595786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.627 [2024-07-25 10:17:32.595816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:47.627 qpair failed and we were unable to recover it.
00:28:47.627 [2024-07-25 10:17:32.596017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.627 [2024-07-25 10:17:32.596046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:47.627 qpair failed and we were unable to recover it.
00:28:47.627 [2024-07-25 10:17:32.596218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.627 [2024-07-25 10:17:32.596265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:47.627 qpair failed and we were unable to recover it.
00:28:47.627 [2024-07-25 10:17:32.596444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.627 [2024-07-25 10:17:32.596483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:47.627 qpair failed and we were unable to recover it.
00:28:47.627 [2024-07-25 10:17:32.596648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.627 [2024-07-25 10:17:32.596677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:47.627 qpair failed and we were unable to recover it.
00:28:47.627 [2024-07-25 10:17:32.596867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.627 [2024-07-25 10:17:32.596895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:47.627 qpair failed and we were unable to recover it.
00:28:47.627 [2024-07-25 10:17:32.597105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.627 [2024-07-25 10:17:32.597151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:47.627 qpair failed and we were unable to recover it.
00:28:47.627 [2024-07-25 10:17:32.597325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.627 [2024-07-25 10:17:32.597354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:47.627 qpair failed and we were unable to recover it.
00:28:47.627 [2024-07-25 10:17:32.597526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.627 [2024-07-25 10:17:32.597556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:47.627 qpair failed and we were unable to recover it.
00:28:47.627 [2024-07-25 10:17:32.597711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.627 [2024-07-25 10:17:32.597740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:47.627 qpair failed and we were unable to recover it.
00:28:47.627 [2024-07-25 10:17:32.597921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.627 [2024-07-25 10:17:32.597976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:47.627 qpair failed and we were unable to recover it.
00:28:47.627 [2024-07-25 10:17:32.598179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.627 [2024-07-25 10:17:32.598207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:47.627 qpair failed and we were unable to recover it.
00:28:47.627 [2024-07-25 10:17:32.598377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.627 [2024-07-25 10:17:32.598406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:47.627 qpair failed and we were unable to recover it.
00:28:47.627 [2024-07-25 10:17:32.598565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.627 [2024-07-25 10:17:32.598594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:47.627 qpair failed and we were unable to recover it.
00:28:47.627 [2024-07-25 10:17:32.598723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.627 [2024-07-25 10:17:32.598757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:47.627 qpair failed and we were unable to recover it.
00:28:47.627 [2024-07-25 10:17:32.598967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.627 [2024-07-25 10:17:32.598996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:47.627 qpair failed and we were unable to recover it.
00:28:47.627 [2024-07-25 10:17:32.599139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.627 [2024-07-25 10:17:32.599179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:47.627 qpair failed and we were unable to recover it.
00:28:47.627 [2024-07-25 10:17:32.599343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.627 [2024-07-25 10:17:32.599372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:47.627 qpair failed and we were unable to recover it.
00:28:47.627 [2024-07-25 10:17:32.599552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.627 [2024-07-25 10:17:32.599582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:47.627 qpair failed and we were unable to recover it.
00:28:47.627 [2024-07-25 10:17:32.599719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.627 [2024-07-25 10:17:32.599747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:47.627 qpair failed and we were unable to recover it.
00:28:47.627 [2024-07-25 10:17:32.599928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.627 [2024-07-25 10:17:32.599956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:47.627 qpair failed and we were unable to recover it.
00:28:47.627 [2024-07-25 10:17:32.600158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.627 [2024-07-25 10:17:32.600187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:47.627 qpair failed and we were unable to recover it.
00:28:47.627 [2024-07-25 10:17:32.600367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.627 [2024-07-25 10:17:32.600396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:47.627 qpair failed and we were unable to recover it.
00:28:47.627 [2024-07-25 10:17:32.600536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.627 [2024-07-25 10:17:32.600570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:47.627 qpair failed and we were unable to recover it.
00:28:47.627 [2024-07-25 10:17:32.600713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.627 [2024-07-25 10:17:32.600741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:47.627 qpair failed and we were unable to recover it.
00:28:47.627 [2024-07-25 10:17:32.600916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.627 [2024-07-25 10:17:32.600945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:47.627 qpair failed and we were unable to recover it.
00:28:47.627 [2024-07-25 10:17:32.601162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.627 [2024-07-25 10:17:32.601216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:47.627 qpair failed and we were unable to recover it.
00:28:47.627 [2024-07-25 10:17:32.601437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.627 [2024-07-25 10:17:32.601477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:47.627 qpair failed and we were unable to recover it.
00:28:47.627 [2024-07-25 10:17:32.601618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.627 [2024-07-25 10:17:32.601648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:47.627 qpair failed and we were unable to recover it.
00:28:47.627 [2024-07-25 10:17:32.601820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.627 [2024-07-25 10:17:32.601848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:47.627 qpair failed and we were unable to recover it.
00:28:47.627 [2024-07-25 10:17:32.602037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.627 [2024-07-25 10:17:32.602093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:47.627 qpair failed and we were unable to recover it.
00:28:47.627 [2024-07-25 10:17:32.602269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.627 [2024-07-25 10:17:32.602297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:47.627 qpair failed and we were unable to recover it.
00:28:47.627 [2024-07-25 10:17:32.602508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.627 [2024-07-25 10:17:32.602537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:47.627 qpair failed and we were unable to recover it.
00:28:47.627 [2024-07-25 10:17:32.602698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.627 [2024-07-25 10:17:32.602726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:47.627 qpair failed and we were unable to recover it.
00:28:47.627 [2024-07-25 10:17:32.602886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.627 [2024-07-25 10:17:32.602916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:47.627 qpair failed and we were unable to recover it.
00:28:47.627 [2024-07-25 10:17:32.603093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.628 [2024-07-25 10:17:32.603122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:47.628 qpair failed and we were unable to recover it.
00:28:47.628 [2024-07-25 10:17:32.603266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.628 [2024-07-25 10:17:32.603294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:47.628 qpair failed and we were unable to recover it.
00:28:47.628 [2024-07-25 10:17:32.603530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.628 [2024-07-25 10:17:32.603561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:47.628 qpair failed and we were unable to recover it.
00:28:47.628 [2024-07-25 10:17:32.603701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.628 [2024-07-25 10:17:32.603731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:47.628 qpair failed and we were unable to recover it.
00:28:47.628 [2024-07-25 10:17:32.603892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.628 [2024-07-25 10:17:32.603921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:47.628 qpair failed and we were unable to recover it.
00:28:47.628 [2024-07-25 10:17:32.604079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.628 [2024-07-25 10:17:32.604107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:47.628 qpair failed and we were unable to recover it.
00:28:47.628 [2024-07-25 10:17:32.604310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.628 [2024-07-25 10:17:32.604338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:47.628 qpair failed and we were unable to recover it.
00:28:47.628 [2024-07-25 10:17:32.604502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.628 [2024-07-25 10:17:32.604535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:47.628 qpair failed and we were unable to recover it.
00:28:47.628 [2024-07-25 10:17:32.604675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.628 [2024-07-25 10:17:32.604704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:47.628 qpair failed and we were unable to recover it.
00:28:47.628 [2024-07-25 10:17:32.604835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.628 [2024-07-25 10:17:32.604863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:47.628 qpair failed and we were unable to recover it.
00:28:47.628 [2024-07-25 10:17:32.605094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.628 [2024-07-25 10:17:32.605123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:47.628 qpair failed and we were unable to recover it.
00:28:47.628 [2024-07-25 10:17:32.605338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.628 [2024-07-25 10:17:32.605368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:47.628 qpair failed and we were unable to recover it.
00:28:47.628 [2024-07-25 10:17:32.605528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.628 [2024-07-25 10:17:32.605558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:47.628 qpair failed and we were unable to recover it.
00:28:47.628 [2024-07-25 10:17:32.605699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.628 [2024-07-25 10:17:32.605728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:47.628 qpair failed and we were unable to recover it.
00:28:47.628 [2024-07-25 10:17:32.605947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.628 [2024-07-25 10:17:32.605976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:47.628 qpair failed and we were unable to recover it.
00:28:47.628 [2024-07-25 10:17:32.606193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.628 [2024-07-25 10:17:32.606241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:47.628 qpair failed and we were unable to recover it.
00:28:47.628 [2024-07-25 10:17:32.606402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.628 [2024-07-25 10:17:32.606436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:47.628 qpair failed and we were unable to recover it.
00:28:47.628 [2024-07-25 10:17:32.606577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.628 [2024-07-25 10:17:32.606615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:47.628 qpair failed and we were unable to recover it.
00:28:47.628 [2024-07-25 10:17:32.606804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.628 [2024-07-25 10:17:32.606834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:47.628 qpair failed and we were unable to recover it.
00:28:47.628 [2024-07-25 10:17:32.607057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.628 [2024-07-25 10:17:32.607104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:47.628 qpair failed and we were unable to recover it.
00:28:47.628 [2024-07-25 10:17:32.607259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.628 [2024-07-25 10:17:32.607288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:47.628 qpair failed and we were unable to recover it.
00:28:47.628 [2024-07-25 10:17:32.607445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.628 [2024-07-25 10:17:32.607488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:47.628 qpair failed and we were unable to recover it.
00:28:47.628 [2024-07-25 10:17:32.607622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.628 [2024-07-25 10:17:32.607652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:47.628 qpair failed and we were unable to recover it.
00:28:47.628 [2024-07-25 10:17:32.607841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.628 [2024-07-25 10:17:32.607887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:47.628 qpair failed and we were unable to recover it.
00:28:47.628 [2024-07-25 10:17:32.608056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.628 [2024-07-25 10:17:32.608085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:47.628 qpair failed and we were unable to recover it.
00:28:47.628 [2024-07-25 10:17:32.608262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.628 [2024-07-25 10:17:32.608293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:47.628 qpair failed and we were unable to recover it.
00:28:47.628 [2024-07-25 10:17:32.608509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.628 [2024-07-25 10:17:32.608539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:47.628 qpair failed and we were unable to recover it.
00:28:47.628 [2024-07-25 10:17:32.608691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.628 [2024-07-25 10:17:32.608725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:47.628 qpair failed and we were unable to recover it.
00:28:47.628 [2024-07-25 10:17:32.608955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.628 [2024-07-25 10:17:32.608989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:47.628 qpair failed and we were unable to recover it.
00:28:47.628 [2024-07-25 10:17:32.609202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.628 [2024-07-25 10:17:32.609243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:47.628 qpair failed and we were unable to recover it.
00:28:47.628 [2024-07-25 10:17:32.609413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.628 [2024-07-25 10:17:32.609454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:47.628 qpair failed and we were unable to recover it.
00:28:47.628 [2024-07-25 10:17:32.609610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.628 [2024-07-25 10:17:32.609640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:47.628 qpair failed and we were unable to recover it.
00:28:47.628 [2024-07-25 10:17:32.609797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.629 [2024-07-25 10:17:32.609826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:47.629 qpair failed and we were unable to recover it.
00:28:47.629 [2024-07-25 10:17:32.610057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.629 [2024-07-25 10:17:32.610086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:47.629 qpair failed and we were unable to recover it.
00:28:47.629 [2024-07-25 10:17:32.610250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.629 [2024-07-25 10:17:32.610279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:47.629 qpair failed and we were unable to recover it.
00:28:47.629 10:17:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:28:47.629 [2024-07-25 10:17:32.610452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.629 [2024-07-25 10:17:32.610501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:47.629 10:17:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:28:47.629 qpair failed and we were unable to recover it.
00:28:47.629 [2024-07-25 10:17:32.610640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.629 10:17:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:47.629 [2024-07-25 10:17:32.610669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:47.629 qpair failed and we were unable to recover it.
00:28:47.629 10:17:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:28:47.629 [2024-07-25 10:17:32.610854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.629 [2024-07-25 10:17:32.610884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:47.629 qpair failed and we were unable to recover it.
00:28:47.629 [2024-07-25 10:17:32.611106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.629 [2024-07-25 10:17:32.611135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:47.629 qpair failed and we were unable to recover it.
00:28:47.629 [2024-07-25 10:17:32.611316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.629 [2024-07-25 10:17:32.611346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:47.629 qpair failed and we were unable to recover it.
00:28:47.629 [2024-07-25 10:17:32.611524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.629 [2024-07-25 10:17:32.611554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:47.629 qpair failed and we were unable to recover it.
00:28:47.629 [2024-07-25 10:17:32.611702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.629 [2024-07-25 10:17:32.611732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:47.629 qpair failed and we were unable to recover it.
00:28:47.629 [2024-07-25 10:17:32.611926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.629 [2024-07-25 10:17:32.611955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:47.629 qpair failed and we were unable to recover it.
00:28:47.631 [2024-07-25 10:17:32.635990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.632 [2024-07-25 10:17:32.636037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:47.632 qpair failed and we were unable to recover it.
00:28:47.632 Malloc0
00:28:47.632 [2024-07-25 10:17:32.637922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.632 [2024-07-25 10:17:32.637951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:47.632 qpair failed and we were unable to recover it.
00:28:47.632 10:17:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:47.632 10:17:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:28:47.632 10:17:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:47.632 10:17:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:28:47.632 [2024-07-25 10:17:32.638161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.632 [2024-07-25 10:17:32.638190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:47.632 qpair failed and we were unable to recover it.
00:28:47.632 [2024-07-25 10:17:32.639746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.632 [2024-07-25 10:17:32.639774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:47.632 qpair failed and we were unable to recover it.
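The transport step, issued directly, would look like the sketch below; the -o flag is carried over verbatim from the trace, and defaults are assumed for everything else:

# create the NVMe-oF TCP transport on the target
./scripts/rpc.py nvmf_create_transport -t tcp -o
# the target should acknowledge with the "*** TCP Transport Init ***" notice seen below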
00:28:47.632 [2024-07-25 10:17:32.639972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.632 [2024-07-25 10:17:32.640001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:47.632 qpair failed and we were unable to recover it.
00:28:47.632 [2024-07-25 10:17:32.641371] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:28:47.632 [2024-07-25 10:17:32.641483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.632 [2024-07-25 10:17:32.641511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:47.632 qpair failed and we were unable to recover it.
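Once that notice appears, the transport's presence can be checked over RPC; a sketch, assuming the same default socket:

# list active transports; a "tcp" entry confirms nvmf_create_transport took effect
./scripts/rpc.py nvmf_get_transports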
00:28:47.633 [2024-07-25 10:17:32.648093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.633 [2024-07-25 10:17:32.648121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:47.633 qpair failed and we were unable to recover it.
00:28:47.633 10:17:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:47.633 10:17:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:28:47.633 [2024-07-25 10:17:32.649796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:47.633 [2024-07-25 10:17:32.649825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7effa8000b90 with addr=10.0.0.2, port=4420
00:28:47.633 qpair failed and we were unable to recover it.
00:28:47.633 10:17:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:47.634 10:17:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:28:47.634 [connect() failed / sock connection error / qpair failed sequence repeated 10 times between 10:17:32.649796 and 10:17:32.651640]
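(Annotation, not part of the captured log: errno 111 is ECONNREFUSED on Linux, i.e. every TCP connect() to 10.0.0.2:4420 is being actively refused; the subsystem's TCP listener is only added further down in this log, at the "Target Listening" notice. A quick way to confirm the errno mapping on any Linux box, as an illustrative one-liner rather than anything the test script runs:

    # Decode errno 111 via Python's errno table; prints "ECONNREFUSED Connection refused"
    python3 -c 'import errno, os; print(errno.errorcode[111], os.strerror(111))'
)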
00:28:47.634 [connect() failed / sock connection error / qpair failed sequence repeated 27 times between 10:17:32.651849 and 10:17:32.657544]
00:28:47.634 10:17:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:47.634 10:17:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:28:47.634 10:17:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:47.635 10:17:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:28:47.635 [connect() failed / sock connection error / qpair failed sequence repeated 38 times between 10:17:32.657747 and 10:17:32.665635]
00:28:47.636 10:17:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:47.636 10:17:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:28:47.636 10:17:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:47.636 10:17:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:28:47.636 [connect() failed / sock connection error / qpair failed sequence repeated 18 times between 10:17:32.665870 and 10:17:32.669546]
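(Annotation, not part of the captured log: the three rpc_cmd calls traced above configure the target side of this test case. Outside the autotest harness they would correspond to plain scripts/rpc.py invocations along these lines, a sketch under the assumption that rpc_cmd forwards its arguments to SPDK's scripts/rpc.py as the autotest helpers normally do:

    # Create the subsystem (-a: allow any host, -s: serial number),
    # attach the Malloc0 bdev as a namespace, then open the TCP listener
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
)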
00:28:47.636 [2024-07-25 10:17:32.669681] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:28:47.636 [2024-07-25 10:17:32.672149] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:47.636 [2024-07-25 10:17:32.672302] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:47.636 [2024-07-25 10:17:32.672332] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:47.636 [2024-07-25 10:17:32.672350] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:47.636 [2024-07-25 10:17:32.672364] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90
00:28:47.636 [2024-07-25 10:17:32.672404] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:47.636 qpair failed and we were unable to recover it.
00:28:47.636 10:17:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:47.636 10:17:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:28:47.636 10:17:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:28:47.636 10:17:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:28:47.636 10:17:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:28:47.636 10:17:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 552838
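(Annotation, not part of the captured log: the failure mode changes here. The target now accepts TCP connections, but each I/O qpair CONNECT is rejected with "Unknown controller ID 0x1", consistent with the target having dropped the controller during the forced disconnect, and the host surfaces this as CQ transport error -6. That -6 is -ENXIO, matching the "No such device or address" text the log prints; to verify the mapping, an illustrative one-liner not run by the test:

    # errno 6 is ENXIO; prints "ENXIO No such device or address"
    python3 -c 'import errno, os; print(errno.errorcode[6], os.strerror(6))'
)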
00:28:47.636 [failure block repeated 28 times between 10:17:32.682004 and 10:17:32.952693: ctrlr.c Unknown controller ID 0x1; nvme_fabric.c Connect command failed, rc -5; Connect command completed with error: sct 1, sc 130; Failed to poll NVMe-oF Fabric CONNECT command; Failed to connect tqpair=0x7effa8000b90; CQ transport error -6 (No such device or address) on qpair id 1; qpair failed and we were unable to recover it.]
00:28:47.898 [2024-07-25 10:17:32.962719] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:47.898 [2024-07-25 10:17:32.962850] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:47.898 [2024-07-25 10:17:32.962884] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:47.898 [2024-07-25 10:17:32.962902] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:47.898 [2024-07-25 10:17:32.962916] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:47.898 [2024-07-25 10:17:32.962949] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:47.898 qpair failed and we were unable to recover it. 00:28:47.898 [2024-07-25 10:17:32.972839] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:47.898 [2024-07-25 10:17:32.972968] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:47.898 [2024-07-25 10:17:32.972998] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:47.898 [2024-07-25 10:17:32.973015] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:47.898 [2024-07-25 10:17:32.973028] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:47.898 [2024-07-25 10:17:32.973062] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:47.898 qpair failed and we were unable to recover it. 00:28:47.898 [2024-07-25 10:17:32.982755] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:47.898 [2024-07-25 10:17:32.982902] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:47.898 [2024-07-25 10:17:32.982929] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:47.898 [2024-07-25 10:17:32.982945] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:47.898 [2024-07-25 10:17:32.982959] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:47.898 [2024-07-25 10:17:32.982992] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:47.898 qpair failed and we were unable to recover it. 
00:28:47.898 [2024-07-25 10:17:32.992798] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:47.898 [2024-07-25 10:17:32.992931] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:47.898 [2024-07-25 10:17:32.992959] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:47.898 [2024-07-25 10:17:32.992976] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:47.898 [2024-07-25 10:17:32.992989] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:47.898 [2024-07-25 10:17:32.993022] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:47.898 qpair failed and we were unable to recover it. 00:28:47.898 [2024-07-25 10:17:33.002813] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:47.898 [2024-07-25 10:17:33.002946] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:47.898 [2024-07-25 10:17:33.002974] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:47.898 [2024-07-25 10:17:33.002991] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:47.898 [2024-07-25 10:17:33.003005] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:47.898 [2024-07-25 10:17:33.003044] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:47.898 qpair failed and we were unable to recover it. 00:28:47.898 [2024-07-25 10:17:33.012839] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:47.898 [2024-07-25 10:17:33.012973] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:47.898 [2024-07-25 10:17:33.013003] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:47.898 [2024-07-25 10:17:33.013019] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:47.898 [2024-07-25 10:17:33.013033] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:47.898 [2024-07-25 10:17:33.013066] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:47.898 qpair failed and we were unable to recover it. 
00:28:47.898 [2024-07-25 10:17:33.022983] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:47.898 [2024-07-25 10:17:33.023116] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:47.898 [2024-07-25 10:17:33.023144] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:47.898 [2024-07-25 10:17:33.023161] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:47.898 [2024-07-25 10:17:33.023176] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:47.898 [2024-07-25 10:17:33.023209] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:47.898 qpair failed and we were unable to recover it. 00:28:47.898 [2024-07-25 10:17:33.032885] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:47.898 [2024-07-25 10:17:33.033027] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:47.898 [2024-07-25 10:17:33.033057] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:47.899 [2024-07-25 10:17:33.033075] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:47.899 [2024-07-25 10:17:33.033089] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:47.899 [2024-07-25 10:17:33.033122] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:47.899 qpair failed and we were unable to recover it. 00:28:47.899 [2024-07-25 10:17:33.042965] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:47.899 [2024-07-25 10:17:33.043091] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:47.899 [2024-07-25 10:17:33.043119] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:47.899 [2024-07-25 10:17:33.043136] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:47.899 [2024-07-25 10:17:33.043150] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:47.899 [2024-07-25 10:17:33.043183] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:47.899 qpair failed and we were unable to recover it. 
00:28:47.899 [2024-07-25 10:17:33.052971] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:47.899 [2024-07-25 10:17:33.053104] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:47.899 [2024-07-25 10:17:33.053137] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:47.899 [2024-07-25 10:17:33.053155] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:47.899 [2024-07-25 10:17:33.053169] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:47.899 [2024-07-25 10:17:33.053202] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:47.899 qpair failed and we were unable to recover it. 00:28:48.157 [2024-07-25 10:17:33.062994] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:48.157 [2024-07-25 10:17:33.063130] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:48.157 [2024-07-25 10:17:33.063158] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:48.157 [2024-07-25 10:17:33.063175] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:48.157 [2024-07-25 10:17:33.063189] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:48.157 [2024-07-25 10:17:33.063222] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:48.157 qpair failed and we were unable to recover it. 00:28:48.157 [2024-07-25 10:17:33.073140] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:48.157 [2024-07-25 10:17:33.073277] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:48.157 [2024-07-25 10:17:33.073306] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:48.157 [2024-07-25 10:17:33.073322] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:48.157 [2024-07-25 10:17:33.073336] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:48.157 [2024-07-25 10:17:33.073369] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:48.157 qpair failed and we were unable to recover it. 
00:28:48.157 [2024-07-25 10:17:33.083037] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:48.157 [2024-07-25 10:17:33.083160] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:48.157 [2024-07-25 10:17:33.083188] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:48.157 [2024-07-25 10:17:33.083205] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:48.157 [2024-07-25 10:17:33.083218] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:48.157 [2024-07-25 10:17:33.083251] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:48.157 qpair failed and we were unable to recover it. 00:28:48.157 [2024-07-25 10:17:33.093071] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:48.157 [2024-07-25 10:17:33.093244] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:48.157 [2024-07-25 10:17:33.093273] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:48.157 [2024-07-25 10:17:33.093289] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:48.157 [2024-07-25 10:17:33.093308] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:48.157 [2024-07-25 10:17:33.093341] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:48.157 qpair failed and we were unable to recover it. 00:28:48.157 [2024-07-25 10:17:33.103130] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:48.157 [2024-07-25 10:17:33.103266] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:48.157 [2024-07-25 10:17:33.103296] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:48.157 [2024-07-25 10:17:33.103312] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:48.157 [2024-07-25 10:17:33.103326] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:48.157 [2024-07-25 10:17:33.103359] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:48.157 qpair failed and we were unable to recover it. 
00:28:48.157 [2024-07-25 10:17:33.113124] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:48.157 [2024-07-25 10:17:33.113255] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:48.157 [2024-07-25 10:17:33.113284] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:48.157 [2024-07-25 10:17:33.113301] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:48.157 [2024-07-25 10:17:33.113314] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:48.157 [2024-07-25 10:17:33.113347] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:48.157 qpair failed and we were unable to recover it. 00:28:48.157 [2024-07-25 10:17:33.123184] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:48.157 [2024-07-25 10:17:33.123349] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:48.157 [2024-07-25 10:17:33.123378] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:48.157 [2024-07-25 10:17:33.123394] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:48.157 [2024-07-25 10:17:33.123409] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:48.157 [2024-07-25 10:17:33.123450] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:48.157 qpair failed and we were unable to recover it. 00:28:48.157 [2024-07-25 10:17:33.133236] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:48.157 [2024-07-25 10:17:33.133379] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:48.157 [2024-07-25 10:17:33.133407] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:48.157 [2024-07-25 10:17:33.133423] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:48.157 [2024-07-25 10:17:33.133445] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:48.157 [2024-07-25 10:17:33.133479] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:48.157 qpair failed and we were unable to recover it. 
00:28:48.157 [2024-07-25 10:17:33.143224] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:48.157 [2024-07-25 10:17:33.143363] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:48.157 [2024-07-25 10:17:33.143391] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:48.157 [2024-07-25 10:17:33.143407] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:48.157 [2024-07-25 10:17:33.143421] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:48.157 [2024-07-25 10:17:33.143462] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:48.157 qpair failed and we were unable to recover it. 00:28:48.157 [2024-07-25 10:17:33.153253] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:48.157 [2024-07-25 10:17:33.153386] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:48.157 [2024-07-25 10:17:33.153413] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:48.157 [2024-07-25 10:17:33.153438] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:48.157 [2024-07-25 10:17:33.153454] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:48.158 [2024-07-25 10:17:33.153487] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:48.158 qpair failed and we were unable to recover it. 00:28:48.158 [2024-07-25 10:17:33.163281] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:48.158 [2024-07-25 10:17:33.163414] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:48.158 [2024-07-25 10:17:33.163453] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:48.158 [2024-07-25 10:17:33.163470] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:48.158 [2024-07-25 10:17:33.163483] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:48.158 [2024-07-25 10:17:33.163517] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:48.158 qpair failed and we were unable to recover it. 
00:28:48.158 [2024-07-25 10:17:33.173297] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:48.158 [2024-07-25 10:17:33.173444] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:48.158 [2024-07-25 10:17:33.173473] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:48.158 [2024-07-25 10:17:33.173489] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:48.158 [2024-07-25 10:17:33.173503] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:48.158 [2024-07-25 10:17:33.173537] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:48.158 qpair failed and we were unable to recover it. 00:28:48.158 [2024-07-25 10:17:33.183342] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:48.158 [2024-07-25 10:17:33.183483] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:48.158 [2024-07-25 10:17:33.183513] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:48.158 [2024-07-25 10:17:33.183530] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:48.158 [2024-07-25 10:17:33.183551] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:48.158 [2024-07-25 10:17:33.183585] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:48.158 qpair failed and we were unable to recover it. 00:28:48.158 [2024-07-25 10:17:33.193369] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:48.158 [2024-07-25 10:17:33.193505] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:48.158 [2024-07-25 10:17:33.193535] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:48.158 [2024-07-25 10:17:33.193552] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:48.158 [2024-07-25 10:17:33.193567] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:48.158 [2024-07-25 10:17:33.193601] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:48.158 qpair failed and we were unable to recover it. 
00:28:48.158 [2024-07-25 10:17:33.203421] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:48.158 [2024-07-25 10:17:33.203559] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:48.158 [2024-07-25 10:17:33.203588] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:48.158 [2024-07-25 10:17:33.203605] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:48.158 [2024-07-25 10:17:33.203619] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:48.158 [2024-07-25 10:17:33.203652] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:48.158 qpair failed and we were unable to recover it. 00:28:48.158 [2024-07-25 10:17:33.213420] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:48.158 [2024-07-25 10:17:33.213575] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:48.158 [2024-07-25 10:17:33.213606] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:48.158 [2024-07-25 10:17:33.213623] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:48.158 [2024-07-25 10:17:33.213637] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:48.158 [2024-07-25 10:17:33.213671] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:48.158 qpair failed and we were unable to recover it. 00:28:48.158 [2024-07-25 10:17:33.223502] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:48.158 [2024-07-25 10:17:33.223633] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:48.158 [2024-07-25 10:17:33.223663] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:48.158 [2024-07-25 10:17:33.223680] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:48.158 [2024-07-25 10:17:33.223695] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:48.158 [2024-07-25 10:17:33.223728] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:48.158 qpair failed and we were unable to recover it. 
00:28:48.158 [2024-07-25 10:17:33.233483] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:48.158 [2024-07-25 10:17:33.233625] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:48.158 [2024-07-25 10:17:33.233653] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:48.158 [2024-07-25 10:17:33.233670] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:48.158 [2024-07-25 10:17:33.233685] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:48.158 [2024-07-25 10:17:33.233718] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:48.158 qpair failed and we were unable to recover it. 00:28:48.158 [2024-07-25 10:17:33.243558] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:48.158 [2024-07-25 10:17:33.243686] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:48.158 [2024-07-25 10:17:33.243715] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:48.158 [2024-07-25 10:17:33.243732] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:48.158 [2024-07-25 10:17:33.243746] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:48.158 [2024-07-25 10:17:33.243780] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:48.158 qpair failed and we were unable to recover it. 00:28:48.158 [2024-07-25 10:17:33.253587] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:48.158 [2024-07-25 10:17:33.253759] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:48.158 [2024-07-25 10:17:33.253788] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:48.158 [2024-07-25 10:17:33.253806] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:48.158 [2024-07-25 10:17:33.253821] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:48.158 [2024-07-25 10:17:33.253855] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:48.158 qpair failed and we were unable to recover it. 
00:28:48.158 [2024-07-25 10:17:33.263570] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:48.158 [2024-07-25 10:17:33.263715] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:48.158 [2024-07-25 10:17:33.263744] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:48.158 [2024-07-25 10:17:33.263761] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:48.158 [2024-07-25 10:17:33.263775] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:48.158 [2024-07-25 10:17:33.263808] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:48.158 qpair failed and we were unable to recover it. 00:28:48.158 [2024-07-25 10:17:33.273600] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:48.158 [2024-07-25 10:17:33.273735] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:48.158 [2024-07-25 10:17:33.273764] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:48.158 [2024-07-25 10:17:33.273787] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:48.158 [2024-07-25 10:17:33.273802] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:48.158 [2024-07-25 10:17:33.273835] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:48.158 qpair failed and we were unable to recover it. 00:28:48.158 [2024-07-25 10:17:33.283608] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:48.158 [2024-07-25 10:17:33.283734] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:48.158 [2024-07-25 10:17:33.283763] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:48.158 [2024-07-25 10:17:33.283779] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:48.158 [2024-07-25 10:17:33.283794] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:48.158 [2024-07-25 10:17:33.283827] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:48.158 qpair failed and we were unable to recover it. 
00:28:48.159 [2024-07-25 10:17:33.293652] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:48.159 [2024-07-25 10:17:33.293824] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:48.159 [2024-07-25 10:17:33.293853] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:48.159 [2024-07-25 10:17:33.293870] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:48.159 [2024-07-25 10:17:33.293885] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:48.159 [2024-07-25 10:17:33.293919] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:48.159 qpair failed and we were unable to recover it. 00:28:48.159 [2024-07-25 10:17:33.303710] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:48.159 [2024-07-25 10:17:33.303874] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:48.159 [2024-07-25 10:17:33.303903] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:48.159 [2024-07-25 10:17:33.303920] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:48.159 [2024-07-25 10:17:33.303935] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:48.159 [2024-07-25 10:17:33.303968] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:48.159 qpair failed and we were unable to recover it. 00:28:48.159 [2024-07-25 10:17:33.313720] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:48.159 [2024-07-25 10:17:33.313874] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:48.159 [2024-07-25 10:17:33.313903] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:48.159 [2024-07-25 10:17:33.313920] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:48.159 [2024-07-25 10:17:33.313935] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:48.159 [2024-07-25 10:17:33.313968] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:48.159 qpair failed and we were unable to recover it. 
00:28:48.416 [2024-07-25 10:17:33.323800] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:48.416 [2024-07-25 10:17:33.323935] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:48.416 [2024-07-25 10:17:33.323965] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:48.416 [2024-07-25 10:17:33.323981] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:48.416 [2024-07-25 10:17:33.323995] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:48.416 [2024-07-25 10:17:33.324029] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:48.416 qpair failed and we were unable to recover it. 00:28:48.416 [2024-07-25 10:17:33.333825] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:48.416 [2024-07-25 10:17:33.333976] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:48.416 [2024-07-25 10:17:33.334006] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:48.416 [2024-07-25 10:17:33.334023] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:48.417 [2024-07-25 10:17:33.334037] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:48.417 [2024-07-25 10:17:33.334070] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:48.417 qpair failed and we were unable to recover it. 00:28:48.417 [2024-07-25 10:17:33.343818] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:48.417 [2024-07-25 10:17:33.343962] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:48.417 [2024-07-25 10:17:33.343991] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:48.417 [2024-07-25 10:17:33.344008] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:48.417 [2024-07-25 10:17:33.344022] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:48.417 [2024-07-25 10:17:33.344057] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:48.417 qpair failed and we were unable to recover it. 
00:28:48.417 [2024-07-25 10:17:33.353829] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:48.417 [2024-07-25 10:17:33.353957] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:48.417 [2024-07-25 10:17:33.353986] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:48.417 [2024-07-25 10:17:33.354004] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:48.417 [2024-07-25 10:17:33.354018] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:48.417 [2024-07-25 10:17:33.354052] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:48.417 qpair failed and we were unable to recover it. 00:28:48.417 [2024-07-25 10:17:33.363884] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:48.417 [2024-07-25 10:17:33.364014] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:48.417 [2024-07-25 10:17:33.364049] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:48.417 [2024-07-25 10:17:33.364067] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:48.417 [2024-07-25 10:17:33.364081] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:48.417 [2024-07-25 10:17:33.364114] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:48.417 qpair failed and we were unable to recover it. 00:28:48.417 [2024-07-25 10:17:33.374002] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:48.417 [2024-07-25 10:17:33.374169] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:48.417 [2024-07-25 10:17:33.374198] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:48.417 [2024-07-25 10:17:33.374215] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:48.417 [2024-07-25 10:17:33.374229] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:48.417 [2024-07-25 10:17:33.374262] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:48.417 qpair failed and we were unable to recover it. 
00:28:48.417 [2024-07-25 10:17:33.383968] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:48.417 [2024-07-25 10:17:33.384105] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:48.417 [2024-07-25 10:17:33.384134] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:48.417 [2024-07-25 10:17:33.384151] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:48.417 [2024-07-25 10:17:33.384165] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:48.417 [2024-07-25 10:17:33.384198] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:48.417 qpair failed and we were unable to recover it. 00:28:48.417 [2024-07-25 10:17:33.393943] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:48.417 [2024-07-25 10:17:33.394114] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:48.417 [2024-07-25 10:17:33.394143] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:48.417 [2024-07-25 10:17:33.394160] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:48.417 [2024-07-25 10:17:33.394174] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:48.417 [2024-07-25 10:17:33.394208] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:48.417 qpair failed and we were unable to recover it. 00:28:48.417 [2024-07-25 10:17:33.404025] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:48.417 [2024-07-25 10:17:33.404182] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:48.417 [2024-07-25 10:17:33.404211] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:48.417 [2024-07-25 10:17:33.404228] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:48.417 [2024-07-25 10:17:33.404243] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:48.417 [2024-07-25 10:17:33.404282] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:48.417 qpair failed and we were unable to recover it. 
00:28:48.417 [2024-07-25 10:17:33.414079] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:48.417 [2024-07-25 10:17:33.414207] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:48.417 [2024-07-25 10:17:33.414237] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:48.417 [2024-07-25 10:17:33.414254] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:48.417 [2024-07-25 10:17:33.414268] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:48.417 [2024-07-25 10:17:33.414302] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:48.417 qpair failed and we were unable to recover it. 00:28:48.417 [2024-07-25 10:17:33.424123] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:48.417 [2024-07-25 10:17:33.424269] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:48.417 [2024-07-25 10:17:33.424297] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:48.417 [2024-07-25 10:17:33.424314] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:48.417 [2024-07-25 10:17:33.424328] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:48.417 [2024-07-25 10:17:33.424361] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:48.417 qpair failed and we were unable to recover it. 00:28:48.417 [2024-07-25 10:17:33.434121] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:48.417 [2024-07-25 10:17:33.434252] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:48.417 [2024-07-25 10:17:33.434282] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:48.417 [2024-07-25 10:17:33.434299] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:48.417 [2024-07-25 10:17:33.434313] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:48.417 [2024-07-25 10:17:33.434346] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:48.417 qpair failed and we were unable to recover it. 
00:28:48.417 [2024-07-25 10:17:33.444094] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:48.417 [2024-07-25 10:17:33.444269] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:48.417 [2024-07-25 10:17:33.444298] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:48.417 [2024-07-25 10:17:33.444316] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:48.417 [2024-07-25 10:17:33.444330] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:48.417 [2024-07-25 10:17:33.444363] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:48.417 qpair failed and we were unable to recover it. 00:28:48.417 [2024-07-25 10:17:33.454339] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:48.417 [2024-07-25 10:17:33.454531] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:48.417 [2024-07-25 10:17:33.454570] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:48.417 [2024-07-25 10:17:33.454588] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:48.417 [2024-07-25 10:17:33.454603] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:48.417 [2024-07-25 10:17:33.454636] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:48.417 qpair failed and we were unable to recover it. 00:28:48.417 [2024-07-25 10:17:33.464281] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:48.417 [2024-07-25 10:17:33.464463] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:48.417 [2024-07-25 10:17:33.464493] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:48.417 [2024-07-25 10:17:33.464510] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:48.417 [2024-07-25 10:17:33.464525] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:48.417 [2024-07-25 10:17:33.464558] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:48.417 qpair failed and we were unable to recover it. 
00:28:48.417 [2024-07-25 10:17:33.474254] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:48.417 [2024-07-25 10:17:33.474383] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:48.417 [2024-07-25 10:17:33.474412] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:48.417 [2024-07-25 10:17:33.474437] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:48.417 [2024-07-25 10:17:33.474453] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90
00:28:48.417 [2024-07-25 10:17:33.474488] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:48.417 qpair failed and we were unable to recover it.
00:28:48.417 [2024-07-25 10:17:33.484314] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:48.417 [2024-07-25 10:17:33.484464] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:48.417 [2024-07-25 10:17:33.484494] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:48.417 [2024-07-25 10:17:33.484511] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:48.417 [2024-07-25 10:17:33.484525] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90
00:28:48.417 [2024-07-25 10:17:33.484560] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:48.417 qpair failed and we were unable to recover it.
00:28:48.417 [2024-07-25 10:17:33.494323] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:48.417 [2024-07-25 10:17:33.494493] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:48.417 [2024-07-25 10:17:33.494524] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:48.417 [2024-07-25 10:17:33.494542] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:48.417 [2024-07-25 10:17:33.494557] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90
00:28:48.417 [2024-07-25 10:17:33.494597] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:48.417 qpair failed and we were unable to recover it.
00:28:48.417 [2024-07-25 10:17:33.504336] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:48.417 [2024-07-25 10:17:33.504525] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:48.417 [2024-07-25 10:17:33.504555] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:48.417 [2024-07-25 10:17:33.504573] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:48.417 [2024-07-25 10:17:33.504587] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90
00:28:48.417 [2024-07-25 10:17:33.504621] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:48.417 qpair failed and we were unable to recover it.
00:28:48.417 [2024-07-25 10:17:33.514373] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:48.417 [2024-07-25 10:17:33.514546] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:48.418 [2024-07-25 10:17:33.514576] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:48.418 [2024-07-25 10:17:33.514594] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:48.418 [2024-07-25 10:17:33.514608] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90
00:28:48.418 [2024-07-25 10:17:33.514642] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:48.418 qpair failed and we were unable to recover it.
00:28:48.418 [2024-07-25 10:17:33.524316] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:48.418 [2024-07-25 10:17:33.524450] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:48.418 [2024-07-25 10:17:33.524480] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:48.418 [2024-07-25 10:17:33.524496] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:48.418 [2024-07-25 10:17:33.524510] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90
00:28:48.418 [2024-07-25 10:17:33.524544] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:48.418 qpair failed and we were unable to recover it.
00:28:48.418 [2024-07-25 10:17:33.534402] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:48.418 [2024-07-25 10:17:33.534581] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:48.418 [2024-07-25 10:17:33.534611] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:48.418 [2024-07-25 10:17:33.534628] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:48.418 [2024-07-25 10:17:33.534643] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90
00:28:48.418 [2024-07-25 10:17:33.534676] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:48.418 qpair failed and we were unable to recover it.
00:28:48.418 [2024-07-25 10:17:33.544416] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:48.418 [2024-07-25 10:17:33.544563] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:48.418 [2024-07-25 10:17:33.544592] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:48.418 [2024-07-25 10:17:33.544609] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:48.418 [2024-07-25 10:17:33.544624] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90
00:28:48.418 [2024-07-25 10:17:33.544657] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:48.418 qpair failed and we were unable to recover it.
00:28:48.418 [2024-07-25 10:17:33.554486] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:48.418 [2024-07-25 10:17:33.554622] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:48.418 [2024-07-25 10:17:33.554650] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:48.418 [2024-07-25 10:17:33.554668] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:48.418 [2024-07-25 10:17:33.554682] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90
00:28:48.418 [2024-07-25 10:17:33.554715] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:48.418 qpair failed and we were unable to recover it.
00:28:48.418 [2024-07-25 10:17:33.564474] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:48.418 [2024-07-25 10:17:33.564596] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:48.418 [2024-07-25 10:17:33.564625] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:48.418 [2024-07-25 10:17:33.564643] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:48.418 [2024-07-25 10:17:33.564657] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90
00:28:48.418 [2024-07-25 10:17:33.564691] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:48.418 qpair failed and we were unable to recover it.
00:28:48.418 [2024-07-25 10:17:33.574481] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:48.418 [2024-07-25 10:17:33.574613] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:48.418 [2024-07-25 10:17:33.574642] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:48.418 [2024-07-25 10:17:33.574659] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:48.418 [2024-07-25 10:17:33.574673] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90
00:28:48.418 [2024-07-25 10:17:33.574707] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:48.418 qpair failed and we were unable to recover it.
00:28:48.676 [2024-07-25 10:17:33.584511] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:48.676 [2024-07-25 10:17:33.584638] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:48.676 [2024-07-25 10:17:33.584666] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:48.676 [2024-07-25 10:17:33.584683] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:48.676 [2024-07-25 10:17:33.584703] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90
00:28:48.676 [2024-07-25 10:17:33.584737] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:48.676 qpair failed and we were unable to recover it.
00:28:48.676 [2024-07-25 10:17:33.594530] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:48.676 [2024-07-25 10:17:33.594657] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:48.676 [2024-07-25 10:17:33.594687] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:48.676 [2024-07-25 10:17:33.594704] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:48.676 [2024-07-25 10:17:33.594718] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90
00:28:48.676 [2024-07-25 10:17:33.594751] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:48.676 qpair failed and we were unable to recover it.
00:28:48.676 [2024-07-25 10:17:33.604613] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:48.676 [2024-07-25 10:17:33.604754] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:48.676 [2024-07-25 10:17:33.604783] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:48.676 [2024-07-25 10:17:33.604800] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:48.676 [2024-07-25 10:17:33.604814] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90
00:28:48.676 [2024-07-25 10:17:33.604847] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:48.676 qpair failed and we were unable to recover it.
00:28:48.676 [2024-07-25 10:17:33.614670] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:48.676 [2024-07-25 10:17:33.614792] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:48.676 [2024-07-25 10:17:33.614822] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:48.676 [2024-07-25 10:17:33.614839] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:48.676 [2024-07-25 10:17:33.614854] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90
00:28:48.676 [2024-07-25 10:17:33.614887] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:48.676 qpair failed and we were unable to recover it.
00:28:48.676 [2024-07-25 10:17:33.624649] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:48.676 [2024-07-25 10:17:33.624781] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:48.676 [2024-07-25 10:17:33.624810] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:48.676 [2024-07-25 10:17:33.624827] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:48.676 [2024-07-25 10:17:33.624841] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90
00:28:48.676 [2024-07-25 10:17:33.624875] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:48.676 qpair failed and we were unable to recover it.
00:28:48.676 [2024-07-25 10:17:33.634665] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:48.676 [2024-07-25 10:17:33.634807] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:48.676 [2024-07-25 10:17:33.634838] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:48.676 [2024-07-25 10:17:33.634856] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:48.676 [2024-07-25 10:17:33.634871] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90
00:28:48.676 [2024-07-25 10:17:33.634904] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:48.676 qpair failed and we were unable to recover it.
00:28:48.676 [2024-07-25 10:17:33.644668] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:48.676 [2024-07-25 10:17:33.644793] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:48.676 [2024-07-25 10:17:33.644822] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:48.676 [2024-07-25 10:17:33.644840] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:48.676 [2024-07-25 10:17:33.644854] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90
00:28:48.676 [2024-07-25 10:17:33.644888] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:48.676 qpair failed and we were unable to recover it.
00:28:48.676 [2024-07-25 10:17:33.654710] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:48.677 [2024-07-25 10:17:33.654843] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:48.677 [2024-07-25 10:17:33.654872] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:48.677 [2024-07-25 10:17:33.654889] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:48.677 [2024-07-25 10:17:33.654904] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90
00:28:48.677 [2024-07-25 10:17:33.654937] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:48.677 qpair failed and we were unable to recover it.
00:28:48.677 [2024-07-25 10:17:33.664800] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:48.677 [2024-07-25 10:17:33.664936] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:48.677 [2024-07-25 10:17:33.664965] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:48.677 [2024-07-25 10:17:33.664982] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:48.677 [2024-07-25 10:17:33.664996] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90
00:28:48.677 [2024-07-25 10:17:33.665033] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:48.677 qpair failed and we were unable to recover it.
00:28:48.677 [2024-07-25 10:17:33.674803] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:48.677 [2024-07-25 10:17:33.674944] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:48.677 [2024-07-25 10:17:33.674980] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:48.677 [2024-07-25 10:17:33.675003] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:48.677 [2024-07-25 10:17:33.675019] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90
00:28:48.677 [2024-07-25 10:17:33.675053] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:48.677 qpair failed and we were unable to recover it.
00:28:48.677 [2024-07-25 10:17:33.684917] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:48.677 [2024-07-25 10:17:33.685047] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:48.677 [2024-07-25 10:17:33.685076] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:48.677 [2024-07-25 10:17:33.685096] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:48.677 [2024-07-25 10:17:33.685110] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90
00:28:48.677 [2024-07-25 10:17:33.685144] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:48.677 qpair failed and we were unable to recover it.
00:28:48.677 [2024-07-25 10:17:33.694849] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:48.677 [2024-07-25 10:17:33.694975] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:48.677 [2024-07-25 10:17:33.695003] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:48.677 [2024-07-25 10:17:33.695020] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:48.677 [2024-07-25 10:17:33.695035] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90
00:28:48.677 [2024-07-25 10:17:33.695068] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:48.677 qpair failed and we were unable to recover it.
00:28:48.677 [2024-07-25 10:17:33.704943] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:48.677 [2024-07-25 10:17:33.705089] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:48.677 [2024-07-25 10:17:33.705117] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:48.677 [2024-07-25 10:17:33.705134] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:48.677 [2024-07-25 10:17:33.705148] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90
00:28:48.677 [2024-07-25 10:17:33.705182] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:48.677 qpair failed and we were unable to recover it.
00:28:48.677 [2024-07-25 10:17:33.714911] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:48.677 [2024-07-25 10:17:33.715041] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:48.677 [2024-07-25 10:17:33.715072] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:48.677 [2024-07-25 10:17:33.715089] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:48.677 [2024-07-25 10:17:33.715104] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90
00:28:48.677 [2024-07-25 10:17:33.715137] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:48.677 qpair failed and we were unable to recover it.
00:28:48.677 [2024-07-25 10:17:33.724983] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:48.677 [2024-07-25 10:17:33.725171] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:48.677 [2024-07-25 10:17:33.725202] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:48.677 [2024-07-25 10:17:33.725220] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:48.677 [2024-07-25 10:17:33.725235] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90
00:28:48.677 [2024-07-25 10:17:33.725270] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:48.677 qpair failed and we were unable to recover it.
00:28:48.677 [2024-07-25 10:17:33.735025] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:48.677 [2024-07-25 10:17:33.735186] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:48.677 [2024-07-25 10:17:33.735216] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:48.677 [2024-07-25 10:17:33.735233] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:48.677 [2024-07-25 10:17:33.735247] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90
00:28:48.677 [2024-07-25 10:17:33.735280] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:48.677 qpair failed and we were unable to recover it.
00:28:48.677 [2024-07-25 10:17:33.744989] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:48.677 [2024-07-25 10:17:33.745143] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:48.677 [2024-07-25 10:17:33.745172] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:48.677 [2024-07-25 10:17:33.745189] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:48.677 [2024-07-25 10:17:33.745204] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90
00:28:48.677 [2024-07-25 10:17:33.745237] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:48.677 qpair failed and we were unable to recover it.
00:28:48.677 [2024-07-25 10:17:33.755043] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:48.677 [2024-07-25 10:17:33.755207] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:48.677 [2024-07-25 10:17:33.755237] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:48.677 [2024-07-25 10:17:33.755254] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:48.677 [2024-07-25 10:17:33.755269] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90
00:28:48.677 [2024-07-25 10:17:33.755304] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:48.677 qpair failed and we were unable to recover it.
00:28:48.677 [2024-07-25 10:17:33.765138] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:48.677 [2024-07-25 10:17:33.765265] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:48.677 [2024-07-25 10:17:33.765294] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:48.677 [2024-07-25 10:17:33.765319] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:48.677 [2024-07-25 10:17:33.765336] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90
00:28:48.677 [2024-07-25 10:17:33.765370] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:48.677 qpair failed and we were unable to recover it.
00:28:48.677 [2024-07-25 10:17:33.775062] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:48.677 [2024-07-25 10:17:33.775200] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:48.677 [2024-07-25 10:17:33.775230] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:48.677 [2024-07-25 10:17:33.775247] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:48.677 [2024-07-25 10:17:33.775261] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90
00:28:48.677 [2024-07-25 10:17:33.775296] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:48.677 qpair failed and we were unable to recover it.
00:28:48.677 [2024-07-25 10:17:33.785172] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:48.677 [2024-07-25 10:17:33.785337] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:48.677 [2024-07-25 10:17:33.785371] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:48.677 [2024-07-25 10:17:33.785388] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:48.677 [2024-07-25 10:17:33.785403] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90
00:28:48.677 [2024-07-25 10:17:33.785447] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:48.677 qpair failed and we were unable to recover it.
00:28:48.677 [2024-07-25 10:17:33.795157] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:48.677 [2024-07-25 10:17:33.795297] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:48.677 [2024-07-25 10:17:33.795326] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:48.677 [2024-07-25 10:17:33.795350] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:48.677 [2024-07-25 10:17:33.795365] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90
00:28:48.677 [2024-07-25 10:17:33.795399] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:48.677 qpair failed and we were unable to recover it.
00:28:48.677 [2024-07-25 10:17:33.805165] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:48.677 [2024-07-25 10:17:33.805298] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:48.677 [2024-07-25 10:17:33.805327] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:48.677 [2024-07-25 10:17:33.805344] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:48.677 [2024-07-25 10:17:33.805358] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90
00:28:48.677 [2024-07-25 10:17:33.805393] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:48.677 qpair failed and we were unable to recover it.
00:28:48.677 [2024-07-25 10:17:33.815247] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:48.677 [2024-07-25 10:17:33.815377] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:48.677 [2024-07-25 10:17:33.815406] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:48.677 [2024-07-25 10:17:33.815424] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:48.677 [2024-07-25 10:17:33.815447] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90
00:28:48.677 [2024-07-25 10:17:33.815495] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:48.677 qpair failed and we were unable to recover it.
00:28:48.677 [2024-07-25 10:17:33.825276] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:48.677 [2024-07-25 10:17:33.825412] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:48.677 [2024-07-25 10:17:33.825448] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:48.677 [2024-07-25 10:17:33.825466] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:48.677 [2024-07-25 10:17:33.825480] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90
00:28:48.677 [2024-07-25 10:17:33.825515] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:48.677 qpair failed and we were unable to recover it.
00:28:48.677 [2024-07-25 10:17:33.835233] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:48.677 [2024-07-25 10:17:33.835401] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:48.677 [2024-07-25 10:17:33.835436] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:48.677 [2024-07-25 10:17:33.835465] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:48.677 [2024-07-25 10:17:33.835482] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90
00:28:48.677 [2024-07-25 10:17:33.835517] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:48.677 qpair failed and we were unable to recover it.
00:28:48.936 [2024-07-25 10:17:33.845338] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:48.936 [2024-07-25 10:17:33.845485] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:48.936 [2024-07-25 10:17:33.845514] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:48.936 [2024-07-25 10:17:33.845532] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:48.936 [2024-07-25 10:17:33.845547] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90
00:28:48.936 [2024-07-25 10:17:33.845581] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:48.936 qpair failed and we were unable to recover it.
00:28:48.936 [2024-07-25 10:17:33.855325] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:48.936 [2024-07-25 10:17:33.855460] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:48.936 [2024-07-25 10:17:33.855495] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:48.936 [2024-07-25 10:17:33.855514] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:48.936 [2024-07-25 10:17:33.855528] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90
00:28:48.936 [2024-07-25 10:17:33.855563] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:48.936 qpair failed and we were unable to recover it.
00:28:48.936 [2024-07-25 10:17:33.865383] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:48.936 [2024-07-25 10:17:33.865525] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:48.936 [2024-07-25 10:17:33.865555] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:48.936 [2024-07-25 10:17:33.865572] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:48.936 [2024-07-25 10:17:33.865587] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90
00:28:48.936 [2024-07-25 10:17:33.865622] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:48.936 qpair failed and we were unable to recover it.
00:28:48.936 [2024-07-25 10:17:33.875408] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:48.936 [2024-07-25 10:17:33.875574] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:48.936 [2024-07-25 10:17:33.875604] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:48.936 [2024-07-25 10:17:33.875621] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:48.936 [2024-07-25 10:17:33.875635] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90
00:28:48.936 [2024-07-25 10:17:33.875670] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:48.936 qpair failed and we were unable to recover it.
00:28:48.936 [2024-07-25 10:17:33.885409] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:48.936 [2024-07-25 10:17:33.885553] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:48.936 [2024-07-25 10:17:33.885582] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:48.936 [2024-07-25 10:17:33.885599] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:48.936 [2024-07-25 10:17:33.885613] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90
00:28:48.936 [2024-07-25 10:17:33.885648] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:48.936 qpair failed and we were unable to recover it.
00:28:48.936 [2024-07-25 10:17:33.895450] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:48.937 [2024-07-25 10:17:33.895588] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:48.937 [2024-07-25 10:17:33.895617] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:48.937 [2024-07-25 10:17:33.895634] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:48.937 [2024-07-25 10:17:33.895648] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90
00:28:48.937 [2024-07-25 10:17:33.895689] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:48.937 qpair failed and we were unable to recover it.
00:28:48.937 [2024-07-25 10:17:33.905476] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:48.937 [2024-07-25 10:17:33.905611] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:48.937 [2024-07-25 10:17:33.905639] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:48.937 [2024-07-25 10:17:33.905668] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:48.937 [2024-07-25 10:17:33.905682] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90
00:28:48.937 [2024-07-25 10:17:33.905716] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:48.937 qpair failed and we were unable to recover it.
00:28:48.937 [2024-07-25 10:17:33.915505] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:48.937 [2024-07-25 10:17:33.915640] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:48.937 [2024-07-25 10:17:33.915669] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:48.937 [2024-07-25 10:17:33.915686] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:48.937 [2024-07-25 10:17:33.915701] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90
00:28:48.937 [2024-07-25 10:17:33.915736] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:48.937 qpair failed and we were unable to recover it.
00:28:48.937 [2024-07-25 10:17:33.925519] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:48.937 [2024-07-25 10:17:33.925704] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:48.937 [2024-07-25 10:17:33.925733] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:48.937 [2024-07-25 10:17:33.925750] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:48.937 [2024-07-25 10:17:33.925765] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90
00:28:48.937 [2024-07-25 10:17:33.925801] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:48.937 qpair failed and we were unable to recover it.
00:28:48.937 [2024-07-25 10:17:33.935603] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:48.937 [2024-07-25 10:17:33.935729] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:48.937 [2024-07-25 10:17:33.935758] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:48.937 [2024-07-25 10:17:33.935776] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:48.937 [2024-07-25 10:17:33.935791] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90
00:28:48.937 [2024-07-25 10:17:33.935825] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:48.937 qpair failed and we were unable to recover it.
00:28:48.937 [2024-07-25 10:17:33.945574] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:48.937 [2024-07-25 10:17:33.945705] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:48.937 [2024-07-25 10:17:33.945739] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:48.937 [2024-07-25 10:17:33.945758] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:48.937 [2024-07-25 10:17:33.945773] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90
00:28:48.937 [2024-07-25 10:17:33.945808] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:48.937 qpair failed and we were unable to recover it.
00:28:48.937 [2024-07-25 10:17:33.955616] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:48.937 [2024-07-25 10:17:33.955761] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:48.937 [2024-07-25 10:17:33.955789] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:48.937 [2024-07-25 10:17:33.955806] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:48.937 [2024-07-25 10:17:33.955821] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90
00:28:48.937 [2024-07-25 10:17:33.955856] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:48.937 qpair failed and we were unable to recover it.
00:28:48.937 [2024-07-25 10:17:33.965656] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:48.937 [2024-07-25 10:17:33.965808] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:48.937 [2024-07-25 10:17:33.965838] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:48.937 [2024-07-25 10:17:33.965855] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:48.937 [2024-07-25 10:17:33.965871] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90
00:28:48.937 [2024-07-25 10:17:33.965905] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:48.937 qpair failed and we were unable to recover it.
00:28:48.937 [2024-07-25 10:17:33.975648] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:48.937 [2024-07-25 10:17:33.975777] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:48.937 [2024-07-25 10:17:33.975806] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:48.937 [2024-07-25 10:17:33.975824] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:48.937 [2024-07-25 10:17:33.975838] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90
00:28:48.937 [2024-07-25 10:17:33.975874] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:48.937 qpair failed and we were unable to recover it.
00:28:48.937 [2024-07-25 10:17:33.985705] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:48.937 [2024-07-25 10:17:33.985840] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:48.937 [2024-07-25 10:17:33.985869] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:48.937 [2024-07-25 10:17:33.985887] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:48.937 [2024-07-25 10:17:33.985908] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90
00:28:48.937 [2024-07-25 10:17:33.985943] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:48.937 qpair failed and we were unable to recover it.
00:28:48.937 [2024-07-25 10:17:33.995688] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:48.937 [2024-07-25 10:17:33.995831] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:48.937 [2024-07-25 10:17:33.995861] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:48.937 [2024-07-25 10:17:33.995878] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:48.937 [2024-07-25 10:17:33.995893] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90
00:28:48.937 [2024-07-25 10:17:33.995928] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:48.937 qpair failed and we were unable to recover it.
00:28:48.937 [2024-07-25 10:17:34.005738] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:48.937 [2024-07-25 10:17:34.005881] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:48.937 [2024-07-25 10:17:34.005910] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:48.937 [2024-07-25 10:17:34.005927] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:48.937 [2024-07-25 10:17:34.005943] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90
00:28:48.937 [2024-07-25 10:17:34.005978] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:48.937 qpair failed and we were unable to recover it.
00:28:48.937 [2024-07-25 10:17:34.015760] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:48.938 [2024-07-25 10:17:34.015898] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:48.938 [2024-07-25 10:17:34.015928] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:48.938 [2024-07-25 10:17:34.015946] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:48.938 [2024-07-25 10:17:34.015961] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90
00:28:48.938 [2024-07-25 10:17:34.015997] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:48.938 qpair failed and we were unable to recover it.
00:28:48.938 [2024-07-25 10:17:34.025829] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:48.938 [2024-07-25 10:17:34.025964] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:48.938 [2024-07-25 10:17:34.025994] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:48.938 [2024-07-25 10:17:34.026013] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:48.938 [2024-07-25 10:17:34.026029] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90
00:28:48.938 [2024-07-25 10:17:34.026064] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:48.938 qpair failed and we were unable to recover it.
00:28:48.938 [2024-07-25 10:17:34.035854] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:48.938 [2024-07-25 10:17:34.036004] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:48.938 [2024-07-25 10:17:34.036037] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:48.938 [2024-07-25 10:17:34.036055] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:48.938 [2024-07-25 10:17:34.036072] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90
00:28:48.938 [2024-07-25 10:17:34.036107] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:48.938 qpair failed and we were unable to recover it.
00:28:48.938 [2024-07-25 10:17:34.045882] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:48.938 [2024-07-25 10:17:34.046007] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:48.938 [2024-07-25 10:17:34.046037] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:48.938 [2024-07-25 10:17:34.046055] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:48.938 [2024-07-25 10:17:34.046070] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:48.938 [2024-07-25 10:17:34.046116] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:48.938 qpair failed and we were unable to recover it. 00:28:48.938 [2024-07-25 10:17:34.055855] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:48.938 [2024-07-25 10:17:34.056000] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:48.938 [2024-07-25 10:17:34.056031] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:48.938 [2024-07-25 10:17:34.056049] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:48.938 [2024-07-25 10:17:34.056064] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:48.938 [2024-07-25 10:17:34.056098] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:48.938 qpair failed and we were unable to recover it. 00:28:48.938 [2024-07-25 10:17:34.065995] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:48.938 [2024-07-25 10:17:34.066133] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:48.938 [2024-07-25 10:17:34.066162] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:48.938 [2024-07-25 10:17:34.066180] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:48.938 [2024-07-25 10:17:34.066195] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:48.938 [2024-07-25 10:17:34.066228] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:48.938 qpair failed and we were unable to recover it. 
00:28:48.938 [2024-07-25 10:17:34.075937] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:48.938 [2024-07-25 10:17:34.076068] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:48.938 [2024-07-25 10:17:34.076098] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:48.938 [2024-07-25 10:17:34.076121] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:48.938 [2024-07-25 10:17:34.076138] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:48.938 [2024-07-25 10:17:34.076184] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:48.938 qpair failed and we were unable to recover it. 00:28:48.938 [2024-07-25 10:17:34.085969] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:48.938 [2024-07-25 10:17:34.086147] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:48.938 [2024-07-25 10:17:34.086176] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:48.938 [2024-07-25 10:17:34.086193] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:48.938 [2024-07-25 10:17:34.086218] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:48.938 [2024-07-25 10:17:34.086253] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:48.938 qpair failed and we were unable to recover it. 00:28:48.938 [2024-07-25 10:17:34.096006] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:48.938 [2024-07-25 10:17:34.096148] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:48.938 [2024-07-25 10:17:34.096179] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:48.938 [2024-07-25 10:17:34.096196] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:48.938 [2024-07-25 10:17:34.096211] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:48.938 [2024-07-25 10:17:34.096246] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:48.938 qpair failed and we were unable to recover it. 
00:28:49.197 [2024-07-25 10:17:34.106024] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:49.197 [2024-07-25 10:17:34.106173] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:49.197 [2024-07-25 10:17:34.106209] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:49.197 [2024-07-25 10:17:34.106227] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:49.197 [2024-07-25 10:17:34.106242] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:49.197 [2024-07-25 10:17:34.106276] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:49.197 qpair failed and we were unable to recover it. 00:28:49.197 [2024-07-25 10:17:34.116056] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:49.197 [2024-07-25 10:17:34.116201] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:49.197 [2024-07-25 10:17:34.116231] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:49.197 [2024-07-25 10:17:34.116248] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:49.197 [2024-07-25 10:17:34.116263] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:49.198 [2024-07-25 10:17:34.116309] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:49.198 qpair failed and we were unable to recover it. 00:28:49.198 [2024-07-25 10:17:34.126108] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:49.198 [2024-07-25 10:17:34.126231] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:49.198 [2024-07-25 10:17:34.126260] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:49.198 [2024-07-25 10:17:34.126277] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:49.198 [2024-07-25 10:17:34.126292] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:49.198 [2024-07-25 10:17:34.126327] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:49.198 qpair failed and we were unable to recover it. 
00:28:49.198 [2024-07-25 10:17:34.136123] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:49.198 [2024-07-25 10:17:34.136290] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:49.198 [2024-07-25 10:17:34.136321] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:49.198 [2024-07-25 10:17:34.136338] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:49.198 [2024-07-25 10:17:34.136354] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:49.198 [2024-07-25 10:17:34.136388] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:49.198 qpair failed and we were unable to recover it. 00:28:49.198 [2024-07-25 10:17:34.146150] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:49.198 [2024-07-25 10:17:34.146331] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:49.198 [2024-07-25 10:17:34.146360] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:49.198 [2024-07-25 10:17:34.146378] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:49.198 [2024-07-25 10:17:34.146392] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:49.198 [2024-07-25 10:17:34.146435] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:49.198 qpair failed and we were unable to recover it. 00:28:49.198 [2024-07-25 10:17:34.156155] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:49.198 [2024-07-25 10:17:34.156301] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:49.198 [2024-07-25 10:17:34.156330] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:49.198 [2024-07-25 10:17:34.156348] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:49.198 [2024-07-25 10:17:34.156364] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:49.198 [2024-07-25 10:17:34.156400] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:49.198 qpair failed and we were unable to recover it. 
00:28:49.198 [2024-07-25 10:17:34.166164] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:49.198 [2024-07-25 10:17:34.166299] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:49.198 [2024-07-25 10:17:34.166328] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:49.198 [2024-07-25 10:17:34.166351] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:49.198 [2024-07-25 10:17:34.166368] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:49.198 [2024-07-25 10:17:34.166403] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:49.198 qpair failed and we were unable to recover it. 00:28:49.198 [2024-07-25 10:17:34.176226] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:49.198 [2024-07-25 10:17:34.176366] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:49.198 [2024-07-25 10:17:34.176403] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:49.198 [2024-07-25 10:17:34.176421] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:49.198 [2024-07-25 10:17:34.176444] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:49.198 [2024-07-25 10:17:34.176486] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:49.198 qpair failed and we were unable to recover it. 00:28:49.198 [2024-07-25 10:17:34.186269] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:49.198 [2024-07-25 10:17:34.186401] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:49.198 [2024-07-25 10:17:34.186436] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:49.198 [2024-07-25 10:17:34.186455] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:49.198 [2024-07-25 10:17:34.186474] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:49.198 [2024-07-25 10:17:34.186508] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:49.198 qpair failed and we were unable to recover it. 
00:28:49.198 [2024-07-25 10:17:34.196287] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:49.198 [2024-07-25 10:17:34.196420] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:49.198 [2024-07-25 10:17:34.196465] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:49.198 [2024-07-25 10:17:34.196482] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:49.198 [2024-07-25 10:17:34.196499] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:49.198 [2024-07-25 10:17:34.196533] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:49.198 qpair failed and we were unable to recover it. 00:28:49.198 [2024-07-25 10:17:34.206359] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:49.198 [2024-07-25 10:17:34.206506] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:49.198 [2024-07-25 10:17:34.206535] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:49.198 [2024-07-25 10:17:34.206552] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:49.198 [2024-07-25 10:17:34.206567] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:49.198 [2024-07-25 10:17:34.206603] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:49.198 qpair failed and we were unable to recover it. 00:28:49.198 [2024-07-25 10:17:34.216386] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:49.198 [2024-07-25 10:17:34.216518] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:49.198 [2024-07-25 10:17:34.216548] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:49.198 [2024-07-25 10:17:34.216566] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:49.198 [2024-07-25 10:17:34.216580] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:49.198 [2024-07-25 10:17:34.216615] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:49.198 qpair failed and we were unable to recover it. 
00:28:49.198 [2024-07-25 10:17:34.226392] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:49.198 [2024-07-25 10:17:34.226535] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:49.198 [2024-07-25 10:17:34.226564] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:49.198 [2024-07-25 10:17:34.226581] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:49.198 [2024-07-25 10:17:34.226596] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:49.198 [2024-07-25 10:17:34.226631] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:49.198 qpair failed and we were unable to recover it. 00:28:49.198 [2024-07-25 10:17:34.236370] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:49.198 [2024-07-25 10:17:34.236505] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:49.198 [2024-07-25 10:17:34.236533] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:49.198 [2024-07-25 10:17:34.236550] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:49.198 [2024-07-25 10:17:34.236564] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:49.198 [2024-07-25 10:17:34.236597] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:49.199 qpair failed and we were unable to recover it. 00:28:49.199 [2024-07-25 10:17:34.246440] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:49.199 [2024-07-25 10:17:34.246593] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:49.199 [2024-07-25 10:17:34.246621] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:49.199 [2024-07-25 10:17:34.246638] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:49.199 [2024-07-25 10:17:34.246653] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:49.199 [2024-07-25 10:17:34.246688] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:49.199 qpair failed and we were unable to recover it. 
00:28:49.199 [2024-07-25 10:17:34.256438] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:49.199 [2024-07-25 10:17:34.256576] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:49.199 [2024-07-25 10:17:34.256612] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:49.199 [2024-07-25 10:17:34.256631] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:49.199 [2024-07-25 10:17:34.256646] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:49.199 [2024-07-25 10:17:34.256681] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:49.199 qpair failed and we were unable to recover it. 00:28:49.199 [2024-07-25 10:17:34.266484] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:49.199 [2024-07-25 10:17:34.266625] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:49.199 [2024-07-25 10:17:34.266653] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:49.199 [2024-07-25 10:17:34.266671] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:49.199 [2024-07-25 10:17:34.266685] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:49.199 [2024-07-25 10:17:34.266720] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:49.199 qpair failed and we were unable to recover it. 00:28:49.199 [2024-07-25 10:17:34.276551] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:49.199 [2024-07-25 10:17:34.276699] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:49.199 [2024-07-25 10:17:34.276729] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:49.199 [2024-07-25 10:17:34.276746] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:49.199 [2024-07-25 10:17:34.276761] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:49.199 [2024-07-25 10:17:34.276796] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:49.199 qpair failed and we were unable to recover it. 
00:28:49.199 [2024-07-25 10:17:34.286580] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:49.199 [2024-07-25 10:17:34.286728] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:49.199 [2024-07-25 10:17:34.286757] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:49.199 [2024-07-25 10:17:34.286774] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:49.199 [2024-07-25 10:17:34.286789] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:49.199 [2024-07-25 10:17:34.286825] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:49.199 qpair failed and we were unable to recover it. 00:28:49.199 [2024-07-25 10:17:34.296590] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:49.199 [2024-07-25 10:17:34.296714] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:49.199 [2024-07-25 10:17:34.296743] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:49.199 [2024-07-25 10:17:34.296760] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:49.199 [2024-07-25 10:17:34.296774] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:49.199 [2024-07-25 10:17:34.296814] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:49.199 qpair failed and we were unable to recover it. 00:28:49.199 [2024-07-25 10:17:34.306651] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:49.199 [2024-07-25 10:17:34.306785] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:49.199 [2024-07-25 10:17:34.306814] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:49.199 [2024-07-25 10:17:34.306831] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:49.199 [2024-07-25 10:17:34.306846] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:49.199 [2024-07-25 10:17:34.306892] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:49.199 qpair failed and we were unable to recover it. 
00:28:49.199 [2024-07-25 10:17:34.316678] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:49.199 [2024-07-25 10:17:34.316808] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:49.199 [2024-07-25 10:17:34.316838] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:49.199 [2024-07-25 10:17:34.316856] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:49.199 [2024-07-25 10:17:34.316870] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:49.199 [2024-07-25 10:17:34.316908] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:49.199 qpair failed and we were unable to recover it. 00:28:49.199 [2024-07-25 10:17:34.326692] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:49.199 [2024-07-25 10:17:34.326865] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:49.199 [2024-07-25 10:17:34.326893] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:49.199 [2024-07-25 10:17:34.326910] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:49.199 [2024-07-25 10:17:34.326924] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:49.199 [2024-07-25 10:17:34.326969] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:49.199 qpair failed and we were unable to recover it. 00:28:49.199 [2024-07-25 10:17:34.336649] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:49.199 [2024-07-25 10:17:34.336799] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:49.199 [2024-07-25 10:17:34.336828] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:49.199 [2024-07-25 10:17:34.336846] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:49.199 [2024-07-25 10:17:34.336860] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:49.199 [2024-07-25 10:17:34.336895] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:49.199 qpair failed and we were unable to recover it. 
00:28:49.199 [2024-07-25 10:17:34.346796] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:49.199 [2024-07-25 10:17:34.346971] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:49.199 [2024-07-25 10:17:34.347006] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:49.199 [2024-07-25 10:17:34.347024] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:49.199 [2024-07-25 10:17:34.347041] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:49.199 [2024-07-25 10:17:34.347075] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:49.199 qpair failed and we were unable to recover it. 00:28:49.199 [2024-07-25 10:17:34.356750] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:49.199 [2024-07-25 10:17:34.356877] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:49.199 [2024-07-25 10:17:34.356907] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:49.199 [2024-07-25 10:17:34.356930] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:49.199 [2024-07-25 10:17:34.356945] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:49.199 [2024-07-25 10:17:34.356979] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:49.199 qpair failed and we were unable to recover it. 00:28:49.458 [2024-07-25 10:17:34.366798] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:49.458 [2024-07-25 10:17:34.366940] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:49.458 [2024-07-25 10:17:34.366969] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:49.458 [2024-07-25 10:17:34.366987] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:49.458 [2024-07-25 10:17:34.367001] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:49.458 [2024-07-25 10:17:34.367037] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:49.458 qpair failed and we were unable to recover it. 
00:28:49.458 [2024-07-25 10:17:34.376857] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:49.458 [2024-07-25 10:17:34.376984] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:49.458 [2024-07-25 10:17:34.377013] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:49.458 [2024-07-25 10:17:34.377030] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:49.458 [2024-07-25 10:17:34.377045] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:49.458 [2024-07-25 10:17:34.377079] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:49.458 qpair failed and we were unable to recover it. 00:28:49.458 [2024-07-25 10:17:34.386824] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:49.458 [2024-07-25 10:17:34.386958] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:49.458 [2024-07-25 10:17:34.386987] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:49.458 [2024-07-25 10:17:34.387004] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:49.458 [2024-07-25 10:17:34.387026] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:49.458 [2024-07-25 10:17:34.387061] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:49.458 qpair failed and we were unable to recover it. 00:28:49.458 [2024-07-25 10:17:34.396870] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:49.458 [2024-07-25 10:17:34.397043] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:49.458 [2024-07-25 10:17:34.397072] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:49.458 [2024-07-25 10:17:34.397090] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:49.458 [2024-07-25 10:17:34.397104] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:49.458 [2024-07-25 10:17:34.397139] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:49.458 qpair failed and we were unable to recover it. 
00:28:49.458 [2024-07-25 10:17:34.406858] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:49.458 [2024-07-25 10:17:34.407004] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:49.458 [2024-07-25 10:17:34.407033] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:49.458 [2024-07-25 10:17:34.407059] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:49.458 [2024-07-25 10:17:34.407073] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:49.458 [2024-07-25 10:17:34.407107] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:49.458 qpair failed and we were unable to recover it. 00:28:49.458 [2024-07-25 10:17:34.416880] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:49.458 [2024-07-25 10:17:34.417004] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:49.458 [2024-07-25 10:17:34.417033] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:49.458 [2024-07-25 10:17:34.417050] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:49.458 [2024-07-25 10:17:34.417065] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:49.458 [2024-07-25 10:17:34.417101] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:49.458 qpair failed and we were unable to recover it. 00:28:49.458 [2024-07-25 10:17:34.426916] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:49.458 [2024-07-25 10:17:34.427074] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:49.458 [2024-07-25 10:17:34.427105] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:49.458 [2024-07-25 10:17:34.427123] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:49.458 [2024-07-25 10:17:34.427137] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:49.458 [2024-07-25 10:17:34.427172] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:49.458 qpair failed and we were unable to recover it. 
00:28:49.458 [2024-07-25 10:17:34.436978] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:49.458 [2024-07-25 10:17:34.437118] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:49.458 [2024-07-25 10:17:34.437147] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:49.458 [2024-07-25 10:17:34.437165] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:49.458 [2024-07-25 10:17:34.437180] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:49.458 [2024-07-25 10:17:34.437215] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:49.458 qpair failed and we were unable to recover it. 00:28:49.458 [2024-07-25 10:17:34.446988] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:49.458 [2024-07-25 10:17:34.447115] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:49.458 [2024-07-25 10:17:34.447144] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:49.458 [2024-07-25 10:17:34.447161] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:49.458 [2024-07-25 10:17:34.447177] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:49.458 [2024-07-25 10:17:34.447212] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:49.458 qpair failed and we were unable to recover it. 00:28:49.458 [2024-07-25 10:17:34.456999] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:49.458 [2024-07-25 10:17:34.457122] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:49.458 [2024-07-25 10:17:34.457152] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:49.458 [2024-07-25 10:17:34.457168] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:49.458 [2024-07-25 10:17:34.457184] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:49.458 [2024-07-25 10:17:34.457218] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:49.458 qpair failed and we were unable to recover it. 
00:28:49.458 [2024-07-25 10:17:34.467052] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:49.458 [2024-07-25 10:17:34.467191] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:49.459 [2024-07-25 10:17:34.467220] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:49.459 [2024-07-25 10:17:34.467237] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:49.459 [2024-07-25 10:17:34.467252] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:49.459 [2024-07-25 10:17:34.467286] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:49.459 qpair failed and we were unable to recover it. 00:28:49.459 [2024-07-25 10:17:34.477060] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:49.459 [2024-07-25 10:17:34.477192] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:49.459 [2024-07-25 10:17:34.477221] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:49.459 [2024-07-25 10:17:34.477238] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:49.459 [2024-07-25 10:17:34.477260] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:49.459 [2024-07-25 10:17:34.477294] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:49.459 qpair failed and we were unable to recover it. 00:28:49.459 [2024-07-25 10:17:34.487173] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:49.459 [2024-07-25 10:17:34.487341] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:49.459 [2024-07-25 10:17:34.487370] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:49.459 [2024-07-25 10:17:34.487387] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:49.459 [2024-07-25 10:17:34.487401] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:49.459 [2024-07-25 10:17:34.487444] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:49.459 qpair failed and we were unable to recover it. 
00:28:49.459 [2024-07-25 10:17:34.497140] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:49.459 [2024-07-25 10:17:34.497281] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:49.459 [2024-07-25 10:17:34.497311] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:49.459 [2024-07-25 10:17:34.497328] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:49.459 [2024-07-25 10:17:34.497343] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:49.459 [2024-07-25 10:17:34.497378] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:49.459 qpair failed and we were unable to recover it. 00:28:49.459 [2024-07-25 10:17:34.507184] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:49.459 [2024-07-25 10:17:34.507329] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:49.459 [2024-07-25 10:17:34.507357] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:49.459 [2024-07-25 10:17:34.507375] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:49.459 [2024-07-25 10:17:34.507389] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:49.459 [2024-07-25 10:17:34.507425] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:49.459 qpair failed and we were unable to recover it. 00:28:49.459 [2024-07-25 10:17:34.517189] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:49.459 [2024-07-25 10:17:34.517317] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:49.459 [2024-07-25 10:17:34.517346] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:49.459 [2024-07-25 10:17:34.517364] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:49.459 [2024-07-25 10:17:34.517378] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:49.459 [2024-07-25 10:17:34.517413] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:49.459 qpair failed and we were unable to recover it. 
00:28:49.459 [2024-07-25 10:17:34.527231] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:49.459 [2024-07-25 10:17:34.527400] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:49.459 [2024-07-25 10:17:34.527436] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:49.459 [2024-07-25 10:17:34.527457] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:49.459 [2024-07-25 10:17:34.527472] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:49.459 [2024-07-25 10:17:34.527507] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:49.459 qpair failed and we were unable to recover it. 00:28:49.459 [2024-07-25 10:17:34.537240] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:49.459 [2024-07-25 10:17:34.537375] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:49.459 [2024-07-25 10:17:34.537404] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:49.459 [2024-07-25 10:17:34.537422] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:49.459 [2024-07-25 10:17:34.537444] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:49.459 [2024-07-25 10:17:34.537480] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:49.459 qpair failed and we were unable to recover it. 00:28:49.459 [2024-07-25 10:17:34.547322] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:49.459 [2024-07-25 10:17:34.547519] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:49.459 [2024-07-25 10:17:34.547548] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:49.459 [2024-07-25 10:17:34.547565] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:49.459 [2024-07-25 10:17:34.547579] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:49.459 [2024-07-25 10:17:34.547615] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:49.459 qpair failed and we were unable to recover it. 
00:28:49.459 [2024-07-25 10:17:34.557326] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:49.459 [2024-07-25 10:17:34.557482] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:49.459 [2024-07-25 10:17:34.557511] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:49.459 [2024-07-25 10:17:34.557529] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:49.459 [2024-07-25 10:17:34.557544] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:49.459 [2024-07-25 10:17:34.557578] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:49.459 qpair failed and we were unable to recover it. 00:28:49.459 [2024-07-25 10:17:34.567422] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:49.459 [2024-07-25 10:17:34.567566] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:49.459 [2024-07-25 10:17:34.567596] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:49.459 [2024-07-25 10:17:34.567619] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:49.459 [2024-07-25 10:17:34.567635] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:49.459 [2024-07-25 10:17:34.567670] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:49.459 qpair failed and we were unable to recover it. 00:28:49.459 [2024-07-25 10:17:34.577398] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:49.459 [2024-07-25 10:17:34.577543] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:49.459 [2024-07-25 10:17:34.577572] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:49.459 [2024-07-25 10:17:34.577589] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:49.459 [2024-07-25 10:17:34.577604] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:49.459 [2024-07-25 10:17:34.577639] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:49.459 qpair failed and we were unable to recover it. 
00:28:49.459 [2024-07-25 10:17:34.587527] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:49.459 [2024-07-25 10:17:34.587694] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:49.459 [2024-07-25 10:17:34.587723] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:49.459 [2024-07-25 10:17:34.587740] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:49.459 [2024-07-25 10:17:34.587754] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:49.459 [2024-07-25 10:17:34.587790] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:49.459 qpair failed and we were unable to recover it. 00:28:49.459 [2024-07-25 10:17:34.597422] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:49.459 [2024-07-25 10:17:34.597574] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:49.460 [2024-07-25 10:17:34.597615] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:49.460 [2024-07-25 10:17:34.597632] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:49.460 [2024-07-25 10:17:34.597647] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:49.460 [2024-07-25 10:17:34.597683] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:49.460 qpair failed and we were unable to recover it. 00:28:49.460 [2024-07-25 10:17:34.607458] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:49.460 [2024-07-25 10:17:34.607584] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:49.460 [2024-07-25 10:17:34.607613] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:49.460 [2024-07-25 10:17:34.607631] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:49.460 [2024-07-25 10:17:34.607646] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:49.460 [2024-07-25 10:17:34.607682] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:49.460 qpair failed and we were unable to recover it. 
00:28:49.460 [2024-07-25 10:17:34.617494] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:49.460 [2024-07-25 10:17:34.617661] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:49.460 [2024-07-25 10:17:34.617690] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:49.460 [2024-07-25 10:17:34.617707] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:49.460 [2024-07-25 10:17:34.617722] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:49.460 [2024-07-25 10:17:34.617756] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:49.460 qpair failed and we were unable to recover it. 00:28:49.719 [2024-07-25 10:17:34.627523] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:49.719 [2024-07-25 10:17:34.627657] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:49.719 [2024-07-25 10:17:34.627687] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:49.719 [2024-07-25 10:17:34.627705] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:49.719 [2024-07-25 10:17:34.627721] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:49.719 [2024-07-25 10:17:34.627755] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:49.719 qpair failed and we were unable to recover it. 00:28:49.719 [2024-07-25 10:17:34.637588] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:49.719 [2024-07-25 10:17:34.637733] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:49.719 [2024-07-25 10:17:34.637768] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:49.719 [2024-07-25 10:17:34.637785] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:49.719 [2024-07-25 10:17:34.637800] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:49.719 [2024-07-25 10:17:34.637835] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:49.719 qpair failed and we were unable to recover it. 
00:28:49.719 [2024-07-25 10:17:34.647623] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:49.719 [2024-07-25 10:17:34.647752] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:49.719 [2024-07-25 10:17:34.647782] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:49.719 [2024-07-25 10:17:34.647798] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:49.719 [2024-07-25 10:17:34.647813] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:49.719 [2024-07-25 10:17:34.647848] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:49.719 qpair failed and we were unable to recover it. 00:28:49.719 [2024-07-25 10:17:34.657593] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:49.719 [2024-07-25 10:17:34.657744] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:49.719 [2024-07-25 10:17:34.657778] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:49.719 [2024-07-25 10:17:34.657796] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:49.719 [2024-07-25 10:17:34.657811] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:49.719 [2024-07-25 10:17:34.657846] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:49.719 qpair failed and we were unable to recover it. 00:28:49.719 [2024-07-25 10:17:34.667725] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:49.719 [2024-07-25 10:17:34.667856] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:49.719 [2024-07-25 10:17:34.667885] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:49.719 [2024-07-25 10:17:34.667903] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:49.719 [2024-07-25 10:17:34.667919] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:49.719 [2024-07-25 10:17:34.667952] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:49.719 qpair failed and we were unable to recover it. 
00:28:49.719 [2024-07-25 10:17:34.677654] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:49.719 [2024-07-25 10:17:34.677830] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:49.719 [2024-07-25 10:17:34.677859] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:49.719 [2024-07-25 10:17:34.677876] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:49.719 [2024-07-25 10:17:34.677891] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:49.719 [2024-07-25 10:17:34.677926] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:49.719 qpair failed and we were unable to recover it. 00:28:49.719 [2024-07-25 10:17:34.687684] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:49.719 [2024-07-25 10:17:34.687829] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:49.719 [2024-07-25 10:17:34.687858] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:49.719 [2024-07-25 10:17:34.687875] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:49.719 [2024-07-25 10:17:34.687889] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:49.719 [2024-07-25 10:17:34.687925] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:49.719 qpair failed and we were unable to recover it. 00:28:49.719 [2024-07-25 10:17:34.697700] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:49.719 [2024-07-25 10:17:34.697865] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:49.719 [2024-07-25 10:17:34.697894] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:49.719 [2024-07-25 10:17:34.697911] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:49.719 [2024-07-25 10:17:34.697925] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:49.719 [2024-07-25 10:17:34.697968] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:49.719 qpair failed and we were unable to recover it. 
00:28:49.719 [2024-07-25 10:17:34.707814] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:49.719 [2024-07-25 10:17:34.707953] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:49.719 [2024-07-25 10:17:34.707982] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:49.719 [2024-07-25 10:17:34.707999] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:49.719 [2024-07-25 10:17:34.708013] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:49.719 [2024-07-25 10:17:34.708049] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:49.719 qpair failed and we were unable to recover it. 00:28:49.719 [2024-07-25 10:17:34.717803] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:49.719 [2024-07-25 10:17:34.717972] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:49.719 [2024-07-25 10:17:34.718002] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:49.719 [2024-07-25 10:17:34.718019] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:49.719 [2024-07-25 10:17:34.718034] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:49.719 [2024-07-25 10:17:34.718070] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:49.719 qpair failed and we were unable to recover it. 00:28:49.719 [2024-07-25 10:17:34.727777] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:49.719 [2024-07-25 10:17:34.727911] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:49.719 [2024-07-25 10:17:34.727941] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:49.719 [2024-07-25 10:17:34.727958] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:49.719 [2024-07-25 10:17:34.727972] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:49.719 [2024-07-25 10:17:34.728007] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:49.719 qpair failed and we were unable to recover it. 
00:28:49.719 [2024-07-25 10:17:34.737830] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:49.719 [2024-07-25 10:17:34.738002] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:49.719 [2024-07-25 10:17:34.738032] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:49.719 [2024-07-25 10:17:34.738050] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:49.719 [2024-07-25 10:17:34.738065] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:49.719 [2024-07-25 10:17:34.738100] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:49.719 qpair failed and we were unable to recover it. 00:28:49.719 [2024-07-25 10:17:34.747873] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:49.719 [2024-07-25 10:17:34.748068] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:49.719 [2024-07-25 10:17:34.748102] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:49.719 [2024-07-25 10:17:34.748121] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:49.719 [2024-07-25 10:17:34.748137] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:49.720 [2024-07-25 10:17:34.748172] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:49.720 qpair failed and we were unable to recover it. 00:28:49.720 [2024-07-25 10:17:34.757880] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:49.720 [2024-07-25 10:17:34.758013] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:49.720 [2024-07-25 10:17:34.758042] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:49.720 [2024-07-25 10:17:34.758059] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:49.720 [2024-07-25 10:17:34.758073] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:49.720 [2024-07-25 10:17:34.758109] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:49.720 qpair failed and we were unable to recover it. 
00:28:49.720 [2024-07-25 10:17:34.767898] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:49.720 [2024-07-25 10:17:34.768044] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:49.720 [2024-07-25 10:17:34.768073] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:49.720 [2024-07-25 10:17:34.768090] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:49.720 [2024-07-25 10:17:34.768105] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:49.720 [2024-07-25 10:17:34.768140] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:49.720 qpair failed and we were unable to recover it. 00:28:49.720 [2024-07-25 10:17:34.778010] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:49.720 [2024-07-25 10:17:34.778134] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:49.720 [2024-07-25 10:17:34.778163] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:49.720 [2024-07-25 10:17:34.778181] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:49.720 [2024-07-25 10:17:34.778196] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:49.720 [2024-07-25 10:17:34.778230] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:49.720 qpair failed and we were unable to recover it. 00:28:49.720 [2024-07-25 10:17:34.787987] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:49.720 [2024-07-25 10:17:34.788121] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:49.720 [2024-07-25 10:17:34.788150] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:49.720 [2024-07-25 10:17:34.788168] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:49.720 [2024-07-25 10:17:34.788189] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:49.720 [2024-07-25 10:17:34.788224] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:49.720 qpair failed and we were unable to recover it. 
00:28:49.720 [2024-07-25 10:17:34.797990] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:49.720 [2024-07-25 10:17:34.798146] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:49.720 [2024-07-25 10:17:34.798175] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:49.720 [2024-07-25 10:17:34.798192] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:49.720 [2024-07-25 10:17:34.798207] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:49.720 [2024-07-25 10:17:34.798242] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:49.720 qpair failed and we were unable to recover it. 00:28:49.720 [2024-07-25 10:17:34.808041] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:49.720 [2024-07-25 10:17:34.808207] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:49.720 [2024-07-25 10:17:34.808243] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:49.720 [2024-07-25 10:17:34.808264] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:49.720 [2024-07-25 10:17:34.808280] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:49.720 [2024-07-25 10:17:34.808315] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:49.720 qpair failed and we were unable to recover it. 00:28:49.720 [2024-07-25 10:17:34.818064] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:49.720 [2024-07-25 10:17:34.818193] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:49.720 [2024-07-25 10:17:34.818223] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:49.720 [2024-07-25 10:17:34.818239] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:49.720 [2024-07-25 10:17:34.818254] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:49.720 [2024-07-25 10:17:34.818290] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:49.720 qpair failed and we were unable to recover it. 
00:28:49.720 [2024-07-25 10:17:34.828124] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:49.720 [2024-07-25 10:17:34.828262] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:49.720 [2024-07-25 10:17:34.828292] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:49.720 [2024-07-25 10:17:34.828311] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:49.720 [2024-07-25 10:17:34.828325] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:49.720 [2024-07-25 10:17:34.828358] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:49.720 qpair failed and we were unable to recover it. 00:28:49.720 [2024-07-25 10:17:34.838119] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:49.720 [2024-07-25 10:17:34.838277] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:49.720 [2024-07-25 10:17:34.838306] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:49.720 [2024-07-25 10:17:34.838323] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:49.720 [2024-07-25 10:17:34.838338] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:49.720 [2024-07-25 10:17:34.838374] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:49.720 qpair failed and we were unable to recover it. 00:28:49.720 [2024-07-25 10:17:34.848181] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:49.720 [2024-07-25 10:17:34.848351] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:49.720 [2024-07-25 10:17:34.848379] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:49.720 [2024-07-25 10:17:34.848396] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:49.720 [2024-07-25 10:17:34.848411] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:49.720 [2024-07-25 10:17:34.848452] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:49.720 qpair failed and we were unable to recover it. 
00:28:49.720 [2024-07-25 10:17:34.858307] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:49.720 [2024-07-25 10:17:34.858473] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:49.720 [2024-07-25 10:17:34.858502] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:49.720 [2024-07-25 10:17:34.858519] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:49.720 [2024-07-25 10:17:34.858534] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:49.720 [2024-07-25 10:17:34.858570] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:49.720 qpair failed and we were unable to recover it. 00:28:49.720 [2024-07-25 10:17:34.868237] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:49.720 [2024-07-25 10:17:34.868380] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:49.720 [2024-07-25 10:17:34.868410] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:49.720 [2024-07-25 10:17:34.868433] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:49.721 [2024-07-25 10:17:34.868450] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:49.721 [2024-07-25 10:17:34.868484] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:49.721 qpair failed and we were unable to recover it. 00:28:49.721 [2024-07-25 10:17:34.878249] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:49.721 [2024-07-25 10:17:34.878380] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:49.721 [2024-07-25 10:17:34.878408] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:49.721 [2024-07-25 10:17:34.878426] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:49.721 [2024-07-25 10:17:34.878454] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:49.721 [2024-07-25 10:17:34.878488] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:49.721 qpair failed and we were unable to recover it. 
00:28:49.980 [2024-07-25 10:17:34.888336] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:49.980 [2024-07-25 10:17:34.888512] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:49.980 [2024-07-25 10:17:34.888542] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:49.980 [2024-07-25 10:17:34.888559] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:49.980 [2024-07-25 10:17:34.888573] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:49.980 [2024-07-25 10:17:34.888610] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:49.980 qpair failed and we were unable to recover it. 00:28:49.980 [2024-07-25 10:17:34.898309] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:49.980 [2024-07-25 10:17:34.898447] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:49.980 [2024-07-25 10:17:34.898476] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:49.980 [2024-07-25 10:17:34.898494] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:49.980 [2024-07-25 10:17:34.898509] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:49.980 [2024-07-25 10:17:34.898545] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:49.980 qpair failed and we were unable to recover it. 00:28:49.980 [2024-07-25 10:17:34.908371] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:49.980 [2024-07-25 10:17:34.908515] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:49.980 [2024-07-25 10:17:34.908544] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:49.980 [2024-07-25 10:17:34.908561] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:49.980 [2024-07-25 10:17:34.908576] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:49.980 [2024-07-25 10:17:34.908612] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:49.980 qpair failed and we were unable to recover it. 
00:28:49.980 [2024-07-25 10:17:34.918368] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:49.980 [2024-07-25 10:17:34.918510] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:49.980 [2024-07-25 10:17:34.918540] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:49.980 [2024-07-25 10:17:34.918557] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:49.980 [2024-07-25 10:17:34.918571] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:49.980 [2024-07-25 10:17:34.918607] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:49.980 qpair failed and we were unable to recover it. 00:28:49.980 [2024-07-25 10:17:34.928367] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:49.980 [2024-07-25 10:17:34.928496] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:49.980 [2024-07-25 10:17:34.928526] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:49.980 [2024-07-25 10:17:34.928544] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:49.980 [2024-07-25 10:17:34.928558] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:49.980 [2024-07-25 10:17:34.928594] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:49.980 qpair failed and we were unable to recover it. 00:28:49.980 [2024-07-25 10:17:34.938393] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:49.980 [2024-07-25 10:17:34.938529] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:49.980 [2024-07-25 10:17:34.938558] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:49.980 [2024-07-25 10:17:34.938575] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:49.980 [2024-07-25 10:17:34.938590] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:49.980 [2024-07-25 10:17:34.938625] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:49.980 qpair failed and we were unable to recover it. 
00:28:49.980 [2024-07-25 10:17:34.948447] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:49.980 [2024-07-25 10:17:34.948582] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:49.980 [2024-07-25 10:17:34.948610] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:49.980 [2024-07-25 10:17:34.948627] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:49.980 [2024-07-25 10:17:34.948643] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:49.980 [2024-07-25 10:17:34.948678] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:49.980 qpair failed and we were unable to recover it. 00:28:49.980 [2024-07-25 10:17:34.958467] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:49.980 [2024-07-25 10:17:34.958599] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:49.980 [2024-07-25 10:17:34.958629] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:49.980 [2024-07-25 10:17:34.958646] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:49.980 [2024-07-25 10:17:34.958661] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:49.981 [2024-07-25 10:17:34.958696] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:49.981 qpair failed and we were unable to recover it. 00:28:49.981 [2024-07-25 10:17:34.968525] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:49.981 [2024-07-25 10:17:34.968700] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:49.981 [2024-07-25 10:17:34.968729] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:49.981 [2024-07-25 10:17:34.968752] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:49.981 [2024-07-25 10:17:34.968769] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:49.981 [2024-07-25 10:17:34.968803] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:49.981 qpair failed and we were unable to recover it. 
00:28:49.981 [2024-07-25 10:17:34.978659] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:49.981 [2024-07-25 10:17:34.978815] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:49.981 [2024-07-25 10:17:34.978843] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:49.981 [2024-07-25 10:17:34.978860] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:49.981 [2024-07-25 10:17:34.978876] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:49.981 [2024-07-25 10:17:34.978911] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:49.981 qpair failed and we were unable to recover it. 00:28:49.981 [2024-07-25 10:17:34.988598] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:49.981 [2024-07-25 10:17:34.988754] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:49.981 [2024-07-25 10:17:34.988783] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:49.981 [2024-07-25 10:17:34.988800] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:49.981 [2024-07-25 10:17:34.988816] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:49.981 [2024-07-25 10:17:34.988851] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:49.981 qpair failed and we were unable to recover it. 00:28:49.981 [2024-07-25 10:17:34.998593] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:49.981 [2024-07-25 10:17:34.998753] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:49.981 [2024-07-25 10:17:34.998782] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:49.981 [2024-07-25 10:17:34.998799] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:49.981 [2024-07-25 10:17:34.998813] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:49.981 [2024-07-25 10:17:34.998848] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:49.981 qpair failed and we were unable to recover it. 
00:28:49.981 [2024-07-25 10:17:35.008683] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:49.981 [2024-07-25 10:17:35.008822] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:49.981 [2024-07-25 10:17:35.008851] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:49.981 [2024-07-25 10:17:35.008868] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:49.981 [2024-07-25 10:17:35.008885] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:49.981 [2024-07-25 10:17:35.008920] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:49.981 qpair failed and we were unable to recover it. 00:28:49.981 [2024-07-25 10:17:35.018654] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:49.981 [2024-07-25 10:17:35.018782] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:49.981 [2024-07-25 10:17:35.018812] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:49.981 [2024-07-25 10:17:35.018829] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:49.981 [2024-07-25 10:17:35.018844] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:49.981 [2024-07-25 10:17:35.018878] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:49.981 qpair failed and we were unable to recover it. 00:28:49.981 [2024-07-25 10:17:35.028686] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:49.981 [2024-07-25 10:17:35.028828] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:49.981 [2024-07-25 10:17:35.028856] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:49.981 [2024-07-25 10:17:35.028874] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:49.981 [2024-07-25 10:17:35.028888] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:49.981 [2024-07-25 10:17:35.028923] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:49.981 qpair failed and we were unable to recover it. 
00:28:49.981 [2024-07-25 10:17:35.038739] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:49.981 [2024-07-25 10:17:35.038915] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:49.981 [2024-07-25 10:17:35.038944] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:49.981 [2024-07-25 10:17:35.038961] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:49.981 [2024-07-25 10:17:35.038975] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:49.981 [2024-07-25 10:17:35.039010] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:49.981 qpair failed and we were unable to recover it. 00:28:49.981 [2024-07-25 10:17:35.048753] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:49.981 [2024-07-25 10:17:35.048879] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:49.981 [2024-07-25 10:17:35.048908] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:49.982 [2024-07-25 10:17:35.048925] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:49.982 [2024-07-25 10:17:35.048940] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:49.982 [2024-07-25 10:17:35.048974] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:49.982 qpair failed and we were unable to recover it. 00:28:49.982 [2024-07-25 10:17:35.058745] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:49.982 [2024-07-25 10:17:35.058918] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:49.982 [2024-07-25 10:17:35.058957] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:49.982 [2024-07-25 10:17:35.058976] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:49.982 [2024-07-25 10:17:35.058991] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:49.982 [2024-07-25 10:17:35.059026] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:49.982 qpair failed and we were unable to recover it. 
00:28:49.982 [2024-07-25 10:17:35.068800] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:49.982 [2024-07-25 10:17:35.068944] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:49.982 [2024-07-25 10:17:35.068972] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:49.982 [2024-07-25 10:17:35.068989] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:49.982 [2024-07-25 10:17:35.069004] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:49.982 [2024-07-25 10:17:35.069039] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:49.982 qpair failed and we were unable to recover it. 00:28:49.982 [2024-07-25 10:17:35.078782] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:49.982 [2024-07-25 10:17:35.078907] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:49.982 [2024-07-25 10:17:35.078935] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:49.982 [2024-07-25 10:17:35.078952] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:49.982 [2024-07-25 10:17:35.078967] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:49.982 [2024-07-25 10:17:35.079001] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:49.982 qpair failed and we were unable to recover it. 00:28:49.982 [2024-07-25 10:17:35.088874] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:49.982 [2024-07-25 10:17:35.089026] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:49.982 [2024-07-25 10:17:35.089055] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:49.982 [2024-07-25 10:17:35.089072] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:49.982 [2024-07-25 10:17:35.089087] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:49.982 [2024-07-25 10:17:35.089122] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:49.982 qpair failed and we were unable to recover it. 
00:28:49.982 [2024-07-25 10:17:35.098876] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:49.982 [2024-07-25 10:17:35.099055] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:49.982 [2024-07-25 10:17:35.099085] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:49.982 [2024-07-25 10:17:35.099102] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:49.982 [2024-07-25 10:17:35.099117] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:49.982 [2024-07-25 10:17:35.099158] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:49.982 qpair failed and we were unable to recover it. 00:28:49.982 [2024-07-25 10:17:35.108900] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:49.982 [2024-07-25 10:17:35.109031] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:49.982 [2024-07-25 10:17:35.109061] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:49.982 [2024-07-25 10:17:35.109078] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:49.982 [2024-07-25 10:17:35.109095] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:49.982 [2024-07-25 10:17:35.109129] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:49.982 qpair failed and we were unable to recover it. 00:28:49.982 [2024-07-25 10:17:35.118925] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:49.982 [2024-07-25 10:17:35.119051] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:49.982 [2024-07-25 10:17:35.119080] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:49.982 [2024-07-25 10:17:35.119097] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:49.982 [2024-07-25 10:17:35.119112] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:49.982 [2024-07-25 10:17:35.119147] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:49.982 qpair failed and we were unable to recover it. 
00:28:49.982 [2024-07-25 10:17:35.128945] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:49.982 [2024-07-25 10:17:35.129103] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:49.982 [2024-07-25 10:17:35.129133] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:49.982 [2024-07-25 10:17:35.129149] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:49.982 [2024-07-25 10:17:35.129165] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90
00:28:49.982 [2024-07-25 10:17:35.129199] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:49.982 qpair failed and we were unable to recover it.
00:28:49.982 [2024-07-25 10:17:35.138963] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:49.983 [2024-07-25 10:17:35.139095] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:49.983 [2024-07-25 10:17:35.139124] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:49.983 [2024-07-25 10:17:35.139141] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:49.983 [2024-07-25 10:17:35.139156] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90
00:28:49.983 [2024-07-25 10:17:35.139191] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:49.983 qpair failed and we were unable to recover it.
00:28:50.242 [2024-07-25 10:17:35.149027] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:50.242 [2024-07-25 10:17:35.149166] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:50.242 [2024-07-25 10:17:35.149201] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:50.242 [2024-07-25 10:17:35.149219] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:50.242 [2024-07-25 10:17:35.149234] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90
00:28:50.242 [2024-07-25 10:17:35.149269] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:50.242 qpair failed and we were unable to recover it.
00:28:50.242 [2024-07-25 10:17:35.159063] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:50.242 [2024-07-25 10:17:35.159197] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:50.242 [2024-07-25 10:17:35.159225] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:50.242 [2024-07-25 10:17:35.159243] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:50.242 [2024-07-25 10:17:35.159257] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90
00:28:50.242 [2024-07-25 10:17:35.159293] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:50.242 qpair failed and we were unable to recover it.
00:28:50.242 [2024-07-25 10:17:35.169073] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:50.242 [2024-07-25 10:17:35.169220] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:50.242 [2024-07-25 10:17:35.169251] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:50.242 [2024-07-25 10:17:35.169268] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:50.242 [2024-07-25 10:17:35.169283] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90
00:28:50.242 [2024-07-25 10:17:35.169319] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:50.242 qpair failed and we were unable to recover it.
00:28:50.242 [2024-07-25 10:17:35.179122] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:50.242 [2024-07-25 10:17:35.179262] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:50.242 [2024-07-25 10:17:35.179291] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:50.242 [2024-07-25 10:17:35.179308] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:50.242 [2024-07-25 10:17:35.179322] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90
00:28:50.242 [2024-07-25 10:17:35.179356] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:50.242 qpair failed and we were unable to recover it.
00:28:50.242 [2024-07-25 10:17:35.189140] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:50.242 [2024-07-25 10:17:35.189288] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:50.242 [2024-07-25 10:17:35.189318] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:50.242 [2024-07-25 10:17:35.189335] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:50.242 [2024-07-25 10:17:35.189352] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90
00:28:50.242 [2024-07-25 10:17:35.189394] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:50.242 qpair failed and we were unable to recover it.
00:28:50.242 [2024-07-25 10:17:35.199156] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:50.242 [2024-07-25 10:17:35.199290] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:50.242 [2024-07-25 10:17:35.199320] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:50.242 [2024-07-25 10:17:35.199338] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:50.242 [2024-07-25 10:17:35.199354] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90
00:28:50.242 [2024-07-25 10:17:35.199389] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:50.242 qpair failed and we were unable to recover it.
00:28:50.242 [2024-07-25 10:17:35.209234] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:50.242 [2024-07-25 10:17:35.209392] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:50.242 [2024-07-25 10:17:35.209423] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:50.242 [2024-07-25 10:17:35.209459] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:50.242 [2024-07-25 10:17:35.209475] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90
00:28:50.242 [2024-07-25 10:17:35.209511] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:50.242 qpair failed and we were unable to recover it.
00:28:50.242 [2024-07-25 10:17:35.219288] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:50.242 [2024-07-25 10:17:35.219413] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:50.242 [2024-07-25 10:17:35.219450] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:50.242 [2024-07-25 10:17:35.219468] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:50.242 [2024-07-25 10:17:35.219482] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90
00:28:50.242 [2024-07-25 10:17:35.219519] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:50.242 qpair failed and we were unable to recover it.
00:28:50.242 [2024-07-25 10:17:35.229260] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:50.242 [2024-07-25 10:17:35.229391] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:50.242 [2024-07-25 10:17:35.229419] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:50.243 [2024-07-25 10:17:35.229446] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:50.243 [2024-07-25 10:17:35.229462] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90
00:28:50.243 [2024-07-25 10:17:35.229499] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:50.243 qpair failed and we were unable to recover it.
00:28:50.243 [2024-07-25 10:17:35.239267] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:50.243 [2024-07-25 10:17:35.239435] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:50.243 [2024-07-25 10:17:35.239463] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:50.243 [2024-07-25 10:17:35.239480] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:50.243 [2024-07-25 10:17:35.239495] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90
00:28:50.243 [2024-07-25 10:17:35.239528] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:50.243 qpair failed and we were unable to recover it.
00:28:50.243 [2024-07-25 10:17:35.249363] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:50.243 [2024-07-25 10:17:35.249511] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:50.243 [2024-07-25 10:17:35.249541] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:50.243 [2024-07-25 10:17:35.249559] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:50.243 [2024-07-25 10:17:35.249574] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90
00:28:50.243 [2024-07-25 10:17:35.249611] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:50.243 qpair failed and we were unable to recover it.
00:28:50.243 [2024-07-25 10:17:35.259323] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:50.243 [2024-07-25 10:17:35.259461] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:50.243 [2024-07-25 10:17:35.259490] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:50.243 [2024-07-25 10:17:35.259508] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:50.243 [2024-07-25 10:17:35.259523] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90
00:28:50.243 [2024-07-25 10:17:35.259558] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:50.243 qpair failed and we were unable to recover it.
00:28:50.243 [2024-07-25 10:17:35.269397] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:50.243 [2024-07-25 10:17:35.269548] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:50.243 [2024-07-25 10:17:35.269578] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:50.243 [2024-07-25 10:17:35.269595] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:50.243 [2024-07-25 10:17:35.269610] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90
00:28:50.243 [2024-07-25 10:17:35.269644] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:50.243 qpair failed and we were unable to recover it.
00:28:50.243 [2024-07-25 10:17:35.279368] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:50.243 [2024-07-25 10:17:35.279498] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:50.243 [2024-07-25 10:17:35.279528] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:50.243 [2024-07-25 10:17:35.279545] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:50.243 [2024-07-25 10:17:35.279566] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90
00:28:50.243 [2024-07-25 10:17:35.279600] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:50.243 qpair failed and we were unable to recover it.
00:28:50.243 [2024-07-25 10:17:35.289405] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:50.243 [2024-07-25 10:17:35.289560] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:50.243 [2024-07-25 10:17:35.289592] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:50.243 [2024-07-25 10:17:35.289610] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:50.243 [2024-07-25 10:17:35.289626] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90
00:28:50.243 [2024-07-25 10:17:35.289661] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:50.243 qpair failed and we were unable to recover it.
00:28:50.243 [2024-07-25 10:17:35.299420] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:50.243 [2024-07-25 10:17:35.299606] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:50.243 [2024-07-25 10:17:35.299636] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:50.243 [2024-07-25 10:17:35.299653] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:50.243 [2024-07-25 10:17:35.299668] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90
00:28:50.243 [2024-07-25 10:17:35.299704] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:50.243 qpair failed and we were unable to recover it.
00:28:50.243 [2024-07-25 10:17:35.309530] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:50.243 [2024-07-25 10:17:35.309663] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:50.243 [2024-07-25 10:17:35.309692] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:50.243 [2024-07-25 10:17:35.309710] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:50.243 [2024-07-25 10:17:35.309725] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90
00:28:50.243 [2024-07-25 10:17:35.309761] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:50.243 qpair failed and we were unable to recover it.
00:28:50.243 [2024-07-25 10:17:35.319535] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:50.243 [2024-07-25 10:17:35.319666] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:50.243 [2024-07-25 10:17:35.319695] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:50.243 [2024-07-25 10:17:35.319712] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:50.243 [2024-07-25 10:17:35.319730] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90
00:28:50.243 [2024-07-25 10:17:35.319765] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:50.243 qpair failed and we were unable to recover it.
00:28:50.243 [2024-07-25 10:17:35.329548] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:50.243 [2024-07-25 10:17:35.329682] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:50.243 [2024-07-25 10:17:35.329712] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:50.243 [2024-07-25 10:17:35.329728] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:50.243 [2024-07-25 10:17:35.329743] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90
00:28:50.243 [2024-07-25 10:17:35.329776] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:50.243 qpair failed and we were unable to recover it.
00:28:50.243 [2024-07-25 10:17:35.339568] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:50.243 [2024-07-25 10:17:35.339736] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:50.243 [2024-07-25 10:17:35.339766] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:50.243 [2024-07-25 10:17:35.339783] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:50.243 [2024-07-25 10:17:35.339797] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90
00:28:50.243 [2024-07-25 10:17:35.339833] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:50.243 qpair failed and we were unable to recover it.
00:28:50.243 [2024-07-25 10:17:35.349719] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:50.243 [2024-07-25 10:17:35.349894] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:50.243 [2024-07-25 10:17:35.349923] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:50.243 [2024-07-25 10:17:35.349940] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:50.243 [2024-07-25 10:17:35.349954] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90
00:28:50.243 [2024-07-25 10:17:35.349989] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:50.243 qpair failed and we were unable to recover it.
00:28:50.243 [2024-07-25 10:17:35.359637] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:50.243 [2024-07-25 10:17:35.359811] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:50.243 [2024-07-25 10:17:35.359840] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:50.243 [2024-07-25 10:17:35.359857] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:50.243 [2024-07-25 10:17:35.359871] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90
00:28:50.244 [2024-07-25 10:17:35.359905] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:50.244 qpair failed and we were unable to recover it.
00:28:50.244 [2024-07-25 10:17:35.369645] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:50.244 [2024-07-25 10:17:35.369772] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:50.244 [2024-07-25 10:17:35.369801] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:50.244 [2024-07-25 10:17:35.369824] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:50.244 [2024-07-25 10:17:35.369841] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90
00:28:50.244 [2024-07-25 10:17:35.369875] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:50.244 qpair failed and we were unable to recover it.
00:28:50.244 [2024-07-25 10:17:35.379666] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:50.244 [2024-07-25 10:17:35.379792] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:50.244 [2024-07-25 10:17:35.379821] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:50.244 [2024-07-25 10:17:35.379837] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:50.244 [2024-07-25 10:17:35.379852] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90
00:28:50.244 [2024-07-25 10:17:35.379888] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:50.244 qpair failed and we were unable to recover it.
00:28:50.244 [2024-07-25 10:17:35.389721] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:50.244 [2024-07-25 10:17:35.389890] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:50.244 [2024-07-25 10:17:35.389919] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:50.244 [2024-07-25 10:17:35.389936] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:50.244 [2024-07-25 10:17:35.389951] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90
00:28:50.244 [2024-07-25 10:17:35.389986] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:50.244 qpair failed and we were unable to recover it.
00:28:50.244 [2024-07-25 10:17:35.399721] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:50.244 [2024-07-25 10:17:35.399895] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:50.244 [2024-07-25 10:17:35.399925] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:50.244 [2024-07-25 10:17:35.399943] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:50.244 [2024-07-25 10:17:35.399957] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90
00:28:50.244 [2024-07-25 10:17:35.399990] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:50.244 qpair failed and we were unable to recover it.
00:28:50.503 [2024-07-25 10:17:35.409769] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:50.503 [2024-07-25 10:17:35.409899] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:50.503 [2024-07-25 10:17:35.409928] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:50.503 [2024-07-25 10:17:35.409946] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:50.503 [2024-07-25 10:17:35.409961] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90
00:28:50.503 [2024-07-25 10:17:35.409996] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:50.503 qpair failed and we were unable to recover it.
00:28:50.503 [2024-07-25 10:17:35.419783] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:50.503 [2024-07-25 10:17:35.419921] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:50.503 [2024-07-25 10:17:35.419950] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:50.503 [2024-07-25 10:17:35.419968] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:50.503 [2024-07-25 10:17:35.419984] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90
00:28:50.503 [2024-07-25 10:17:35.420018] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:50.503 qpair failed and we were unable to recover it.
00:28:50.503 [2024-07-25 10:17:35.429825] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:50.503 [2024-07-25 10:17:35.430003] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:50.503 [2024-07-25 10:17:35.430032] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:50.503 [2024-07-25 10:17:35.430050] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:50.503 [2024-07-25 10:17:35.430064] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90
00:28:50.503 [2024-07-25 10:17:35.430098] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:50.503 qpair failed and we were unable to recover it.
00:28:50.503 [2024-07-25 10:17:35.439835] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:50.503 [2024-07-25 10:17:35.440006] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:50.503 [2024-07-25 10:17:35.440035] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:50.503 [2024-07-25 10:17:35.440053] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:50.503 [2024-07-25 10:17:35.440067] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90
00:28:50.503 [2024-07-25 10:17:35.440102] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:50.503 qpair failed and we were unable to recover it.
00:28:50.503 [2024-07-25 10:17:35.449993] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:50.503 [2024-07-25 10:17:35.450140] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:50.503 [2024-07-25 10:17:35.450169] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:50.503 [2024-07-25 10:17:35.450186] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:50.503 [2024-07-25 10:17:35.450200] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90
00:28:50.503 [2024-07-25 10:17:35.450236] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:50.503 qpair failed and we were unable to recover it.
00:28:50.503 [2024-07-25 10:17:35.459917] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:50.503 [2024-07-25 10:17:35.460048] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:50.503 [2024-07-25 10:17:35.460077] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:50.503 [2024-07-25 10:17:35.460100] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:50.503 [2024-07-25 10:17:35.460117] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90
00:28:50.503 [2024-07-25 10:17:35.460151] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:50.503 qpair failed and we were unable to recover it.
00:28:50.503 [2024-07-25 10:17:35.469957] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:50.503 [2024-07-25 10:17:35.470091] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:50.503 [2024-07-25 10:17:35.470120] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:50.503 [2024-07-25 10:17:35.470136] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:50.503 [2024-07-25 10:17:35.470151] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90
00:28:50.503 [2024-07-25 10:17:35.470184] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:50.503 qpair failed and we were unable to recover it.
00:28:50.503 [2024-07-25 10:17:35.479989] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:50.503 [2024-07-25 10:17:35.480118] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:50.503 [2024-07-25 10:17:35.480146] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:50.503 [2024-07-25 10:17:35.480163] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:50.503 [2024-07-25 10:17:35.480178] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90
00:28:50.503 [2024-07-25 10:17:35.480213] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:50.503 qpair failed and we were unable to recover it.
00:28:50.503 [2024-07-25 10:17:35.490012] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:50.503 [2024-07-25 10:17:35.490143] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:50.503 [2024-07-25 10:17:35.490172] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:50.503 [2024-07-25 10:17:35.490189] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:50.503 [2024-07-25 10:17:35.490204] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90
00:28:50.503 [2024-07-25 10:17:35.490239] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:50.503 qpair failed and we were unable to recover it.
00:28:50.503 [2024-07-25 10:17:35.500002] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:50.504 [2024-07-25 10:17:35.500133] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:50.504 [2024-07-25 10:17:35.500162] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:50.504 [2024-07-25 10:17:35.500179] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:50.504 [2024-07-25 10:17:35.500195] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90
00:28:50.504 [2024-07-25 10:17:35.500229] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:50.504 qpair failed and we were unable to recover it.
00:28:50.504 [2024-07-25 10:17:35.510171] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:50.504 [2024-07-25 10:17:35.510343] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:50.504 [2024-07-25 10:17:35.510372] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:50.504 [2024-07-25 10:17:35.510390] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:50.504 [2024-07-25 10:17:35.510405] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90
00:28:50.504 [2024-07-25 10:17:35.510447] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:50.504 qpair failed and we were unable to recover it.
00:28:50.504 [2024-07-25 10:17:35.520079] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:50.504 [2024-07-25 10:17:35.520219] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:50.504 [2024-07-25 10:17:35.520248] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:50.504 [2024-07-25 10:17:35.520265] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:50.504 [2024-07-25 10:17:35.520281] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90
00:28:50.504 [2024-07-25 10:17:35.520316] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:50.504 qpair failed and we were unable to recover it.
00:28:50.504 [2024-07-25 10:17:35.530109] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:50.504 [2024-07-25 10:17:35.530279] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:50.504 [2024-07-25 10:17:35.530309] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:50.504 [2024-07-25 10:17:35.530326] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:50.504 [2024-07-25 10:17:35.530342] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90
00:28:50.504 [2024-07-25 10:17:35.530377] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:50.504 qpair failed and we were unable to recover it.
00:28:50.504 [2024-07-25 10:17:35.540170] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:50.504 [2024-07-25 10:17:35.540295] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:50.504 [2024-07-25 10:17:35.540324] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:50.504 [2024-07-25 10:17:35.540341] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:50.504 [2024-07-25 10:17:35.540356] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90
00:28:50.504 [2024-07-25 10:17:35.540390] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:50.504 qpair failed and we were unable to recover it.
00:28:50.504 [2024-07-25 10:17:35.550253] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:50.504 [2024-07-25 10:17:35.550386] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:50.504 [2024-07-25 10:17:35.550422] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:50.504 [2024-07-25 10:17:35.550450] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:50.504 [2024-07-25 10:17:35.550468] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90
00:28:50.504 [2024-07-25 10:17:35.550502] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:50.504 qpair failed and we were unable to recover it.
00:28:50.504 [2024-07-25 10:17:35.560244] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:50.504 [2024-07-25 10:17:35.560378] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:50.504 [2024-07-25 10:17:35.560407] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:50.504 [2024-07-25 10:17:35.560424] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:50.504 [2024-07-25 10:17:35.560448] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90
00:28:50.504 [2024-07-25 10:17:35.560483] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:50.504 qpair failed and we were unable to recover it.
00:28:50.504 [2024-07-25 10:17:35.570265] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:50.504 [2024-07-25 10:17:35.570421] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:50.504 [2024-07-25 10:17:35.570457] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:50.504 [2024-07-25 10:17:35.570475] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:50.504 [2024-07-25 10:17:35.570490] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90
00:28:50.504 [2024-07-25 10:17:35.570525] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:50.504 qpair failed and we were unable to recover it.
00:28:50.504 [2024-07-25 10:17:35.580289] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:50.504 [2024-07-25 10:17:35.580439] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:50.504 [2024-07-25 10:17:35.580468] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:50.504 [2024-07-25 10:17:35.580486] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:50.504 [2024-07-25 10:17:35.580501] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90
00:28:50.504 [2024-07-25 10:17:35.580535] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:50.504 qpair failed and we were unable to recover it.
00:28:50.504 [2024-07-25 10:17:35.590381] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:50.504 [2024-07-25 10:17:35.590544] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:50.504 [2024-07-25 10:17:35.590573] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:50.504 [2024-07-25 10:17:35.590591] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:50.504 [2024-07-25 10:17:35.590605] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90
00:28:50.504 [2024-07-25 10:17:35.590647] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:50.504 qpair failed and we were unable to recover it.
00:28:50.504 [2024-07-25 10:17:35.600342] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:50.504 [2024-07-25 10:17:35.600473] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:50.504 [2024-07-25 10:17:35.600502] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:50.504 [2024-07-25 10:17:35.600520] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:50.504 [2024-07-25 10:17:35.600535] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90
00:28:50.504 [2024-07-25 10:17:35.600570] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:50.504 qpair failed and we were unable to recover it.
00:28:50.504 [2024-07-25 10:17:35.610426] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:50.504 [2024-07-25 10:17:35.610564] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:50.504 [2024-07-25 10:17:35.610594] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:50.504 [2024-07-25 10:17:35.610612] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:50.504 [2024-07-25 10:17:35.610627] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90
00:28:50.504 [2024-07-25 10:17:35.610662] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:50.504 qpair failed and we were unable to recover it.
00:28:50.504 [2024-07-25 10:17:35.620407] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:50.504 [2024-07-25 10:17:35.620551] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:50.504 [2024-07-25 10:17:35.620580] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:50.504 [2024-07-25 10:17:35.620606] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:50.504 [2024-07-25 10:17:35.620621] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90
00:28:50.504 [2024-07-25 10:17:35.620655] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:50.504 qpair failed and we were unable to recover it.
00:28:50.504 [2024-07-25 10:17:35.630465] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:50.504 [2024-07-25 10:17:35.630606] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:50.504 [2024-07-25 10:17:35.630635] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:50.505 [2024-07-25 10:17:35.630653] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:50.505 [2024-07-25 10:17:35.630668] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90
00:28:50.505 [2024-07-25 10:17:35.630704] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:50.505 qpair failed and we were unable to recover it.
00:28:50.505 [2024-07-25 10:17:35.640506] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:50.505 [2024-07-25 10:17:35.640665] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:50.505 [2024-07-25 10:17:35.640708] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:50.505 [2024-07-25 10:17:35.640726] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:50.505 [2024-07-25 10:17:35.640741] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90
00:28:50.505 [2024-07-25 10:17:35.640775] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:50.505 qpair failed and we were unable to recover it.
00:28:50.505 [2024-07-25 10:17:35.650497] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:50.505 [2024-07-25 10:17:35.650628] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:50.505 [2024-07-25 10:17:35.650658] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:50.505 [2024-07-25 10:17:35.650675] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:50.505 [2024-07-25 10:17:35.650690] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90
00:28:50.505 [2024-07-25 10:17:35.650724] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:50.505 qpair failed and we were unable to recover it.
00:28:50.505 [2024-07-25 10:17:35.660551] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:50.505 [2024-07-25 10:17:35.660694] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:50.505 [2024-07-25 10:17:35.660723] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:50.505 [2024-07-25 10:17:35.660740] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:50.505 [2024-07-25 10:17:35.660755] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90
00:28:50.505 [2024-07-25 10:17:35.660790] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:50.505 qpair failed and we were unable to recover it.
00:28:50.765 [2024-07-25 10:17:35.670559] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:50.765 [2024-07-25 10:17:35.670695] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:50.765 [2024-07-25 10:17:35.670724] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:50.765 [2024-07-25 10:17:35.670746] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:50.765 [2024-07-25 10:17:35.670761] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90
00:28:50.765 [2024-07-25 10:17:35.670796] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:50.765 qpair failed and we were unable to recover it.
00:28:50.765 [2024-07-25 10:17:35.680586] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:50.765 [2024-07-25 10:17:35.680724] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:50.765 [2024-07-25 10:17:35.680753] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:50.765 [2024-07-25 10:17:35.680769] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:50.765 [2024-07-25 10:17:35.680793] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90
00:28:50.765 [2024-07-25 10:17:35.680829] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:50.765 qpair failed and we were unable to recover it.
00:28:50.765 [2024-07-25 10:17:35.690611] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:50.765 [2024-07-25 10:17:35.690743] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:50.765 [2024-07-25 10:17:35.690772] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:50.765 [2024-07-25 10:17:35.690789] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:50.765 [2024-07-25 10:17:35.690804] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90
00:28:50.765 [2024-07-25 10:17:35.690839] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:50.765 qpair failed and we were unable to recover it.
00:28:50.765 [2024-07-25 10:17:35.700609] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:50.765 [2024-07-25 10:17:35.700737] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:50.765 [2024-07-25 10:17:35.700765] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:50.765 [2024-07-25 10:17:35.700783] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:50.765 [2024-07-25 10:17:35.700797] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90
00:28:50.765 [2024-07-25 10:17:35.700830] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:50.765 qpair failed and we were unable to recover it.
00:28:50.765 [2024-07-25 10:17:35.710692] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:50.765 [2024-07-25 10:17:35.710834] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:50.766 [2024-07-25 10:17:35.710864] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:50.766 [2024-07-25 10:17:35.710881] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:50.766 [2024-07-25 10:17:35.710896] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90
00:28:50.766 [2024-07-25 10:17:35.710931] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:50.766 qpair failed and we were unable to recover it.
00:28:50.766 [2024-07-25 10:17:35.720708] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:50.766 [2024-07-25 10:17:35.720876] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:50.766 [2024-07-25 10:17:35.720905] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:50.766 [2024-07-25 10:17:35.720922] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:50.766 [2024-07-25 10:17:35.720936] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90
00:28:50.766 [2024-07-25 10:17:35.720971] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:50.766 qpair failed and we were unable to recover it.
00:28:50.766 [2024-07-25 10:17:35.730698] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:50.766 [2024-07-25 10:17:35.730827] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:50.766 [2024-07-25 10:17:35.730857] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:50.766 [2024-07-25 10:17:35.730874] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:50.766 [2024-07-25 10:17:35.730889] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90
00:28:50.766 [2024-07-25 10:17:35.730923] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:50.766 qpair failed and we were unable to recover it.
00:28:50.766 [2024-07-25 10:17:35.740738] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:50.766 [2024-07-25 10:17:35.740863] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:50.766 [2024-07-25 10:17:35.740891] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:50.766 [2024-07-25 10:17:35.740908] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:50.766 [2024-07-25 10:17:35.740923] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90
00:28:50.766 [2024-07-25 10:17:35.740956] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:50.766 qpair failed and we were unable to recover it.
00:28:50.766 [2024-07-25 10:17:35.750805] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:50.766 [2024-07-25 10:17:35.750950] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:50.766 [2024-07-25 10:17:35.750979] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:50.766 [2024-07-25 10:17:35.750997] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:50.766 [2024-07-25 10:17:35.751011] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90
00:28:50.766 [2024-07-25 10:17:35.751047] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:50.766 qpair failed and we were unable to recover it.
00:28:50.766 [2024-07-25 10:17:35.760852] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:50.766 [2024-07-25 10:17:35.760990] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:50.766 [2024-07-25 10:17:35.761019] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:50.766 [2024-07-25 10:17:35.761036] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:50.766 [2024-07-25 10:17:35.761051] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90
00:28:50.766 [2024-07-25 10:17:35.761087] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:50.766 qpair failed and we were unable to recover it.
00:28:50.766 [2024-07-25 10:17:35.770867] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:50.766 [2024-07-25 10:17:35.771039] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:50.766 [2024-07-25 10:17:35.771068] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:50.766 [2024-07-25 10:17:35.771092] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:50.766 [2024-07-25 10:17:35.771109] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90
00:28:50.766 [2024-07-25 10:17:35.771143] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:50.766 qpair failed and we were unable to recover it.
00:28:50.766 [2024-07-25 10:17:35.780874] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:50.766 [2024-07-25 10:17:35.781026] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:50.766 [2024-07-25 10:17:35.781055] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:50.766 [2024-07-25 10:17:35.781072] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:50.766 [2024-07-25 10:17:35.781086] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90
00:28:50.766 [2024-07-25 10:17:35.781122] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:50.766 qpair failed and we were unable to recover it.
00:28:50.766 [2024-07-25 10:17:35.790919] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:50.766 [2024-07-25 10:17:35.791049] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:50.766 [2024-07-25 10:17:35.791078] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:50.766 [2024-07-25 10:17:35.791095] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:50.766 [2024-07-25 10:17:35.791110] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90
00:28:50.766 [2024-07-25 10:17:35.791143] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:50.766 qpair failed and we were unable to recover it.
00:28:50.766 [2024-07-25 10:17:35.800907] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:50.766 [2024-07-25 10:17:35.801029] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:50.766 [2024-07-25 10:17:35.801057] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:50.766 [2024-07-25 10:17:35.801075] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:50.766 [2024-07-25 10:17:35.801090] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90
00:28:50.766 [2024-07-25 10:17:35.801124] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:50.766 qpair failed and we were unable to recover it.
00:28:50.766 [2024-07-25 10:17:35.810970] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:50.766 [2024-07-25 10:17:35.811106] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:50.766 [2024-07-25 10:17:35.811135] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:50.766 [2024-07-25 10:17:35.811153] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:50.766 [2024-07-25 10:17:35.811167] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90
00:28:50.766 [2024-07-25 10:17:35.811203] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:50.766 qpair failed and we were unable to recover it.
00:28:50.766 [2024-07-25 10:17:35.820979] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:50.766 [2024-07-25 10:17:35.821118] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:50.766 [2024-07-25 10:17:35.821148] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:50.766 [2024-07-25 10:17:35.821165] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:50.766 [2024-07-25 10:17:35.821179] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:50.766 [2024-07-25 10:17:35.821213] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:50.766 qpair failed and we were unable to recover it. 00:28:50.766 [2024-07-25 10:17:35.831053] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:50.767 [2024-07-25 10:17:35.831219] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:50.767 [2024-07-25 10:17:35.831252] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:50.767 [2024-07-25 10:17:35.831269] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:50.767 [2024-07-25 10:17:35.831284] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:50.767 [2024-07-25 10:17:35.831319] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:50.767 qpair failed and we were unable to recover it. 00:28:50.767 [2024-07-25 10:17:35.841034] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:50.767 [2024-07-25 10:17:35.841158] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:50.767 [2024-07-25 10:17:35.841189] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:50.767 [2024-07-25 10:17:35.841206] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:50.767 [2024-07-25 10:17:35.841221] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:50.767 [2024-07-25 10:17:35.841264] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:50.767 qpair failed and we were unable to recover it. 
00:28:50.767 [2024-07-25 10:17:35.851104] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:50.767 [2024-07-25 10:17:35.851279] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:50.767 [2024-07-25 10:17:35.851309] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:50.767 [2024-07-25 10:17:35.851326] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:50.767 [2024-07-25 10:17:35.851340] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:50.767 [2024-07-25 10:17:35.851375] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:50.767 qpair failed and we were unable to recover it. 00:28:50.767 [2024-07-25 10:17:35.861161] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:50.767 [2024-07-25 10:17:35.861297] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:50.767 [2024-07-25 10:17:35.861326] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:50.767 [2024-07-25 10:17:35.861351] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:50.767 [2024-07-25 10:17:35.861367] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:50.767 [2024-07-25 10:17:35.861402] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:50.767 qpair failed and we were unable to recover it. 00:28:50.767 [2024-07-25 10:17:35.871228] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:50.767 [2024-07-25 10:17:35.871360] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:50.767 [2024-07-25 10:17:35.871389] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:50.767 [2024-07-25 10:17:35.871406] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:50.767 [2024-07-25 10:17:35.871421] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:50.767 [2024-07-25 10:17:35.871462] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:50.767 qpair failed and we were unable to recover it. 
00:28:50.767 [2024-07-25 10:17:35.881224] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:50.767 [2024-07-25 10:17:35.881351] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:50.767 [2024-07-25 10:17:35.881380] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:50.767 [2024-07-25 10:17:35.881396] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:50.767 [2024-07-25 10:17:35.881411] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:50.767 [2024-07-25 10:17:35.881456] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:50.767 qpair failed and we were unable to recover it. 00:28:50.767 [2024-07-25 10:17:35.891250] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:50.767 [2024-07-25 10:17:35.891378] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:50.767 [2024-07-25 10:17:35.891407] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:50.767 [2024-07-25 10:17:35.891425] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:50.767 [2024-07-25 10:17:35.891449] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:50.767 [2024-07-25 10:17:35.891484] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:50.767 qpair failed and we were unable to recover it. 00:28:50.767 [2024-07-25 10:17:35.901184] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:50.767 [2024-07-25 10:17:35.901310] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:50.767 [2024-07-25 10:17:35.901339] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:50.767 [2024-07-25 10:17:35.901356] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:50.767 [2024-07-25 10:17:35.901371] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:50.767 [2024-07-25 10:17:35.901407] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:50.767 qpair failed and we were unable to recover it. 
00:28:50.767 [2024-07-25 10:17:35.911246] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:50.767 [2024-07-25 10:17:35.911405] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:50.767 [2024-07-25 10:17:35.911441] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:50.767 [2024-07-25 10:17:35.911460] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:50.767 [2024-07-25 10:17:35.911475] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:50.767 [2024-07-25 10:17:35.911510] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:50.767 qpair failed and we were unable to recover it. 00:28:50.767 [2024-07-25 10:17:35.921278] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:50.767 [2024-07-25 10:17:35.921409] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:50.767 [2024-07-25 10:17:35.921447] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:50.767 [2024-07-25 10:17:35.921466] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:50.767 [2024-07-25 10:17:35.921481] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:50.767 [2024-07-25 10:17:35.921516] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:50.767 qpair failed and we were unable to recover it. 00:28:51.027 [2024-07-25 10:17:35.931292] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:51.027 [2024-07-25 10:17:35.931461] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:51.027 [2024-07-25 10:17:35.931492] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:51.027 [2024-07-25 10:17:35.931509] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:51.027 [2024-07-25 10:17:35.931524] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:51.027 [2024-07-25 10:17:35.931560] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:51.027 qpair failed and we were unable to recover it. 
00:28:51.027 [2024-07-25 10:17:35.941380] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:51.027 [2024-07-25 10:17:35.941515] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:51.027 [2024-07-25 10:17:35.941545] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:51.027 [2024-07-25 10:17:35.941562] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:51.027 [2024-07-25 10:17:35.941577] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:51.027 [2024-07-25 10:17:35.941613] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:51.027 qpair failed and we were unable to recover it. 00:28:51.027 [2024-07-25 10:17:35.951343] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:51.027 [2024-07-25 10:17:35.951498] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:51.027 [2024-07-25 10:17:35.951533] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:51.027 [2024-07-25 10:17:35.951551] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:51.027 [2024-07-25 10:17:35.951567] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:51.027 [2024-07-25 10:17:35.951601] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:51.027 qpair failed and we were unable to recover it. 00:28:51.027 [2024-07-25 10:17:35.961367] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:51.027 [2024-07-25 10:17:35.961499] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:51.027 [2024-07-25 10:17:35.961528] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:51.027 [2024-07-25 10:17:35.961545] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:51.027 [2024-07-25 10:17:35.961560] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:51.027 [2024-07-25 10:17:35.961595] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:51.027 qpair failed and we were unable to recover it. 
00:28:51.027 [2024-07-25 10:17:35.971400] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:51.027 [2024-07-25 10:17:35.971619] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:51.027 [2024-07-25 10:17:35.971648] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:51.027 [2024-07-25 10:17:35.971666] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:51.027 [2024-07-25 10:17:35.971681] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:51.027 [2024-07-25 10:17:35.971716] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:51.027 qpair failed and we were unable to recover it. 00:28:51.027 [2024-07-25 10:17:35.981501] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:51.027 [2024-07-25 10:17:35.981636] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:51.027 [2024-07-25 10:17:35.981664] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:51.027 [2024-07-25 10:17:35.981682] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:51.027 [2024-07-25 10:17:35.981696] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:51.027 [2024-07-25 10:17:35.981732] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:51.027 qpair failed and we were unable to recover it. 00:28:51.027 [2024-07-25 10:17:35.991515] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:51.027 [2024-07-25 10:17:35.991684] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:51.027 [2024-07-25 10:17:35.991713] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:51.027 [2024-07-25 10:17:35.991730] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:51.027 [2024-07-25 10:17:35.991745] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:51.027 [2024-07-25 10:17:35.991785] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:51.027 qpair failed and we were unable to recover it. 
00:28:51.027 [2024-07-25 10:17:36.001501] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:51.027 [2024-07-25 10:17:36.001634] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:51.027 [2024-07-25 10:17:36.001662] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:51.027 [2024-07-25 10:17:36.001680] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:51.027 [2024-07-25 10:17:36.001694] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:51.027 [2024-07-25 10:17:36.001729] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:51.027 qpair failed and we were unable to recover it. 00:28:51.027 [2024-07-25 10:17:36.011503] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:51.027 [2024-07-25 10:17:36.011629] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:51.027 [2024-07-25 10:17:36.011658] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:51.027 [2024-07-25 10:17:36.011675] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:51.027 [2024-07-25 10:17:36.011692] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:51.027 [2024-07-25 10:17:36.011726] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:51.027 qpair failed and we were unable to recover it. 00:28:51.027 [2024-07-25 10:17:36.021562] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:51.027 [2024-07-25 10:17:36.021705] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:51.027 [2024-07-25 10:17:36.021734] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:51.027 [2024-07-25 10:17:36.021751] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:51.027 [2024-07-25 10:17:36.021766] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:51.027 [2024-07-25 10:17:36.021802] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:51.027 qpair failed and we were unable to recover it. 
00:28:51.027 [2024-07-25 10:17:36.031615] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:51.027 [2024-07-25 10:17:36.031765] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:51.027 [2024-07-25 10:17:36.031799] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:51.027 [2024-07-25 10:17:36.031817] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:51.027 [2024-07-25 10:17:36.031832] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:51.027 [2024-07-25 10:17:36.031865] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:51.027 qpair failed and we were unable to recover it. 00:28:51.027 [2024-07-25 10:17:36.041647] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:51.027 [2024-07-25 10:17:36.041814] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:51.027 [2024-07-25 10:17:36.041849] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:51.027 [2024-07-25 10:17:36.041867] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:51.027 [2024-07-25 10:17:36.041883] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:51.027 [2024-07-25 10:17:36.041918] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:51.027 qpair failed and we were unable to recover it. 00:28:51.027 [2024-07-25 10:17:36.051693] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:51.027 [2024-07-25 10:17:36.051860] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:51.027 [2024-07-25 10:17:36.051888] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:51.027 [2024-07-25 10:17:36.051905] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:51.027 [2024-07-25 10:17:36.051920] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:51.027 [2024-07-25 10:17:36.051955] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:51.027 qpair failed and we were unable to recover it. 
00:28:51.027 [2024-07-25 10:17:36.061694] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:51.028 [2024-07-25 10:17:36.061819] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:51.028 [2024-07-25 10:17:36.061847] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:51.028 [2024-07-25 10:17:36.061864] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:51.028 [2024-07-25 10:17:36.061879] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:51.028 [2024-07-25 10:17:36.061914] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:51.028 qpair failed and we were unable to recover it. 00:28:51.028 [2024-07-25 10:17:36.071781] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:51.028 [2024-07-25 10:17:36.071927] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:51.028 [2024-07-25 10:17:36.071956] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:51.028 [2024-07-25 10:17:36.071973] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:51.028 [2024-07-25 10:17:36.071987] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:51.028 [2024-07-25 10:17:36.072021] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:51.028 qpair failed and we were unable to recover it. 00:28:51.028 [2024-07-25 10:17:36.081733] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:51.028 [2024-07-25 10:17:36.081860] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:51.028 [2024-07-25 10:17:36.081889] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:51.028 [2024-07-25 10:17:36.081906] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:51.028 [2024-07-25 10:17:36.081927] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:51.028 [2024-07-25 10:17:36.081962] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:51.028 qpair failed and we were unable to recover it. 
00:28:51.028 [2024-07-25 10:17:36.091757] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:51.028 [2024-07-25 10:17:36.091880] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:51.028 [2024-07-25 10:17:36.091908] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:51.028 [2024-07-25 10:17:36.091926] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:51.028 [2024-07-25 10:17:36.091941] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:51.028 [2024-07-25 10:17:36.091977] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:51.028 qpair failed and we were unable to recover it. 00:28:51.028 [2024-07-25 10:17:36.101847] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:51.028 [2024-07-25 10:17:36.101977] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:51.028 [2024-07-25 10:17:36.102006] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:51.028 [2024-07-25 10:17:36.102033] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:51.028 [2024-07-25 10:17:36.102048] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:51.028 [2024-07-25 10:17:36.102083] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:51.028 qpair failed and we were unable to recover it. 00:28:51.028 [2024-07-25 10:17:36.111851] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:51.028 [2024-07-25 10:17:36.111986] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:51.028 [2024-07-25 10:17:36.112015] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:51.028 [2024-07-25 10:17:36.112033] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:51.028 [2024-07-25 10:17:36.112047] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:51.028 [2024-07-25 10:17:36.112092] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:51.028 qpair failed and we were unable to recover it. 
00:28:51.028 [2024-07-25 10:17:36.121867] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:51.028 [2024-07-25 10:17:36.122008] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:51.028 [2024-07-25 10:17:36.122037] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:51.028 [2024-07-25 10:17:36.122054] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:51.028 [2024-07-25 10:17:36.122069] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:51.028 [2024-07-25 10:17:36.122113] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:51.028 qpair failed and we were unable to recover it. 00:28:51.028 [2024-07-25 10:17:36.131901] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:51.028 [2024-07-25 10:17:36.132061] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:51.028 [2024-07-25 10:17:36.132091] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:51.028 [2024-07-25 10:17:36.132108] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:51.028 [2024-07-25 10:17:36.132123] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:51.028 [2024-07-25 10:17:36.132158] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:51.028 qpair failed and we were unable to recover it. 00:28:51.028 [2024-07-25 10:17:36.141930] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:51.028 [2024-07-25 10:17:36.142058] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:51.028 [2024-07-25 10:17:36.142087] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:51.028 [2024-07-25 10:17:36.142104] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:51.028 [2024-07-25 10:17:36.142119] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:51.028 [2024-07-25 10:17:36.142162] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:51.028 qpair failed and we were unable to recover it. 
00:28:51.028 [2024-07-25 10:17:36.151931] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:51.028 [2024-07-25 10:17:36.152064] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:51.028 [2024-07-25 10:17:36.152093] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:51.028 [2024-07-25 10:17:36.152109] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:51.028 [2024-07-25 10:17:36.152124] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:51.028 [2024-07-25 10:17:36.152159] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:51.028 qpair failed and we were unable to recover it. 00:28:51.028 [2024-07-25 10:17:36.162020] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:51.028 [2024-07-25 10:17:36.162163] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:51.028 [2024-07-25 10:17:36.162192] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:51.028 [2024-07-25 10:17:36.162210] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:51.028 [2024-07-25 10:17:36.162224] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:51.028 [2024-07-25 10:17:36.162258] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:51.028 qpair failed and we were unable to recover it. 00:28:51.028 [2024-07-25 10:17:36.172054] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:51.028 [2024-07-25 10:17:36.172242] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:51.028 [2024-07-25 10:17:36.172281] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:51.028 [2024-07-25 10:17:36.172299] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:51.028 [2024-07-25 10:17:36.172320] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:51.028 [2024-07-25 10:17:36.172355] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:51.028 qpair failed and we were unable to recover it. 
00:28:51.028 [2024-07-25 10:17:36.182030] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:51.028 [2024-07-25 10:17:36.182170] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:51.028 [2024-07-25 10:17:36.182199] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:51.028 [2024-07-25 10:17:36.182217] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:51.028 [2024-07-25 10:17:36.182230] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:51.028 [2024-07-25 10:17:36.182275] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:51.028 qpair failed and we were unable to recover it. 00:28:51.287 [2024-07-25 10:17:36.192097] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:51.288 [2024-07-25 10:17:36.192259] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:51.288 [2024-07-25 10:17:36.192288] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:51.288 [2024-07-25 10:17:36.192315] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:51.288 [2024-07-25 10:17:36.192330] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:51.288 [2024-07-25 10:17:36.192364] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:51.288 qpair failed and we were unable to recover it. 00:28:51.288 [2024-07-25 10:17:36.202075] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:51.288 [2024-07-25 10:17:36.202201] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:51.288 [2024-07-25 10:17:36.202230] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:51.288 [2024-07-25 10:17:36.202248] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:51.288 [2024-07-25 10:17:36.202264] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:51.288 [2024-07-25 10:17:36.202298] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:51.288 qpair failed and we were unable to recover it. 
00:28:51.288 [2024-07-25 10:17:36.212115] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:51.288 [2024-07-25 10:17:36.212244] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:51.288 [2024-07-25 10:17:36.212274] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:51.288 [2024-07-25 10:17:36.212291] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:51.288 [2024-07-25 10:17:36.212305] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:51.288 [2024-07-25 10:17:36.212341] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:51.288 qpair failed and we were unable to recover it. 00:28:51.288 [2024-07-25 10:17:36.222160] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:51.288 [2024-07-25 10:17:36.222281] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:51.288 [2024-07-25 10:17:36.222311] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:51.288 [2024-07-25 10:17:36.222328] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:51.288 [2024-07-25 10:17:36.222342] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:51.288 [2024-07-25 10:17:36.222377] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:51.288 qpair failed and we were unable to recover it. 00:28:51.288 [2024-07-25 10:17:36.232203] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:51.288 [2024-07-25 10:17:36.232349] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:51.288 [2024-07-25 10:17:36.232378] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:51.288 [2024-07-25 10:17:36.232395] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:51.288 [2024-07-25 10:17:36.232410] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:51.288 [2024-07-25 10:17:36.232451] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:51.288 qpair failed and we were unable to recover it. 
00:28:51.288 [2024-07-25 10:17:36.242264] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:51.288 [2024-07-25 10:17:36.242408] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:51.288 [2024-07-25 10:17:36.242442] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:51.288 [2024-07-25 10:17:36.242460] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:51.288 [2024-07-25 10:17:36.242474] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:51.288 [2024-07-25 10:17:36.242508] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:51.288 qpair failed and we were unable to recover it. 00:28:51.288 [2024-07-25 10:17:36.252218] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:51.288 [2024-07-25 10:17:36.252342] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:51.288 [2024-07-25 10:17:36.252371] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:51.288 [2024-07-25 10:17:36.252388] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:51.288 [2024-07-25 10:17:36.252403] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:51.288 [2024-07-25 10:17:36.252446] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:51.288 qpair failed and we were unable to recover it. 00:28:51.288 [2024-07-25 10:17:36.262245] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:51.288 [2024-07-25 10:17:36.262369] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:51.288 [2024-07-25 10:17:36.262397] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:51.288 [2024-07-25 10:17:36.262420] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:51.288 [2024-07-25 10:17:36.262445] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:51.288 [2024-07-25 10:17:36.262481] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:51.288 qpair failed and we were unable to recover it. 
00:28:51.288 [2024-07-25 10:17:36.272336] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:51.288 [2024-07-25 10:17:36.272470] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:51.288 [2024-07-25 10:17:36.272500] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:51.288 [2024-07-25 10:17:36.272517] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:51.288 [2024-07-25 10:17:36.272532] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:51.288 [2024-07-25 10:17:36.272566] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:51.288 qpair failed and we were unable to recover it. 00:28:51.288 [2024-07-25 10:17:36.282349] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:51.288 [2024-07-25 10:17:36.282492] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:51.288 [2024-07-25 10:17:36.282523] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:51.288 [2024-07-25 10:17:36.282541] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:51.288 [2024-07-25 10:17:36.282556] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:51.288 [2024-07-25 10:17:36.282592] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:51.288 qpair failed and we were unable to recover it. 00:28:51.288 [2024-07-25 10:17:36.292328] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:51.288 [2024-07-25 10:17:36.292455] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:51.288 [2024-07-25 10:17:36.292485] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:51.288 [2024-07-25 10:17:36.292502] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:51.288 [2024-07-25 10:17:36.292517] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:51.288 [2024-07-25 10:17:36.292552] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:51.288 qpair failed and we were unable to recover it. 
00:28:51.288 [2024-07-25 10:17:36.302381] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:51.288 [2024-07-25 10:17:36.302547] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:51.288 [2024-07-25 10:17:36.302575] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:51.288 [2024-07-25 10:17:36.302593] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:51.288 [2024-07-25 10:17:36.302607] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:51.288 [2024-07-25 10:17:36.302642] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:51.288 qpair failed and we were unable to recover it. 00:28:51.288 [2024-07-25 10:17:36.312456] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:51.288 [2024-07-25 10:17:36.312598] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:51.288 [2024-07-25 10:17:36.312627] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:51.288 [2024-07-25 10:17:36.312644] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:51.288 [2024-07-25 10:17:36.312660] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:51.288 [2024-07-25 10:17:36.312701] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:51.288 qpair failed and we were unable to recover it. 00:28:51.288 [2024-07-25 10:17:36.322474] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:51.288 [2024-07-25 10:17:36.322603] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:51.288 [2024-07-25 10:17:36.322633] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:51.288 [2024-07-25 10:17:36.322651] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:51.288 [2024-07-25 10:17:36.322667] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:51.288 [2024-07-25 10:17:36.322704] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:51.288 qpair failed and we were unable to recover it. 
00:28:51.288 [2024-07-25 10:17:36.332459] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:51.289 [2024-07-25 10:17:36.332614] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:51.289 [2024-07-25 10:17:36.332643] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:51.289 [2024-07-25 10:17:36.332661] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:51.289 [2024-07-25 10:17:36.332677] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90
00:28:51.289 [2024-07-25 10:17:36.332712] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:51.289 qpair failed and we were unable to recover it.
00:28:51.289 [2024-07-25 10:17:36.342491] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:51.289 [2024-07-25 10:17:36.342639] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:51.289 [2024-07-25 10:17:36.342668] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:51.289 [2024-07-25 10:17:36.342686] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:51.289 [2024-07-25 10:17:36.342701] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90
00:28:51.289 [2024-07-25 10:17:36.342735] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:51.289 qpair failed and we were unable to recover it.
00:28:51.289 [2024-07-25 10:17:36.352578] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:51.289 [2024-07-25 10:17:36.352716] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:51.289 [2024-07-25 10:17:36.352751] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:51.289 [2024-07-25 10:17:36.352769] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:51.289 [2024-07-25 10:17:36.352785] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90
00:28:51.289 [2024-07-25 10:17:36.352832] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:51.289 qpair failed and we were unable to recover it.
00:28:51.289 [2024-07-25 10:17:36.362524] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:51.289 [2024-07-25 10:17:36.362656] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:51.289 [2024-07-25 10:17:36.362685] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:51.289 [2024-07-25 10:17:36.362702] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:51.289 [2024-07-25 10:17:36.362716] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90
00:28:51.289 [2024-07-25 10:17:36.362751] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:51.289 qpair failed and we were unable to recover it.
00:28:51.289 [2024-07-25 10:17:36.372671] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:51.289 [2024-07-25 10:17:36.372801] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:51.289 [2024-07-25 10:17:36.372829] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:51.289 [2024-07-25 10:17:36.372846] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:51.289 [2024-07-25 10:17:36.372862] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90
00:28:51.289 [2024-07-25 10:17:36.372897] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:51.289 qpair failed and we were unable to recover it.
00:28:51.289 [2024-07-25 10:17:36.382587] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:51.289 [2024-07-25 10:17:36.382770] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:51.289 [2024-07-25 10:17:36.382799] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:51.289 [2024-07-25 10:17:36.382816] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:51.289 [2024-07-25 10:17:36.382831] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90
00:28:51.289 [2024-07-25 10:17:36.382867] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:51.289 qpair failed and we were unable to recover it.
00:28:51.289 [2024-07-25 10:17:36.392740] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:51.289 [2024-07-25 10:17:36.392886] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:51.289 [2024-07-25 10:17:36.392915] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:51.289 [2024-07-25 10:17:36.392932] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:51.289 [2024-07-25 10:17:36.392946] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90
00:28:51.289 [2024-07-25 10:17:36.392987] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:51.289 qpair failed and we were unable to recover it.
00:28:51.289 [2024-07-25 10:17:36.402650] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:51.289 [2024-07-25 10:17:36.402784] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:51.289 [2024-07-25 10:17:36.402814] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:51.289 [2024-07-25 10:17:36.402831] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:51.289 [2024-07-25 10:17:36.402847] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90
00:28:51.289 [2024-07-25 10:17:36.402881] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:51.289 qpair failed and we were unable to recover it.
00:28:51.289 [2024-07-25 10:17:36.412694] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:51.289 [2024-07-25 10:17:36.412839] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:51.289 [2024-07-25 10:17:36.412868] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:51.289 [2024-07-25 10:17:36.412885] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:51.289 [2024-07-25 10:17:36.412901] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90
00:28:51.289 [2024-07-25 10:17:36.412935] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:51.289 qpair failed and we were unable to recover it.
00:28:51.289 [2024-07-25 10:17:36.422691] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:51.289 [2024-07-25 10:17:36.422839] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:51.289 [2024-07-25 10:17:36.422869] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:51.289 [2024-07-25 10:17:36.422886] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:51.289 [2024-07-25 10:17:36.422903] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90
00:28:51.289 [2024-07-25 10:17:36.422937] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:51.289 qpair failed and we were unable to recover it.
00:28:51.289 [2024-07-25 10:17:36.432724] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:51.289 [2024-07-25 10:17:36.432893] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:51.289 [2024-07-25 10:17:36.432922] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:51.289 [2024-07-25 10:17:36.432939] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:51.289 [2024-07-25 10:17:36.432954] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90
00:28:51.289 [2024-07-25 10:17:36.432988] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:51.289 qpair failed and we were unable to recover it.
00:28:51.289 [2024-07-25 10:17:36.442771] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:51.289 [2024-07-25 10:17:36.442906] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:51.289 [2024-07-25 10:17:36.442942] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:51.289 [2024-07-25 10:17:36.442960] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:51.289 [2024-07-25 10:17:36.442975] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90
00:28:51.289 [2024-07-25 10:17:36.443008] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:51.289 qpair failed and we were unable to recover it.
00:28:51.289 [2024-07-25 10:17:36.452901] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:51.289 [2024-07-25 10:17:36.453051] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:51.289 [2024-07-25 10:17:36.453080] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:51.548 [2024-07-25 10:17:36.453096] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:51.548 [2024-07-25 10:17:36.453110] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90
00:28:51.548 [2024-07-25 10:17:36.453144] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:51.548 qpair failed and we were unable to recover it.
00:28:51.548 [2024-07-25 10:17:36.462853] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:51.548 [2024-07-25 10:17:36.462973] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:51.548 [2024-07-25 10:17:36.463001] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:51.548 [2024-07-25 10:17:36.463017] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:51.548 [2024-07-25 10:17:36.463032] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90
00:28:51.549 [2024-07-25 10:17:36.463065] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:51.549 qpair failed and we were unable to recover it.
00:28:51.549 [2024-07-25 10:17:36.472892] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:51.549 [2024-07-25 10:17:36.473022] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:51.549 [2024-07-25 10:17:36.473050] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:51.549 [2024-07-25 10:17:36.473066] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:51.549 [2024-07-25 10:17:36.473081] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90
00:28:51.549 [2024-07-25 10:17:36.473114] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:51.549 qpair failed and we were unable to recover it.
00:28:51.549 [2024-07-25 10:17:36.482919] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:51.549 [2024-07-25 10:17:36.483101] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:51.549 [2024-07-25 10:17:36.483129] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:51.549 [2024-07-25 10:17:36.483146] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:51.549 [2024-07-25 10:17:36.483166] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90
00:28:51.549 [2024-07-25 10:17:36.483201] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:51.549 qpair failed and we were unable to recover it.
00:28:51.549 [2024-07-25 10:17:36.492991] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:51.549 [2024-07-25 10:17:36.493118] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:51.549 [2024-07-25 10:17:36.493147] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:51.549 [2024-07-25 10:17:36.493163] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:51.549 [2024-07-25 10:17:36.493177] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90
00:28:51.549 [2024-07-25 10:17:36.493210] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:51.549 qpair failed and we were unable to recover it.
00:28:51.549 [2024-07-25 10:17:36.502948] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:51.549 [2024-07-25 10:17:36.503081] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:51.549 [2024-07-25 10:17:36.503109] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:51.549 [2024-07-25 10:17:36.503125] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:51.549 [2024-07-25 10:17:36.503140] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90
00:28:51.549 [2024-07-25 10:17:36.503173] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:51.549 qpair failed and we were unable to recover it.
00:28:51.549 [2024-07-25 10:17:36.512962] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:51.549 [2024-07-25 10:17:36.513103] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:51.549 [2024-07-25 10:17:36.513131] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:51.549 [2024-07-25 10:17:36.513148] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:51.549 [2024-07-25 10:17:36.513163] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90
00:28:51.549 [2024-07-25 10:17:36.513197] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:51.549 qpair failed and we were unable to recover it.
00:28:51.549 [2024-07-25 10:17:36.522961] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:51.549 [2024-07-25 10:17:36.523115] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:51.549 [2024-07-25 10:17:36.523144] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:51.549 [2024-07-25 10:17:36.523160] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:51.549 [2024-07-25 10:17:36.523175] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90
00:28:51.549 [2024-07-25 10:17:36.523208] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:51.549 qpair failed and we were unable to recover it.
00:28:51.549 [2024-07-25 10:17:36.533056] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:51.549 [2024-07-25 10:17:36.533212] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:51.549 [2024-07-25 10:17:36.533251] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:51.549 [2024-07-25 10:17:36.533280] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:51.549 [2024-07-25 10:17:36.533305] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90
00:28:51.549 [2024-07-25 10:17:36.533357] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:51.549 qpair failed and we were unable to recover it.
00:28:51.549 [2024-07-25 10:17:36.543070] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:51.549 [2024-07-25 10:17:36.543199] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:51.549 [2024-07-25 10:17:36.543229] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:51.549 [2024-07-25 10:17:36.543245] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:51.549 [2024-07-25 10:17:36.543260] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90
00:28:51.549 [2024-07-25 10:17:36.543294] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:51.549 qpair failed and we were unable to recover it.
00:28:51.549 [2024-07-25 10:17:36.553071] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:51.549 [2024-07-25 10:17:36.553209] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:51.549 [2024-07-25 10:17:36.553239] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:51.549 [2024-07-25 10:17:36.553255] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:51.549 [2024-07-25 10:17:36.553270] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90
00:28:51.549 [2024-07-25 10:17:36.553303] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:51.549 qpair failed and we were unable to recover it.
00:28:51.549 [2024-07-25 10:17:36.563100] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:51.549 [2024-07-25 10:17:36.563224] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:51.549 [2024-07-25 10:17:36.563252] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:51.549 [2024-07-25 10:17:36.563269] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:51.549 [2024-07-25 10:17:36.563284] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90
00:28:51.549 [2024-07-25 10:17:36.563318] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:51.549 qpair failed and we were unable to recover it.
00:28:51.549 [2024-07-25 10:17:36.573131] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:51.549 [2024-07-25 10:17:36.573257] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:51.549 [2024-07-25 10:17:36.573285] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:51.549 [2024-07-25 10:17:36.573301] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:51.549 [2024-07-25 10:17:36.573321] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90
00:28:51.549 [2024-07-25 10:17:36.573357] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:51.549 qpair failed and we were unable to recover it.
00:28:51.549 [2024-07-25 10:17:36.583135] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:51.549 [2024-07-25 10:17:36.583303] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:51.549 [2024-07-25 10:17:36.583332] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:51.549 [2024-07-25 10:17:36.583349] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:51.549 [2024-07-25 10:17:36.583363] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90
00:28:51.549 [2024-07-25 10:17:36.583397] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:51.549 qpair failed and we were unable to recover it.
00:28:51.549 [2024-07-25 10:17:36.593186] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:51.549 [2024-07-25 10:17:36.593320] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:51.549 [2024-07-25 10:17:36.593349] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:51.549 [2024-07-25 10:17:36.593365] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:51.549 [2024-07-25 10:17:36.593379] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90
00:28:51.549 [2024-07-25 10:17:36.593412] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:51.549 qpair failed and we were unable to recover it.
00:28:51.549 [2024-07-25 10:17:36.603201] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:51.550 [2024-07-25 10:17:36.603341] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:51.550 [2024-07-25 10:17:36.603369] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:51.550 [2024-07-25 10:17:36.603385] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:51.550 [2024-07-25 10:17:36.603400] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90
00:28:51.550 [2024-07-25 10:17:36.603440] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:51.550 qpair failed and we were unable to recover it.
00:28:51.550 [2024-07-25 10:17:36.613226] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:51.550 [2024-07-25 10:17:36.613395] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:51.550 [2024-07-25 10:17:36.613424] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:51.550 [2024-07-25 10:17:36.613455] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:51.550 [2024-07-25 10:17:36.613470] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90
00:28:51.550 [2024-07-25 10:17:36.613505] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:51.550 qpair failed and we were unable to recover it.
00:28:51.550 [2024-07-25 10:17:36.623246] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:51.550 [2024-07-25 10:17:36.623370] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:51.550 [2024-07-25 10:17:36.623398] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:51.550 [2024-07-25 10:17:36.623414] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:51.550 [2024-07-25 10:17:36.623435] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90
00:28:51.550 [2024-07-25 10:17:36.623472] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:51.550 qpair failed and we were unable to recover it.
00:28:51.550 [2024-07-25 10:17:36.633288] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:51.550 [2024-07-25 10:17:36.633444] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:51.550 [2024-07-25 10:17:36.633473] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:51.550 [2024-07-25 10:17:36.633489] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:51.550 [2024-07-25 10:17:36.633503] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90
00:28:51.550 [2024-07-25 10:17:36.633536] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:51.550 qpair failed and we were unable to recover it.
00:28:51.550 [2024-07-25 10:17:36.643304] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:51.550 [2024-07-25 10:17:36.643445] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:51.550 [2024-07-25 10:17:36.643483] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:51.550 [2024-07-25 10:17:36.643501] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:51.550 [2024-07-25 10:17:36.643515] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90
00:28:51.550 [2024-07-25 10:17:36.643549] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:51.550 qpair failed and we were unable to recover it.
00:28:51.550 [2024-07-25 10:17:36.653344] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:51.550 [2024-07-25 10:17:36.653484] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:51.550 [2024-07-25 10:17:36.653512] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:51.550 [2024-07-25 10:17:36.653529] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:51.550 [2024-07-25 10:17:36.653543] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90
00:28:51.550 [2024-07-25 10:17:36.653577] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:51.550 qpair failed and we were unable to recover it.
00:28:51.550 [2024-07-25 10:17:36.663396] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:51.550 [2024-07-25 10:17:36.663530] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:51.550 [2024-07-25 10:17:36.663559] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:51.550 [2024-07-25 10:17:36.663583] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:51.550 [2024-07-25 10:17:36.663600] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90
00:28:51.550 [2024-07-25 10:17:36.663635] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:51.550 qpair failed and we were unable to recover it.
00:28:51.550 [2024-07-25 10:17:36.673464] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:51.550 [2024-07-25 10:17:36.673621] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:51.550 [2024-07-25 10:17:36.673649] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:51.550 [2024-07-25 10:17:36.673665] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:51.550 [2024-07-25 10:17:36.673682] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90
00:28:51.550 [2024-07-25 10:17:36.673717] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:51.550 qpair failed and we were unable to recover it.
00:28:51.550 [2024-07-25 10:17:36.683420] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:51.550 [2024-07-25 10:17:36.683561] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:51.550 [2024-07-25 10:17:36.683589] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:51.550 [2024-07-25 10:17:36.683606] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:51.550 [2024-07-25 10:17:36.683621] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90
00:28:51.550 [2024-07-25 10:17:36.683654] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:51.550 qpair failed and we were unable to recover it.
00:28:51.550 [2024-07-25 10:17:36.693440] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:51.550 [2024-07-25 10:17:36.693563] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:51.550 [2024-07-25 10:17:36.693593] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:51.550 [2024-07-25 10:17:36.693609] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:51.550 [2024-07-25 10:17:36.693624] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90
00:28:51.550 [2024-07-25 10:17:36.693659] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:51.550 qpair failed and we were unable to recover it.
00:28:51.550 [2024-07-25 10:17:36.703465] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:51.550 [2024-07-25 10:17:36.703627] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:51.550 [2024-07-25 10:17:36.703655] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:51.550 [2024-07-25 10:17:36.703672] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:51.550 [2024-07-25 10:17:36.703687] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90
00:28:51.550 [2024-07-25 10:17:36.703721] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:51.550 qpair failed and we were unable to recover it.
00:28:51.550 [2024-07-25 10:17:36.713532] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:51.550 [2024-07-25 10:17:36.713674] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:51.550 [2024-07-25 10:17:36.713702] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:51.550 [2024-07-25 10:17:36.713718] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:51.550 [2024-07-25 10:17:36.713732] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90
00:28:51.550 [2024-07-25 10:17:36.713764] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:51.550 qpair failed and we were unable to recover it.
00:28:51.809 [2024-07-25 10:17:36.723587] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:51.809 [2024-07-25 10:17:36.723748] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:51.809 [2024-07-25 10:17:36.723777] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:51.809 [2024-07-25 10:17:36.723793] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:51.809 [2024-07-25 10:17:36.723807] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90
00:28:51.809 [2024-07-25 10:17:36.723840] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:51.809 qpair failed and we were unable to recover it.
00:28:51.809 [2024-07-25 10:17:36.733594] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:51.809 [2024-07-25 10:17:36.733722] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:51.809 [2024-07-25 10:17:36.733750] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:51.809 [2024-07-25 10:17:36.733766] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:51.809 [2024-07-25 10:17:36.733781] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90
00:28:51.809 [2024-07-25 10:17:36.733814] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:51.809 qpair failed and we were unable to recover it.
00:28:51.809 [2024-07-25 10:17:36.743678] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:51.809 [2024-07-25 10:17:36.743810] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:51.809 [2024-07-25 10:17:36.743839] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:51.809 [2024-07-25 10:17:36.743855] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:51.809 [2024-07-25 10:17:36.743876] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90
00:28:51.809 [2024-07-25 10:17:36.743909] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:51.809 qpair failed and we were unable to recover it.
00:28:51.810 [2024-07-25 10:17:36.753649] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:51.810 [2024-07-25 10:17:36.753783] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:51.810 [2024-07-25 10:17:36.753818] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:51.810 [2024-07-25 10:17:36.753835] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:51.810 [2024-07-25 10:17:36.753849] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90
00:28:51.810 [2024-07-25 10:17:36.753882] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:51.810 qpair failed and we were unable to recover it.
00:28:51.810 [2024-07-25 10:17:36.763734] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:51.810 [2024-07-25 10:17:36.763864] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:51.810 [2024-07-25 10:17:36.763893] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:51.810 [2024-07-25 10:17:36.763909] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:51.810 [2024-07-25 10:17:36.763923] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90
00:28:51.810 [2024-07-25 10:17:36.763956] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:51.810 qpair failed and we were unable to recover it.
00:28:51.810 [2024-07-25 10:17:36.773748] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:51.810 [2024-07-25 10:17:36.773879] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:51.810 [2024-07-25 10:17:36.773907] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:51.810 [2024-07-25 10:17:36.773924] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:51.810 [2024-07-25 10:17:36.773938] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90
00:28:51.810 [2024-07-25 10:17:36.773972] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:51.810 qpair failed and we were unable to recover it.
00:28:51.810 [2024-07-25 10:17:36.783713] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:51.810 [2024-07-25 10:17:36.783847] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:51.810 [2024-07-25 10:17:36.783876] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:51.810 [2024-07-25 10:17:36.783892] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:51.810 [2024-07-25 10:17:36.783907] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90
00:28:51.810 [2024-07-25 10:17:36.783941] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:51.810 qpair failed and we were unable to recover it.
00:28:51.810 [2024-07-25 10:17:36.793785] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:51.810 [2024-07-25 10:17:36.793922] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:51.810 [2024-07-25 10:17:36.793950] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:51.810 [2024-07-25 10:17:36.793966] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:51.810 [2024-07-25 10:17:36.793980] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90
00:28:51.810 [2024-07-25 10:17:36.794020] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:51.810 qpair failed and we were unable to recover it.
00:28:51.810 [2024-07-25 10:17:36.803844] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:51.810 [2024-07-25 10:17:36.803972] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:51.810 [2024-07-25 10:17:36.804000] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:51.810 [2024-07-25 10:17:36.804016] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:51.810 [2024-07-25 10:17:36.804030] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90
00:28:51.810 [2024-07-25 10:17:36.804062] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:51.810 qpair failed and we were unable to recover it.
00:28:51.810 [2024-07-25 10:17:36.813802] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:51.810 [2024-07-25 10:17:36.813939] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:51.810 [2024-07-25 10:17:36.813968] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:51.810 [2024-07-25 10:17:36.813984] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:51.810 [2024-07-25 10:17:36.813998] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90
00:28:51.810 [2024-07-25 10:17:36.814032] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:51.810 qpair failed and we were unable to recover it.
00:28:51.810 [2024-07-25 10:17:36.823819] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:51.810 [2024-07-25 10:17:36.823997] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:51.810 [2024-07-25 10:17:36.824026] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:51.810 [2024-07-25 10:17:36.824043] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:51.810 [2024-07-25 10:17:36.824057] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90
00:28:51.810 [2024-07-25 10:17:36.824090] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:51.810 qpair failed and we were unable to recover it.
00:28:51.810 [2024-07-25 10:17:36.833924] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:51.810 [2024-07-25 10:17:36.834105] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:51.810 [2024-07-25 10:17:36.834133] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:51.810 [2024-07-25 10:17:36.834149] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:51.810 [2024-07-25 10:17:36.834164] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90
00:28:51.810 [2024-07-25 10:17:36.834197] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:51.810 qpair failed and we were unable to recover it.
00:28:51.810 [2024-07-25 10:17:36.843888] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:51.810 [2024-07-25 10:17:36.844021] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:51.810 [2024-07-25 10:17:36.844055] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:51.810 [2024-07-25 10:17:36.844073] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:51.810 [2024-07-25 10:17:36.844087] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90
00:28:51.810 [2024-07-25 10:17:36.844120] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:51.810 qpair failed and we were unable to recover it.
00:28:51.810 [2024-07-25 10:17:36.853904] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:51.810 [2024-07-25 10:17:36.854045] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:51.810 [2024-07-25 10:17:36.854073] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:51.810 [2024-07-25 10:17:36.854089] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:51.810 [2024-07-25 10:17:36.854104] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90
00:28:51.810 [2024-07-25 10:17:36.854138] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:51.810 qpair failed and we were unable to recover it.
00:28:51.810 [2024-07-25 10:17:36.863931] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:51.810 [2024-07-25 10:17:36.864063] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:51.810 [2024-07-25 10:17:36.864092] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:51.810 [2024-07-25 10:17:36.864108] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:51.810 [2024-07-25 10:17:36.864123] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90
00:28:51.810 [2024-07-25 10:17:36.864155] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:51.810 qpair failed and we were unable to recover it.
00:28:51.810 [2024-07-25 10:17:36.873977] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:51.810 [2024-07-25 10:17:36.874134] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:51.810 [2024-07-25 10:17:36.874162] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:51.810 [2024-07-25 10:17:36.874178] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:51.810 [2024-07-25 10:17:36.874192] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90
00:28:51.810 [2024-07-25 10:17:36.874226] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:51.810 qpair failed and we were unable to recover it.
00:28:51.810 [2024-07-25 10:17:36.884001] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:51.811 [2024-07-25 10:17:36.884164] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:51.811 [2024-07-25 10:17:36.884192] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:51.811 [2024-07-25 10:17:36.884209] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:51.811 [2024-07-25 10:17:36.884223] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90
00:28:51.811 [2024-07-25 10:17:36.884262] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:51.811 qpair failed and we were unable to recover it.
00:28:51.811 [2024-07-25 10:17:36.894114] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:51.811 [2024-07-25 10:17:36.894240] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:51.811 [2024-07-25 10:17:36.894268] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:51.811 [2024-07-25 10:17:36.894284] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:51.811 [2024-07-25 10:17:36.894299] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90
00:28:51.811 [2024-07-25 10:17:36.894332] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:51.811 qpair failed and we were unable to recover it.
00:28:51.811 [2024-07-25 10:17:36.904039] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:51.811 [2024-07-25 10:17:36.904165] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:51.811 [2024-07-25 10:17:36.904193] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:51.811 [2024-07-25 10:17:36.904209] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:51.811 [2024-07-25 10:17:36.904224] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:51.811 [2024-07-25 10:17:36.904257] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:51.811 qpair failed and we were unable to recover it. 00:28:51.811 [2024-07-25 10:17:36.914112] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:51.811 [2024-07-25 10:17:36.914276] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:51.811 [2024-07-25 10:17:36.914304] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:51.811 [2024-07-25 10:17:36.914321] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:51.811 [2024-07-25 10:17:36.914335] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:51.811 [2024-07-25 10:17:36.914369] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:51.811 qpair failed and we were unable to recover it. 00:28:51.811 [2024-07-25 10:17:36.924101] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:51.811 [2024-07-25 10:17:36.924246] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:51.811 [2024-07-25 10:17:36.924274] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:51.811 [2024-07-25 10:17:36.924291] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:51.811 [2024-07-25 10:17:36.924306] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:51.811 [2024-07-25 10:17:36.924338] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:51.811 qpair failed and we were unable to recover it. 
00:28:51.811 [2024-07-25 10:17:36.934169] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:51.811 [2024-07-25 10:17:36.934303] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:51.811 [2024-07-25 10:17:36.934331] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:51.811 [2024-07-25 10:17:36.934347] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:51.811 [2024-07-25 10:17:36.934362] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:51.811 [2024-07-25 10:17:36.934394] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:51.811 qpair failed and we were unable to recover it. 00:28:51.811 [2024-07-25 10:17:36.944188] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:51.811 [2024-07-25 10:17:36.944353] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:51.811 [2024-07-25 10:17:36.944382] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:51.811 [2024-07-25 10:17:36.944399] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:51.811 [2024-07-25 10:17:36.944413] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:51.811 [2024-07-25 10:17:36.944454] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:51.811 qpair failed and we were unable to recover it. 00:28:51.811 [2024-07-25 10:17:36.954210] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:51.811 [2024-07-25 10:17:36.954340] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:51.811 [2024-07-25 10:17:36.954368] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:51.811 [2024-07-25 10:17:36.954384] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:51.811 [2024-07-25 10:17:36.954399] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:51.811 [2024-07-25 10:17:36.954438] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:51.811 qpair failed and we were unable to recover it. 
00:28:51.811 [2024-07-25 10:17:36.964259] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:51.811 [2024-07-25 10:17:36.964448] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:51.811 [2024-07-25 10:17:36.964477] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:51.811 [2024-07-25 10:17:36.964493] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:51.811 [2024-07-25 10:17:36.964508] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:51.811 [2024-07-25 10:17:36.964541] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:51.811 qpair failed and we were unable to recover it. 00:28:51.811 [2024-07-25 10:17:36.974298] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:51.811 [2024-07-25 10:17:36.974485] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:51.811 [2024-07-25 10:17:36.974513] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:51.811 [2024-07-25 10:17:36.974529] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:51.811 [2024-07-25 10:17:36.974548] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:51.811 [2024-07-25 10:17:36.974582] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:51.811 qpair failed and we were unable to recover it. 00:28:52.070 [2024-07-25 10:17:36.984285] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:52.070 [2024-07-25 10:17:36.984408] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:52.070 [2024-07-25 10:17:36.984442] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:52.070 [2024-07-25 10:17:36.984460] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:52.070 [2024-07-25 10:17:36.984475] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:52.070 [2024-07-25 10:17:36.984508] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:52.070 qpair failed and we were unable to recover it. 
00:28:52.070 [2024-07-25 10:17:36.994314] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:52.070 [2024-07-25 10:17:36.994453] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:52.070 [2024-07-25 10:17:36.994481] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:52.070 [2024-07-25 10:17:36.994497] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:52.070 [2024-07-25 10:17:36.994512] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:52.070 [2024-07-25 10:17:36.994546] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:52.070 qpair failed and we were unable to recover it. 00:28:52.070 [2024-07-25 10:17:37.004363] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:52.070 [2024-07-25 10:17:37.004494] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:52.070 [2024-07-25 10:17:37.004523] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:52.070 [2024-07-25 10:17:37.004539] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:52.070 [2024-07-25 10:17:37.004554] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:52.070 [2024-07-25 10:17:37.004588] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:52.070 qpair failed and we were unable to recover it. 00:28:52.070 [2024-07-25 10:17:37.014358] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:52.070 [2024-07-25 10:17:37.014501] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:52.070 [2024-07-25 10:17:37.014530] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:52.070 [2024-07-25 10:17:37.014546] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:52.070 [2024-07-25 10:17:37.014560] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:52.070 [2024-07-25 10:17:37.014594] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:52.070 qpair failed and we were unable to recover it. 
00:28:52.070 [2024-07-25 10:17:37.024410] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:52.070 [2024-07-25 10:17:37.024552] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:52.070 [2024-07-25 10:17:37.024581] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:52.071 [2024-07-25 10:17:37.024597] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:52.071 [2024-07-25 10:17:37.024612] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:52.071 [2024-07-25 10:17:37.024645] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:52.071 qpair failed and we were unable to recover it. 00:28:52.071 [2024-07-25 10:17:37.034436] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:52.071 [2024-07-25 10:17:37.034586] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:52.071 [2024-07-25 10:17:37.034614] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:52.071 [2024-07-25 10:17:37.034630] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:52.071 [2024-07-25 10:17:37.034645] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:52.071 [2024-07-25 10:17:37.034679] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:52.071 qpair failed and we were unable to recover it. 00:28:52.071 [2024-07-25 10:17:37.044445] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:52.071 [2024-07-25 10:17:37.044583] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:52.071 [2024-07-25 10:17:37.044611] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:52.071 [2024-07-25 10:17:37.044628] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:52.071 [2024-07-25 10:17:37.044643] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:52.071 [2024-07-25 10:17:37.044677] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:52.071 qpair failed and we were unable to recover it. 
00:28:52.071 [2024-07-25 10:17:37.054476] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:52.071 [2024-07-25 10:17:37.054601] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:52.071 [2024-07-25 10:17:37.054630] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:52.071 [2024-07-25 10:17:37.054646] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:52.071 [2024-07-25 10:17:37.054660] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:52.071 [2024-07-25 10:17:37.054693] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:52.071 qpair failed and we were unable to recover it. 00:28:52.071 [2024-07-25 10:17:37.064496] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:52.071 [2024-07-25 10:17:37.064631] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:52.071 [2024-07-25 10:17:37.064659] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:52.071 [2024-07-25 10:17:37.064682] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:52.071 [2024-07-25 10:17:37.064698] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:52.071 [2024-07-25 10:17:37.064733] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:52.071 qpair failed and we were unable to recover it. 00:28:52.071 [2024-07-25 10:17:37.074553] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:52.071 [2024-07-25 10:17:37.074685] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:52.071 [2024-07-25 10:17:37.074712] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:52.071 [2024-07-25 10:17:37.074728] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:52.071 [2024-07-25 10:17:37.074742] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:52.071 [2024-07-25 10:17:37.074776] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:52.071 qpair failed and we were unable to recover it. 
00:28:52.071 [2024-07-25 10:17:37.084584] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:52.071 [2024-07-25 10:17:37.084719] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:52.071 [2024-07-25 10:17:37.084747] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:52.071 [2024-07-25 10:17:37.084763] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:52.071 [2024-07-25 10:17:37.084778] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:52.071 [2024-07-25 10:17:37.084811] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:52.071 qpair failed and we were unable to recover it. 00:28:52.071 [2024-07-25 10:17:37.094593] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:52.071 [2024-07-25 10:17:37.094727] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:52.071 [2024-07-25 10:17:37.094755] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:52.071 [2024-07-25 10:17:37.094771] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:52.071 [2024-07-25 10:17:37.094786] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:52.071 [2024-07-25 10:17:37.094819] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:52.071 qpair failed and we were unable to recover it. 00:28:52.071 [2024-07-25 10:17:37.104628] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:52.071 [2024-07-25 10:17:37.104756] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:52.071 [2024-07-25 10:17:37.104784] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:52.071 [2024-07-25 10:17:37.104800] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:52.071 [2024-07-25 10:17:37.104814] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:52.071 [2024-07-25 10:17:37.104848] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:52.071 qpair failed and we were unable to recover it. 
00:28:52.071 [2024-07-25 10:17:37.114700] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:52.071 [2024-07-25 10:17:37.114835] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:52.071 [2024-07-25 10:17:37.114864] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:52.071 [2024-07-25 10:17:37.114880] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:52.071 [2024-07-25 10:17:37.114895] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:52.071 [2024-07-25 10:17:37.114928] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:52.071 qpair failed and we were unable to recover it. 00:28:52.071 [2024-07-25 10:17:37.124690] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:52.071 [2024-07-25 10:17:37.124819] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:52.071 [2024-07-25 10:17:37.124848] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:52.071 [2024-07-25 10:17:37.124864] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:52.071 [2024-07-25 10:17:37.124879] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:52.071 [2024-07-25 10:17:37.124911] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:52.071 qpair failed and we were unable to recover it. 00:28:52.071 [2024-07-25 10:17:37.134742] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:52.071 [2024-07-25 10:17:37.134872] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:52.071 [2024-07-25 10:17:37.134900] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:52.071 [2024-07-25 10:17:37.134915] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:52.071 [2024-07-25 10:17:37.134930] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:52.071 [2024-07-25 10:17:37.134963] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:52.071 qpair failed and we were unable to recover it. 
00:28:52.071 [2024-07-25 10:17:37.144851] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:52.072 [2024-07-25 10:17:37.144978] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:52.072 [2024-07-25 10:17:37.145006] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:52.072 [2024-07-25 10:17:37.145023] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:52.072 [2024-07-25 10:17:37.145037] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:52.072 [2024-07-25 10:17:37.145070] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:52.072 qpair failed and we were unable to recover it. 00:28:52.072 [2024-07-25 10:17:37.154832] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:52.072 [2024-07-25 10:17:37.154981] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:52.072 [2024-07-25 10:17:37.155009] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:52.072 [2024-07-25 10:17:37.155032] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:52.072 [2024-07-25 10:17:37.155047] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:52.072 [2024-07-25 10:17:37.155080] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:52.072 qpair failed and we were unable to recover it. 00:28:52.072 [2024-07-25 10:17:37.164814] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:52.072 [2024-07-25 10:17:37.164975] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:52.072 [2024-07-25 10:17:37.165003] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:52.072 [2024-07-25 10:17:37.165019] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:52.072 [2024-07-25 10:17:37.165034] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:52.072 [2024-07-25 10:17:37.165066] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:52.072 qpair failed and we were unable to recover it. 
00:28:52.072 [2024-07-25 10:17:37.174812] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:52.072 [2024-07-25 10:17:37.174941] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:52.072 [2024-07-25 10:17:37.174970] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:52.072 [2024-07-25 10:17:37.174986] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:52.072 [2024-07-25 10:17:37.175000] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:52.072 [2024-07-25 10:17:37.175034] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:52.072 qpair failed and we were unable to recover it. 00:28:52.072 [2024-07-25 10:17:37.184882] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:52.072 [2024-07-25 10:17:37.185011] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:52.072 [2024-07-25 10:17:37.185040] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:52.072 [2024-07-25 10:17:37.185056] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:52.072 [2024-07-25 10:17:37.185069] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:52.072 [2024-07-25 10:17:37.185102] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:52.072 qpair failed and we were unable to recover it. 00:28:52.072 [2024-07-25 10:17:37.194915] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:52.072 [2024-07-25 10:17:37.195051] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:52.072 [2024-07-25 10:17:37.195078] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:52.072 [2024-07-25 10:17:37.195095] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:52.072 [2024-07-25 10:17:37.195109] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:52.072 [2024-07-25 10:17:37.195144] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:52.072 qpair failed and we were unable to recover it. 
00:28:52.072 [2024-07-25 10:17:37.204988] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:52.072 [2024-07-25 10:17:37.205127] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:52.072 [2024-07-25 10:17:37.205155] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:52.072 [2024-07-25 10:17:37.205171] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:52.072 [2024-07-25 10:17:37.205185] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:52.072 [2024-07-25 10:17:37.205220] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:52.072 qpair failed and we were unable to recover it. 00:28:52.072 [2024-07-25 10:17:37.214953] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:52.072 [2024-07-25 10:17:37.215082] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:52.072 [2024-07-25 10:17:37.215110] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:52.072 [2024-07-25 10:17:37.215126] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:52.072 [2024-07-25 10:17:37.215141] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:52.072 [2024-07-25 10:17:37.215175] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:52.072 qpair failed and we were unable to recover it. 00:28:52.072 [2024-07-25 10:17:37.224967] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:52.072 [2024-07-25 10:17:37.225096] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:52.072 [2024-07-25 10:17:37.225124] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:52.072 [2024-07-25 10:17:37.225141] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:52.072 [2024-07-25 10:17:37.225156] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:52.072 [2024-07-25 10:17:37.225189] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:52.072 qpair failed and we were unable to recover it. 
00:28:52.072 [2024-07-25 10:17:37.235028] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:52.072 [2024-07-25 10:17:37.235158] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:52.072 [2024-07-25 10:17:37.235186] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:52.072 [2024-07-25 10:17:37.235202] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:52.072 [2024-07-25 10:17:37.235217] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:52.072 [2024-07-25 10:17:37.235251] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:52.072 qpair failed and we were unable to recover it. 00:28:52.331 [2024-07-25 10:17:37.245071] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:52.331 [2024-07-25 10:17:37.245201] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:52.332 [2024-07-25 10:17:37.245237] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:52.332 [2024-07-25 10:17:37.245262] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:52.332 [2024-07-25 10:17:37.245275] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:52.332 [2024-07-25 10:17:37.245308] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:52.332 qpair failed and we were unable to recover it. 00:28:52.332 [2024-07-25 10:17:37.255088] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:52.332 [2024-07-25 10:17:37.255221] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:52.332 [2024-07-25 10:17:37.255250] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:52.332 [2024-07-25 10:17:37.255266] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:52.332 [2024-07-25 10:17:37.255281] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:52.332 [2024-07-25 10:17:37.255315] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:52.332 qpair failed and we were unable to recover it. 
00:28:52.332 [2024-07-25 10:17:37.265114] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:52.332 [2024-07-25 10:17:37.265237] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:52.332 [2024-07-25 10:17:37.265265] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:52.332 [2024-07-25 10:17:37.265281] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:52.332 [2024-07-25 10:17:37.265295] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:52.332 [2024-07-25 10:17:37.265327] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:52.332 qpair failed and we were unable to recover it. 00:28:52.332 [2024-07-25 10:17:37.275231] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:52.332 [2024-07-25 10:17:37.275365] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:52.332 [2024-07-25 10:17:37.275393] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:52.332 [2024-07-25 10:17:37.275410] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:52.332 [2024-07-25 10:17:37.275424] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:52.332 [2024-07-25 10:17:37.275465] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:52.332 qpair failed and we were unable to recover it. 00:28:52.332 [2024-07-25 10:17:37.285171] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:52.332 [2024-07-25 10:17:37.285366] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:52.332 [2024-07-25 10:17:37.285395] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:52.332 [2024-07-25 10:17:37.285411] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:52.332 [2024-07-25 10:17:37.285425] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:52.332 [2024-07-25 10:17:37.285474] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:52.332 qpair failed and we were unable to recover it. 
00:28:52.332 [2024-07-25 10:17:37.295232] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:52.332 [2024-07-25 10:17:37.295356] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:52.332 [2024-07-25 10:17:37.295383] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:52.332 [2024-07-25 10:17:37.295400] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:52.332 [2024-07-25 10:17:37.295414] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:52.332 [2024-07-25 10:17:37.295454] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:52.332 qpair failed and we were unable to recover it. 00:28:52.332 [2024-07-25 10:17:37.305236] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:52.332 [2024-07-25 10:17:37.305410] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:52.332 [2024-07-25 10:17:37.305446] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:52.332 [2024-07-25 10:17:37.305463] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:52.332 [2024-07-25 10:17:37.305478] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:52.332 [2024-07-25 10:17:37.305512] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:52.332 qpair failed and we were unable to recover it. 00:28:52.332 [2024-07-25 10:17:37.315263] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:52.332 [2024-07-25 10:17:37.315399] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:52.332 [2024-07-25 10:17:37.315435] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:52.332 [2024-07-25 10:17:37.315454] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:52.332 [2024-07-25 10:17:37.315469] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:52.332 [2024-07-25 10:17:37.315504] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:52.332 qpair failed and we were unable to recover it. 
00:28:52.332 [2024-07-25 10:17:37.325291] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:52.332 [2024-07-25 10:17:37.325440] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:52.332 [2024-07-25 10:17:37.325469] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:52.332 [2024-07-25 10:17:37.325486] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:52.332 [2024-07-25 10:17:37.325501] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:52.332 [2024-07-25 10:17:37.325534] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:52.332 qpair failed and we were unable to recover it. 00:28:52.332 [2024-07-25 10:17:37.335314] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:52.332 [2024-07-25 10:17:37.335448] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:52.332 [2024-07-25 10:17:37.335483] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:52.332 [2024-07-25 10:17:37.335500] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:52.332 [2024-07-25 10:17:37.335515] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:52.332 [2024-07-25 10:17:37.335549] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:52.332 qpair failed and we were unable to recover it. 00:28:52.332 [2024-07-25 10:17:37.345434] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:52.332 [2024-07-25 10:17:37.345568] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:52.332 [2024-07-25 10:17:37.345597] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:52.332 [2024-07-25 10:17:37.345613] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:52.332 [2024-07-25 10:17:37.345628] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:52.332 [2024-07-25 10:17:37.345661] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:52.332 qpair failed and we were unable to recover it. 
00:28:52.332 [2024-07-25 10:17:37.355396] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:52.332 [2024-07-25 10:17:37.355547] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:52.332 [2024-07-25 10:17:37.355576] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:52.332 [2024-07-25 10:17:37.355592] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:52.332 [2024-07-25 10:17:37.355607] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:52.332 [2024-07-25 10:17:37.355641] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:52.332 qpair failed and we were unable to recover it. 00:28:52.332 [2024-07-25 10:17:37.365404] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:52.333 [2024-07-25 10:17:37.365590] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:52.333 [2024-07-25 10:17:37.365618] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:52.333 [2024-07-25 10:17:37.365634] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:52.333 [2024-07-25 10:17:37.365648] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:52.333 [2024-07-25 10:17:37.365682] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:52.333 qpair failed and we were unable to recover it. 00:28:52.333 [2024-07-25 10:17:37.375450] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:52.333 [2024-07-25 10:17:37.375580] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:52.333 [2024-07-25 10:17:37.375608] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:52.333 [2024-07-25 10:17:37.375624] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:52.333 [2024-07-25 10:17:37.375645] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:52.333 [2024-07-25 10:17:37.375682] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:52.333 qpair failed and we were unable to recover it. 
00:28:52.333 [2024-07-25 10:17:37.385479] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:52.333 [2024-07-25 10:17:37.385615] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:52.333 [2024-07-25 10:17:37.385642] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:52.333 [2024-07-25 10:17:37.385659] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:52.333 [2024-07-25 10:17:37.385673] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:52.333 [2024-07-25 10:17:37.385707] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:52.333 qpair failed and we were unable to recover it. 00:28:52.333 [2024-07-25 10:17:37.395527] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:52.333 [2024-07-25 10:17:37.395669] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:52.333 [2024-07-25 10:17:37.395696] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:52.333 [2024-07-25 10:17:37.395712] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:52.333 [2024-07-25 10:17:37.395727] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:52.333 [2024-07-25 10:17:37.395760] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:52.333 qpair failed and we were unable to recover it. 00:28:52.333 [2024-07-25 10:17:37.405523] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:52.333 [2024-07-25 10:17:37.405655] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:52.333 [2024-07-25 10:17:37.405682] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:52.333 [2024-07-25 10:17:37.405699] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:52.333 [2024-07-25 10:17:37.405713] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:52.333 [2024-07-25 10:17:37.405747] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:52.333 qpair failed and we were unable to recover it. 
00:28:52.333 [2024-07-25 10:17:37.415573] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:52.333 [2024-07-25 10:17:37.415707] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:52.333 [2024-07-25 10:17:37.415736] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:52.333 [2024-07-25 10:17:37.415752] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:52.333 [2024-07-25 10:17:37.415767] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:52.333 [2024-07-25 10:17:37.415800] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:52.333 qpair failed and we were unable to recover it. 00:28:52.333 [2024-07-25 10:17:37.425751] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:52.333 [2024-07-25 10:17:37.425907] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:52.333 [2024-07-25 10:17:37.425935] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:52.333 [2024-07-25 10:17:37.425952] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:52.333 [2024-07-25 10:17:37.425966] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:52.333 [2024-07-25 10:17:37.425999] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:52.333 qpair failed and we were unable to recover it. 00:28:52.333 [2024-07-25 10:17:37.435786] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:52.333 [2024-07-25 10:17:37.435932] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:52.333 [2024-07-25 10:17:37.435960] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:52.333 [2024-07-25 10:17:37.435976] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:52.333 [2024-07-25 10:17:37.435990] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:52.333 [2024-07-25 10:17:37.436025] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:52.333 qpair failed and we were unable to recover it. 
00:28:52.333 [2024-07-25 10:17:37.445656] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:52.333 [2024-07-25 10:17:37.445788] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:52.333 [2024-07-25 10:17:37.445817] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:52.333 [2024-07-25 10:17:37.445833] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:52.333 [2024-07-25 10:17:37.445847] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:52.333 [2024-07-25 10:17:37.445880] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:52.333 qpair failed and we were unable to recover it. 00:28:52.333 [2024-07-25 10:17:37.455711] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:52.333 [2024-07-25 10:17:37.455843] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:52.333 [2024-07-25 10:17:37.455870] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:52.333 [2024-07-25 10:17:37.455886] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:52.333 [2024-07-25 10:17:37.455901] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:52.333 [2024-07-25 10:17:37.455935] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:52.333 qpair failed and we were unable to recover it. 00:28:52.333 [2024-07-25 10:17:37.465740] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:52.333 [2024-07-25 10:17:37.465877] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:52.333 [2024-07-25 10:17:37.465905] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:52.333 [2024-07-25 10:17:37.465928] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:52.333 [2024-07-25 10:17:37.465944] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:52.333 [2024-07-25 10:17:37.465978] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:52.333 qpair failed and we were unable to recover it. 
00:28:52.333 [2024-07-25 10:17:37.475773] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:52.333 [2024-07-25 10:17:37.475931] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:52.333 [2024-07-25 10:17:37.475959] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:52.333 [2024-07-25 10:17:37.475976] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:52.333 [2024-07-25 10:17:37.475990] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90
00:28:52.333 [2024-07-25 10:17:37.476024] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:52.333 qpair failed and we were unable to recover it.
00:28:52.334 [2024-07-25 10:17:37.485794] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:52.334 [2024-07-25 10:17:37.485929] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:52.334 [2024-07-25 10:17:37.485958] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:52.334 [2024-07-25 10:17:37.485974] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:52.334 [2024-07-25 10:17:37.485988] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90
00:28:52.334 [2024-07-25 10:17:37.486021] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:52.334 qpair failed and we were unable to recover it.
00:28:52.334 [2024-07-25 10:17:37.495790] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:52.334 [2024-07-25 10:17:37.495925] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:52.334 [2024-07-25 10:17:37.495953] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:52.334 [2024-07-25 10:17:37.495969] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:52.334 [2024-07-25 10:17:37.495985] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90
00:28:52.334 [2024-07-25 10:17:37.496024] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:52.334 qpair failed and we were unable to recover it.
00:28:52.592 [2024-07-25 10:17:37.505805] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:52.592 [2024-07-25 10:17:37.505956] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:52.592 [2024-07-25 10:17:37.505984] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:52.592 [2024-07-25 10:17:37.506000] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:52.592 [2024-07-25 10:17:37.506014] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90
00:28:52.592 [2024-07-25 10:17:37.506047] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:52.592 qpair failed and we were unable to recover it.
00:28:52.592 [2024-07-25 10:17:37.515869] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:52.592 [2024-07-25 10:17:37.516004] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:52.592 [2024-07-25 10:17:37.516033] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:52.592 [2024-07-25 10:17:37.516050] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:52.592 [2024-07-25 10:17:37.516064] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90
00:28:52.592 [2024-07-25 10:17:37.516098] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:52.592 qpair failed and we were unable to recover it.
00:28:52.592 [2024-07-25 10:17:37.525863] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:52.592 [2024-07-25 10:17:37.525992] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:52.592 [2024-07-25 10:17:37.526020] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:52.592 [2024-07-25 10:17:37.526037] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:52.592 [2024-07-25 10:17:37.526052] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90
00:28:52.593 [2024-07-25 10:17:37.526085] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:52.593 qpair failed and we were unable to recover it.
00:28:52.593 [2024-07-25 10:17:37.535893] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:52.593 [2024-07-25 10:17:37.536015] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:52.593 [2024-07-25 10:17:37.536043] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:52.593 [2024-07-25 10:17:37.536060] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:52.593 [2024-07-25 10:17:37.536075] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90
00:28:52.593 [2024-07-25 10:17:37.536108] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:52.593 qpair failed and we were unable to recover it.
00:28:52.593 [2024-07-25 10:17:37.545934] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:52.593 [2024-07-25 10:17:37.546066] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:52.593 [2024-07-25 10:17:37.546094] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:52.593 [2024-07-25 10:17:37.546111] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:52.593 [2024-07-25 10:17:37.546126] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90
00:28:52.593 [2024-07-25 10:17:37.546159] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:52.593 qpair failed and we were unable to recover it.
00:28:52.593 [2024-07-25 10:17:37.555996] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:52.593 [2024-07-25 10:17:37.556142] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:52.593 [2024-07-25 10:17:37.556169] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:52.593 [2024-07-25 10:17:37.556194] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:52.593 [2024-07-25 10:17:37.556210] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90
00:28:52.593 [2024-07-25 10:17:37.556255] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:52.593 qpair failed and we were unable to recover it.
00:28:52.593 [2024-07-25 10:17:37.566000] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:52.593 [2024-07-25 10:17:37.566148] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:52.593 [2024-07-25 10:17:37.566177] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:52.593 [2024-07-25 10:17:37.566194] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:52.593 [2024-07-25 10:17:37.566208] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90
00:28:52.593 [2024-07-25 10:17:37.566242] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:52.593 qpair failed and we were unable to recover it.
00:28:52.593 [2024-07-25 10:17:37.576016] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:52.593 [2024-07-25 10:17:37.576144] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:52.593 [2024-07-25 10:17:37.576173] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:52.593 [2024-07-25 10:17:37.576190] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:52.593 [2024-07-25 10:17:37.576205] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90
00:28:52.593 [2024-07-25 10:17:37.576238] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:52.593 qpair failed and we were unable to recover it.
00:28:52.593 [2024-07-25 10:17:37.586034] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:52.593 [2024-07-25 10:17:37.586196] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:52.593 [2024-07-25 10:17:37.586230] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:52.593 [2024-07-25 10:17:37.586247] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:52.593 [2024-07-25 10:17:37.586261] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90
00:28:52.593 [2024-07-25 10:17:37.586294] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:52.593 qpair failed and we were unable to recover it.
00:28:52.593 [2024-07-25 10:17:37.596110] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:52.593 [2024-07-25 10:17:37.596253] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:52.593 [2024-07-25 10:17:37.596282] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:52.593 [2024-07-25 10:17:37.596298] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:52.593 [2024-07-25 10:17:37.596313] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90
00:28:52.593 [2024-07-25 10:17:37.596348] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:52.593 qpair failed and we were unable to recover it.
00:28:52.593 [2024-07-25 10:17:37.606095] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:52.593 [2024-07-25 10:17:37.606238] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:52.593 [2024-07-25 10:17:37.606267] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:52.593 [2024-07-25 10:17:37.606283] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:52.593 [2024-07-25 10:17:37.606297] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90
00:28:52.593 [2024-07-25 10:17:37.606332] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:52.593 qpair failed and we were unable to recover it.
00:28:52.593 [2024-07-25 10:17:37.616111] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:52.593 [2024-07-25 10:17:37.616234] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:52.593 [2024-07-25 10:17:37.616263] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:52.593 [2024-07-25 10:17:37.616279] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:52.593 [2024-07-25 10:17:37.616294] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90
00:28:52.593 [2024-07-25 10:17:37.616327] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:52.593 qpair failed and we were unable to recover it.
00:28:52.593 [2024-07-25 10:17:37.626166] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:52.593 [2024-07-25 10:17:37.626307] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:52.593 [2024-07-25 10:17:37.626336] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:52.593 [2024-07-25 10:17:37.626352] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:52.593 [2024-07-25 10:17:37.626367] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90
00:28:52.593 [2024-07-25 10:17:37.626402] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:52.593 qpair failed and we were unable to recover it.
00:28:52.593 [2024-07-25 10:17:37.636296] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:52.593 [2024-07-25 10:17:37.636449] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:52.593 [2024-07-25 10:17:37.636478] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:52.593 [2024-07-25 10:17:37.636494] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:52.593 [2024-07-25 10:17:37.636508] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90
00:28:52.593 [2024-07-25 10:17:37.636541] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:52.593 qpair failed and we were unable to recover it.
00:28:52.593 [2024-07-25 10:17:37.646207] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:52.593 [2024-07-25 10:17:37.646383] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:52.593 [2024-07-25 10:17:37.646416] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:52.593 [2024-07-25 10:17:37.646444] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:52.593 [2024-07-25 10:17:37.646460] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90
00:28:52.593 [2024-07-25 10:17:37.646494] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:52.593 qpair failed and we were unable to recover it.
00:28:52.593 [2024-07-25 10:17:37.656243] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:52.593 [2024-07-25 10:17:37.656377] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:52.593 [2024-07-25 10:17:37.656404] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:52.593 [2024-07-25 10:17:37.656420] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:52.593 [2024-07-25 10:17:37.656447] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90
00:28:52.594 [2024-07-25 10:17:37.656482] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:52.594 qpair failed and we were unable to recover it.
00:28:52.594 [2024-07-25 10:17:37.666348] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:52.594 [2024-07-25 10:17:37.666482] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:52.594 [2024-07-25 10:17:37.666510] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:52.594 [2024-07-25 10:17:37.666527] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:52.594 [2024-07-25 10:17:37.666541] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90
00:28:52.594 [2024-07-25 10:17:37.666575] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:52.594 qpair failed and we were unable to recover it.
00:28:52.594 [2024-07-25 10:17:37.676440] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:52.594 [2024-07-25 10:17:37.676610] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:52.594 [2024-07-25 10:17:37.676639] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:52.594 [2024-07-25 10:17:37.676655] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:52.594 [2024-07-25 10:17:37.676669] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90
00:28:52.594 [2024-07-25 10:17:37.676702] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:52.594 qpair failed and we were unable to recover it.
00:28:52.594 [2024-07-25 10:17:37.686364] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:52.594 [2024-07-25 10:17:37.686493] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:52.594 [2024-07-25 10:17:37.686521] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:52.594 [2024-07-25 10:17:37.686538] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:52.594 [2024-07-25 10:17:37.686552] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90
00:28:52.594 [2024-07-25 10:17:37.686591] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:52.594 qpair failed and we were unable to recover it.
00:28:52.594 [2024-07-25 10:17:37.696401] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:52.594 [2024-07-25 10:17:37.696573] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:52.594 [2024-07-25 10:17:37.696602] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:52.594 [2024-07-25 10:17:37.696619] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:52.594 [2024-07-25 10:17:37.696633] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90
00:28:52.594 [2024-07-25 10:17:37.696666] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:52.594 qpair failed and we were unable to recover it.
00:28:52.594 [2024-07-25 10:17:37.706416] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:52.594 [2024-07-25 10:17:37.706552] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:52.594 [2024-07-25 10:17:37.706580] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:52.594 [2024-07-25 10:17:37.706596] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:52.594 [2024-07-25 10:17:37.706611] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90
00:28:52.594 [2024-07-25 10:17:37.706645] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:52.594 qpair failed and we were unable to recover it.
00:28:52.594 [2024-07-25 10:17:37.716457] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:52.594 [2024-07-25 10:17:37.716593] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:52.594 [2024-07-25 10:17:37.716622] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:52.594 [2024-07-25 10:17:37.716638] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:52.594 [2024-07-25 10:17:37.716653] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90
00:28:52.594 [2024-07-25 10:17:37.716690] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:52.594 qpair failed and we were unable to recover it.
00:28:52.594 [2024-07-25 10:17:37.726510] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:52.594 [2024-07-25 10:17:37.726655] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:52.594 [2024-07-25 10:17:37.726683] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:52.594 [2024-07-25 10:17:37.726700] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:52.594 [2024-07-25 10:17:37.726714] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90
00:28:52.594 [2024-07-25 10:17:37.726748] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:52.594 qpair failed and we were unable to recover it.
00:28:52.594 [2024-07-25 10:17:37.736500] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:52.594 [2024-07-25 10:17:37.736674] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:52.594 [2024-07-25 10:17:37.736707] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:52.594 [2024-07-25 10:17:37.736724] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:52.594 [2024-07-25 10:17:37.736739] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90
00:28:52.594 [2024-07-25 10:17:37.736773] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:52.594 qpair failed and we were unable to recover it.
00:28:52.594 [2024-07-25 10:17:37.746541] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:52.594 [2024-07-25 10:17:37.746682] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:52.594 [2024-07-25 10:17:37.746710] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:52.594 [2024-07-25 10:17:37.746726] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:52.594 [2024-07-25 10:17:37.746741] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90
00:28:52.594 [2024-07-25 10:17:37.746775] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:52.594 qpair failed and we were unable to recover it.
00:28:52.594 [2024-07-25 10:17:37.756599] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:52.594 [2024-07-25 10:17:37.756738] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:52.594 [2024-07-25 10:17:37.756767] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:52.594 [2024-07-25 10:17:37.756783] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:52.594 [2024-07-25 10:17:37.756798] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90
00:28:52.594 [2024-07-25 10:17:37.756832] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:52.594 qpair failed and we were unable to recover it.
00:28:52.854 [2024-07-25 10:17:37.766596] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:52.854 [2024-07-25 10:17:37.766741] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:52.854 [2024-07-25 10:17:37.766769] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:52.854 [2024-07-25 10:17:37.766785] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:52.854 [2024-07-25 10:17:37.766800] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90
00:28:52.854 [2024-07-25 10:17:37.766835] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:52.854 qpair failed and we were unable to recover it.
00:28:52.854 [2024-07-25 10:17:37.776607] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:52.854 [2024-07-25 10:17:37.776733] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:52.854 [2024-07-25 10:17:37.776761] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:52.854 [2024-07-25 10:17:37.776777] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:52.854 [2024-07-25 10:17:37.776797] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90
00:28:52.854 [2024-07-25 10:17:37.776832] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:52.854 qpair failed and we were unable to recover it.
00:28:52.854 [2024-07-25 10:17:37.786674] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:52.854 [2024-07-25 10:17:37.786818] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:52.854 [2024-07-25 10:17:37.786850] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:52.854 [2024-07-25 10:17:37.786867] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:52.854 [2024-07-25 10:17:37.786882] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90
00:28:52.854 [2024-07-25 10:17:37.786916] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:52.854 qpair failed and we were unable to recover it.
00:28:52.854 [2024-07-25 10:17:37.796709] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:52.854 [2024-07-25 10:17:37.796838] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:52.854 [2024-07-25 10:17:37.796867] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:52.854 [2024-07-25 10:17:37.796884] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:52.854 [2024-07-25 10:17:37.796899] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90
00:28:52.854 [2024-07-25 10:17:37.796932] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:52.854 qpair failed and we were unable to recover it.
00:28:52.854 [2024-07-25 10:17:37.806723] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:52.854 [2024-07-25 10:17:37.806876] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:52.854 [2024-07-25 10:17:37.806904] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:52.854 [2024-07-25 10:17:37.806921] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:52.854 [2024-07-25 10:17:37.806935] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90
00:28:52.854 [2024-07-25 10:17:37.806970] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:52.854 qpair failed and we were unable to recover it.
00:28:52.854 [2024-07-25 10:17:37.816760] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:52.854 [2024-07-25 10:17:37.816892] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:52.854 [2024-07-25 10:17:37.816922] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:52.854 [2024-07-25 10:17:37.816940] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:52.854 [2024-07-25 10:17:37.816954] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90
00:28:52.854 [2024-07-25 10:17:37.816989] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:52.854 qpair failed and we were unable to recover it.
00:28:52.854 [2024-07-25 10:17:37.826734] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:52.854 [2024-07-25 10:17:37.826878] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:52.854 [2024-07-25 10:17:37.826906] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:52.854 [2024-07-25 10:17:37.826923] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:52.854 [2024-07-25 10:17:37.826937] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90
00:28:52.854 [2024-07-25 10:17:37.826972] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:52.854 qpair failed and we were unable to recover it.
00:28:52.854 [2024-07-25 10:17:37.836827] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:52.854 [2024-07-25 10:17:37.836973] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:52.854 [2024-07-25 10:17:37.837002] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:52.854 [2024-07-25 10:17:37.837018] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:52.854 [2024-07-25 10:17:37.837033] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90
00:28:52.854 [2024-07-25 10:17:37.837066] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:52.854 qpair failed and we were unable to recover it.
00:28:52.854 [2024-07-25 10:17:37.846800] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:52.854 [2024-07-25 10:17:37.846945] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:52.854 [2024-07-25 10:17:37.846974] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:52.854 [2024-07-25 10:17:37.846990] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:52.854 [2024-07-25 10:17:37.847005] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90
00:28:52.854 [2024-07-25 10:17:37.847040] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:52.854 qpair failed and we were unable to recover it.
00:28:52.854 [2024-07-25 10:17:37.856849] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:52.854 [2024-07-25 10:17:37.856979] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:52.854 [2024-07-25 10:17:37.857008] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:52.854 [2024-07-25 10:17:37.857024] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:52.854 [2024-07-25 10:17:37.857039] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90
00:28:52.854 [2024-07-25 10:17:37.857073] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:52.854 qpair failed and we were unable to recover it.
00:28:52.854 [2024-07-25 10:17:37.866851] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:52.854 [2024-07-25 10:17:37.866991] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:52.854 [2024-07-25 10:17:37.867019] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:52.854 [2024-07-25 10:17:37.867036] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:52.854 [2024-07-25 10:17:37.867057] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90
00:28:52.854 [2024-07-25 10:17:37.867091] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:52.854 qpair failed and we were unable to recover it.
00:28:52.854 [2024-07-25 10:17:37.876932] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:52.855 [2024-07-25 10:17:37.877063] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:52.855 [2024-07-25 10:17:37.877091] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:52.855 [2024-07-25 10:17:37.877114] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:52.855 [2024-07-25 10:17:37.877129] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90
00:28:52.855 [2024-07-25 10:17:37.877163] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:52.855 qpair failed and we were unable to recover it.
00:28:52.855 [2024-07-25 10:17:37.886947] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:52.855 [2024-07-25 10:17:37.887075] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:52.855 [2024-07-25 10:17:37.887108] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:52.855 [2024-07-25 10:17:37.887124] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:52.855 [2024-07-25 10:17:37.887138] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90
00:28:52.855 [2024-07-25 10:17:37.887172] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:52.855 qpair failed and we were unable to recover it.
00:28:52.855 [2024-07-25 10:17:37.896972] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:52.855 [2024-07-25 10:17:37.897119] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:52.855 [2024-07-25 10:17:37.897155] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:52.855 [2024-07-25 10:17:37.897172] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:52.855 [2024-07-25 10:17:37.897187] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90
00:28:52.855 [2024-07-25 10:17:37.897223] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:52.855 qpair failed and we were unable to recover it.
00:28:52.855 [2024-07-25 10:17:37.906960] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:52.855 [2024-07-25 10:17:37.907090] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:52.855 [2024-07-25 10:17:37.907119] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:52.855 [2024-07-25 10:17:37.907135] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:52.855 [2024-07-25 10:17:37.907150] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90
00:28:52.855 [2024-07-25 10:17:37.907184] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:52.855 qpair failed and we were unable to recover it.
00:28:52.855 [2024-07-25 10:17:37.917010] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:52.855 [2024-07-25 10:17:37.917146] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:52.855 [2024-07-25 10:17:37.917175] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:52.855 [2024-07-25 10:17:37.917192] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:52.855 [2024-07-25 10:17:37.917207] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90
00:28:52.855 [2024-07-25 10:17:37.917240] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:52.855 qpair failed and we were unable to recover it.
00:28:52.855 [2024-07-25 10:17:37.927074] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:52.855 [2024-07-25 10:17:37.927253] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:52.855 [2024-07-25 10:17:37.927288] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:52.855 [2024-07-25 10:17:37.927304] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:52.855 [2024-07-25 10:17:37.927319] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90
00:28:52.855 [2024-07-25 10:17:37.927353] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:52.855 qpair failed and we were unable to recover it.
00:28:52.855 [2024-07-25 10:17:37.937060] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:52.855 [2024-07-25 10:17:37.937193] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:52.855 [2024-07-25 10:17:37.937222] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:52.855 [2024-07-25 10:17:37.937238] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:52.855 [2024-07-25 10:17:37.937253] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90
00:28:52.855 [2024-07-25 10:17:37.937287] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:52.855 qpair failed and we were unable to recover it.
00:28:52.855 [2024-07-25 10:17:37.947173] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:52.855 [2024-07-25 10:17:37.947305] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:52.855 [2024-07-25 10:17:37.947334] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:52.855 [2024-07-25 10:17:37.947350] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:52.855 [2024-07-25 10:17:37.947365] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90
00:28:52.855 [2024-07-25 10:17:37.947398] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:52.855 qpair failed and we were unable to recover it.
00:28:52.855 [2024-07-25 10:17:37.957191] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:52.855 [2024-07-25 10:17:37.957338] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:52.855 [2024-07-25 10:17:37.957366] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:52.855 [2024-07-25 10:17:37.957390] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:52.855 [2024-07-25 10:17:37.957406] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90
00:28:52.855 [2024-07-25 10:17:37.957445] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:52.855 qpair failed and we were unable to recover it.
00:28:52.855 [2024-07-25 10:17:37.967183] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:52.855 [2024-07-25 10:17:37.967319] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:52.855 [2024-07-25 10:17:37.967347] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:52.855 [2024-07-25 10:17:37.967363] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:52.855 [2024-07-25 10:17:37.967377] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90
00:28:52.855 [2024-07-25 10:17:37.967411] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:52.855 qpair failed and we were unable to recover it.
00:28:52.855 [2024-07-25 10:17:37.977183] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:52.855 [2024-07-25 10:17:37.977309] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:52.855 [2024-07-25 10:17:37.977337] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:52.855 [2024-07-25 10:17:37.977353] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:52.855 [2024-07-25 10:17:37.977368] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90
00:28:52.855 [2024-07-25 10:17:37.977400] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:52.855 qpair failed and we were unable to recover it.
00:28:52.855 [2024-07-25 10:17:37.987254] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:52.855 [2024-07-25 10:17:37.987374] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:52.855 [2024-07-25 10:17:37.987403] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:52.855 [2024-07-25 10:17:37.987419] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:52.855 [2024-07-25 10:17:37.987454] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90
00:28:52.856 [2024-07-25 10:17:37.987490] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:52.856 qpair failed and we were unable to recover it.
00:28:52.856 [2024-07-25 10:17:37.997206] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:52.856 [2024-07-25 10:17:37.997341] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:52.856 [2024-07-25 10:17:37.997369] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:52.856 [2024-07-25 10:17:37.997385] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:52.856 [2024-07-25 10:17:37.997400] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90
00:28:52.856 [2024-07-25 10:17:37.997441] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:52.856 qpair failed and we were unable to recover it.
00:28:52.856 [2024-07-25 10:17:38.007386] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:52.856 [2024-07-25 10:17:38.007529] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:52.856 [2024-07-25 10:17:38.007558] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:52.856 [2024-07-25 10:17:38.007575] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:52.856 [2024-07-25 10:17:38.007588] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90
00:28:52.856 [2024-07-25 10:17:38.007623] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:52.856 qpair failed and we were unable to recover it.
00:28:52.856 [2024-07-25 10:17:38.017329] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:52.856 [2024-07-25 10:17:38.017475] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:52.856 [2024-07-25 10:17:38.017505] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:52.856 [2024-07-25 10:17:38.017521] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:52.856 [2024-07-25 10:17:38.017535] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90
00:28:52.856 [2024-07-25 10:17:38.017568] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:52.856 qpair failed and we were unable to recover it.
00:28:53.115 [2024-07-25 10:17:38.027295] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:53.115 [2024-07-25 10:17:38.027419] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:53.115 [2024-07-25 10:17:38.027457] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:53.115 [2024-07-25 10:17:38.027475] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:53.115 [2024-07-25 10:17:38.027489] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90
00:28:53.115 [2024-07-25 10:17:38.027522] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:53.115 qpair failed and we were unable to recover it.
00:28:53.115 [2024-07-25 10:17:38.037354] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:28:53.115 [2024-07-25 10:17:38.037522] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:28:53.115 [2024-07-25 10:17:38.037551] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:28:53.115 [2024-07-25 10:17:38.037568] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:28:53.115 [2024-07-25 10:17:38.037583] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90
00:28:53.115 [2024-07-25 10:17:38.037616] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:53.115 qpair failed and we were unable to recover it.
00:28:53.115 [2024-07-25 10:17:38.047371] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:53.115 [2024-07-25 10:17:38.047524] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:53.115 [2024-07-25 10:17:38.047557] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:53.115 [2024-07-25 10:17:38.047575] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:53.115 [2024-07-25 10:17:38.047589] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:53.115 [2024-07-25 10:17:38.047622] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:53.115 qpair failed and we were unable to recover it. 00:28:53.115 [2024-07-25 10:17:38.057380] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:53.115 [2024-07-25 10:17:38.057509] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:53.115 [2024-07-25 10:17:38.057538] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:53.115 [2024-07-25 10:17:38.057554] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:53.115 [2024-07-25 10:17:38.057569] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:53.115 [2024-07-25 10:17:38.057602] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:53.115 qpair failed and we were unable to recover it. 00:28:53.115 [2024-07-25 10:17:38.067488] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:53.115 [2024-07-25 10:17:38.067613] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:53.115 [2024-07-25 10:17:38.067641] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:53.115 [2024-07-25 10:17:38.067657] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:53.115 [2024-07-25 10:17:38.067672] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:53.115 [2024-07-25 10:17:38.067706] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:53.115 qpair failed and we were unable to recover it. 
00:28:53.115 [2024-07-25 10:17:38.077498] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:53.115 [2024-07-25 10:17:38.077687] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:53.115 [2024-07-25 10:17:38.077716] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:53.115 [2024-07-25 10:17:38.077732] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:53.115 [2024-07-25 10:17:38.077746] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:53.115 [2024-07-25 10:17:38.077782] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:53.115 qpair failed and we were unable to recover it. 00:28:53.115 [2024-07-25 10:17:38.087585] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:53.115 [2024-07-25 10:17:38.087716] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:53.115 [2024-07-25 10:17:38.087744] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:53.115 [2024-07-25 10:17:38.087760] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:53.115 [2024-07-25 10:17:38.087775] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:53.115 [2024-07-25 10:17:38.087814] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:53.115 qpair failed and we were unable to recover it. 00:28:53.115 [2024-07-25 10:17:38.097545] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:53.115 [2024-07-25 10:17:38.097679] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:53.115 [2024-07-25 10:17:38.097710] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:53.115 [2024-07-25 10:17:38.097726] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:53.115 [2024-07-25 10:17:38.097741] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:53.115 [2024-07-25 10:17:38.097776] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:53.115 qpair failed and we were unable to recover it. 
00:28:53.115 [2024-07-25 10:17:38.107611] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:53.115 [2024-07-25 10:17:38.107738] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:53.115 [2024-07-25 10:17:38.107769] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:53.115 [2024-07-25 10:17:38.107786] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:53.115 [2024-07-25 10:17:38.107801] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:53.115 [2024-07-25 10:17:38.107834] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:53.115 qpair failed and we were unable to recover it. 00:28:53.115 [2024-07-25 10:17:38.117570] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:53.115 [2024-07-25 10:17:38.117705] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:53.115 [2024-07-25 10:17:38.117733] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:53.115 [2024-07-25 10:17:38.117749] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:53.115 [2024-07-25 10:17:38.117764] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:53.115 [2024-07-25 10:17:38.117797] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:53.115 qpair failed and we were unable to recover it. 00:28:53.115 [2024-07-25 10:17:38.127638] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:53.115 [2024-07-25 10:17:38.127799] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:53.115 [2024-07-25 10:17:38.127827] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:53.115 [2024-07-25 10:17:38.127843] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:53.115 [2024-07-25 10:17:38.127857] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:53.115 [2024-07-25 10:17:38.127892] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:53.115 qpair failed and we were unable to recover it. 
00:28:53.115 [2024-07-25 10:17:38.137661] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:53.115 [2024-07-25 10:17:38.137796] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:53.115 [2024-07-25 10:17:38.137832] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:53.115 [2024-07-25 10:17:38.137849] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:53.115 [2024-07-25 10:17:38.137864] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:53.115 [2024-07-25 10:17:38.137906] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:53.115 qpair failed and we were unable to recover it. 00:28:53.115 [2024-07-25 10:17:38.147656] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:53.115 [2024-07-25 10:17:38.147804] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:53.115 [2024-07-25 10:17:38.147832] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:53.115 [2024-07-25 10:17:38.147849] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:53.116 [2024-07-25 10:17:38.147863] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:53.116 [2024-07-25 10:17:38.147896] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:53.116 qpair failed and we were unable to recover it. 00:28:53.116 [2024-07-25 10:17:38.157744] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:53.116 [2024-07-25 10:17:38.157900] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:53.116 [2024-07-25 10:17:38.157929] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:53.116 [2024-07-25 10:17:38.157945] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:53.116 [2024-07-25 10:17:38.157960] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:53.116 [2024-07-25 10:17:38.157993] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:53.116 qpair failed and we were unable to recover it. 
00:28:53.116 [2024-07-25 10:17:38.167702] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:53.116 [2024-07-25 10:17:38.167830] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:53.116 [2024-07-25 10:17:38.167859] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:53.116 [2024-07-25 10:17:38.167875] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:53.116 [2024-07-25 10:17:38.167890] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:53.116 [2024-07-25 10:17:38.167923] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:53.116 qpair failed and we were unable to recover it. 00:28:53.116 [2024-07-25 10:17:38.177780] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:53.116 [2024-07-25 10:17:38.177908] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:53.116 [2024-07-25 10:17:38.177941] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:53.116 [2024-07-25 10:17:38.177957] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:53.116 [2024-07-25 10:17:38.177977] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:53.116 [2024-07-25 10:17:38.178011] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:53.116 qpair failed and we were unable to recover it. 00:28:53.116 [2024-07-25 10:17:38.187849] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:53.116 [2024-07-25 10:17:38.188015] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:53.116 [2024-07-25 10:17:38.188044] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:53.116 [2024-07-25 10:17:38.188060] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:53.116 [2024-07-25 10:17:38.188073] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:53.116 [2024-07-25 10:17:38.188106] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:53.116 qpair failed and we were unable to recover it. 
00:28:53.116 [2024-07-25 10:17:38.197834] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:53.116 [2024-07-25 10:17:38.197987] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:53.116 [2024-07-25 10:17:38.198015] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:53.116 [2024-07-25 10:17:38.198031] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:53.116 [2024-07-25 10:17:38.198046] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:53.116 [2024-07-25 10:17:38.198079] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:53.116 qpair failed and we were unable to recover it. 00:28:53.116 [2024-07-25 10:17:38.207899] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:53.116 [2024-07-25 10:17:38.208033] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:53.116 [2024-07-25 10:17:38.208061] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:53.116 [2024-07-25 10:17:38.208077] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:53.116 [2024-07-25 10:17:38.208098] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:53.116 [2024-07-25 10:17:38.208133] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:53.116 qpair failed and we were unable to recover it. 00:28:53.116 [2024-07-25 10:17:38.217867] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:53.116 [2024-07-25 10:17:38.217993] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:53.116 [2024-07-25 10:17:38.218023] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:53.116 [2024-07-25 10:17:38.218039] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:53.116 [2024-07-25 10:17:38.218054] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:53.116 [2024-07-25 10:17:38.218087] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:53.116 qpair failed and we were unable to recover it. 
00:28:53.116 [2024-07-25 10:17:38.227905] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:53.116 [2024-07-25 10:17:38.228046] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:53.116 [2024-07-25 10:17:38.228075] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:53.116 [2024-07-25 10:17:38.228091] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:53.116 [2024-07-25 10:17:38.228106] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:53.116 [2024-07-25 10:17:38.228139] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:53.116 qpair failed and we were unable to recover it. 00:28:53.116 [2024-07-25 10:17:38.237991] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:53.116 [2024-07-25 10:17:38.238121] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:53.116 [2024-07-25 10:17:38.238149] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:53.116 [2024-07-25 10:17:38.238165] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:53.116 [2024-07-25 10:17:38.238179] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:53.116 [2024-07-25 10:17:38.238213] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:53.116 qpair failed and we were unable to recover it. 00:28:53.116 [2024-07-25 10:17:38.248001] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:53.116 [2024-07-25 10:17:38.248160] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:53.116 [2024-07-25 10:17:38.248187] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:53.116 [2024-07-25 10:17:38.248203] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:53.116 [2024-07-25 10:17:38.248217] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:53.116 [2024-07-25 10:17:38.248249] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:53.116 qpair failed and we were unable to recover it. 
00:28:53.116 [2024-07-25 10:17:38.258027] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:53.116 [2024-07-25 10:17:38.258161] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:53.116 [2024-07-25 10:17:38.258190] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:53.116 [2024-07-25 10:17:38.258207] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:53.116 [2024-07-25 10:17:38.258221] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:53.116 [2024-07-25 10:17:38.258255] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:53.116 qpair failed and we were unable to recover it. 00:28:53.116 [2024-07-25 10:17:38.268057] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:53.116 [2024-07-25 10:17:38.268191] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:53.116 [2024-07-25 10:17:38.268220] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:53.116 [2024-07-25 10:17:38.268236] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:53.116 [2024-07-25 10:17:38.268258] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:53.116 [2024-07-25 10:17:38.268293] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:53.116 qpair failed and we were unable to recover it. 00:28:53.116 [2024-07-25 10:17:38.278049] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:53.116 [2024-07-25 10:17:38.278181] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:53.116 [2024-07-25 10:17:38.278209] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:53.116 [2024-07-25 10:17:38.278225] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:53.116 [2024-07-25 10:17:38.278239] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:53.116 [2024-07-25 10:17:38.278271] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:53.117 qpair failed and we were unable to recover it. 
00:28:53.374 [2024-07-25 10:17:38.288081] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:53.374 [2024-07-25 10:17:38.288205] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:53.374 [2024-07-25 10:17:38.288233] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:53.374 [2024-07-25 10:17:38.288249] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:53.374 [2024-07-25 10:17:38.288264] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:53.374 [2024-07-25 10:17:38.288298] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:53.374 qpair failed and we were unable to recover it. 00:28:53.374 [2024-07-25 10:17:38.298112] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:53.374 [2024-07-25 10:17:38.298241] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:53.374 [2024-07-25 10:17:38.298270] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:53.374 [2024-07-25 10:17:38.298286] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:53.374 [2024-07-25 10:17:38.298300] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:53.374 [2024-07-25 10:17:38.298335] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:53.374 qpair failed and we were unable to recover it. 00:28:53.374 [2024-07-25 10:17:38.308150] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:53.374 [2024-07-25 10:17:38.308292] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:53.374 [2024-07-25 10:17:38.308320] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:53.374 [2024-07-25 10:17:38.308336] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:53.374 [2024-07-25 10:17:38.308350] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:53.374 [2024-07-25 10:17:38.308385] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:53.374 qpair failed and we were unable to recover it. 
00:28:53.374 [2024-07-25 10:17:38.318256] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:53.375 [2024-07-25 10:17:38.318408] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:53.375 [2024-07-25 10:17:38.318444] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:53.375 [2024-07-25 10:17:38.318462] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:53.375 [2024-07-25 10:17:38.318477] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:53.375 [2024-07-25 10:17:38.318510] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:53.375 qpair failed and we were unable to recover it. 00:28:53.375 [2024-07-25 10:17:38.328202] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:53.375 [2024-07-25 10:17:38.328325] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:53.375 [2024-07-25 10:17:38.328354] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:53.375 [2024-07-25 10:17:38.328370] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:53.375 [2024-07-25 10:17:38.328385] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:53.375 [2024-07-25 10:17:38.328418] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:53.375 qpair failed and we were unable to recover it. 00:28:53.375 [2024-07-25 10:17:38.338284] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:53.375 [2024-07-25 10:17:38.338412] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:53.375 [2024-07-25 10:17:38.338455] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:53.375 [2024-07-25 10:17:38.338472] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:53.375 [2024-07-25 10:17:38.338487] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:53.375 [2024-07-25 10:17:38.338521] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:53.375 qpair failed and we were unable to recover it. 
00:28:53.375 [2024-07-25 10:17:38.348289] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:53.375 [2024-07-25 10:17:38.348450] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:53.375 [2024-07-25 10:17:38.348479] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:53.375 [2024-07-25 10:17:38.348495] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:53.375 [2024-07-25 10:17:38.348510] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:53.375 [2024-07-25 10:17:38.348545] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:53.375 qpair failed and we were unable to recover it. 00:28:53.375 [2024-07-25 10:17:38.358323] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:53.375 [2024-07-25 10:17:38.358456] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:53.375 [2024-07-25 10:17:38.358485] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:53.375 [2024-07-25 10:17:38.358508] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:53.375 [2024-07-25 10:17:38.358524] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa8000b90 00:28:53.375 [2024-07-25 10:17:38.358558] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:28:53.375 qpair failed and we were unable to recover it. 00:28:53.375 [2024-07-25 10:17:38.368348] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:53.375 [2024-07-25 10:17:38.368485] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:53.375 [2024-07-25 10:17:38.368523] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:53.375 [2024-07-25 10:17:38.368542] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:53.375 [2024-07-25 10:17:38.368559] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff98000b90 00:28:53.375 [2024-07-25 10:17:38.368594] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:53.375 qpair failed and we were unable to recover it. 
00:28:53.375 [2024-07-25 10:17:38.378449] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:53.375 [2024-07-25 10:17:38.378627] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:53.375 [2024-07-25 10:17:38.378664] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:53.375 [2024-07-25 10:17:38.378683] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:53.375 [2024-07-25 10:17:38.378700] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a3ea0 00:28:53.375 [2024-07-25 10:17:38.378736] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:53.375 qpair failed and we were unable to recover it. 00:28:53.375 [2024-07-25 10:17:38.388406] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:53.375 [2024-07-25 10:17:38.388546] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:53.375 [2024-07-25 10:17:38.388583] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:53.375 [2024-07-25 10:17:38.388601] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:53.375 [2024-07-25 10:17:38.388617] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x18a3ea0 00:28:53.375 [2024-07-25 10:17:38.388651] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:28:53.375 qpair failed and we were unable to recover it. 00:28:53.375 [2024-07-25 10:17:38.398460] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:53.375 [2024-07-25 10:17:38.398604] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:53.375 [2024-07-25 10:17:38.398643] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:53.375 [2024-07-25 10:17:38.398661] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:53.375 [2024-07-25 10:17:38.398676] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff98000b90 00:28:53.375 [2024-07-25 10:17:38.398714] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:53.375 qpair failed and we were unable to recover it. 
00:28:53.375 [2024-07-25 10:17:38.408454] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:53.375 [2024-07-25 10:17:38.408584] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:53.375 [2024-07-25 10:17:38.408615] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:53.375 [2024-07-25 10:17:38.408633] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:53.375 [2024-07-25 10:17:38.408648] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7eff98000b90 00:28:53.375 [2024-07-25 10:17:38.408682] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:28:53.375 qpair failed and we were unable to recover it. 00:28:53.375 [2024-07-25 10:17:38.408817] nvme_ctrlr.c:4480:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed 00:28:53.375 A controller has encountered a failure and is being reset. 00:28:53.375 [2024-07-25 10:17:38.418515] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:53.375 [2024-07-25 10:17:38.418654] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:53.375 [2024-07-25 10:17:38.418689] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:53.375 [2024-07-25 10:17:38.418709] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:53.375 [2024-07-25 10:17:38.418725] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa0000b90 00:28:53.375 [2024-07-25 10:17:38.418764] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:53.375 qpair failed and we were unable to recover it. 00:28:53.375 [2024-07-25 10:17:38.428513] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:28:53.375 [2024-07-25 10:17:38.428652] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:28:53.375 [2024-07-25 10:17:38.428683] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:28:53.375 [2024-07-25 10:17:38.428711] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:53.375 [2024-07-25 10:17:38.428727] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7effa0000b90 00:28:53.375 [2024-07-25 10:17:38.428763] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:28:53.375 qpair failed and we were unable to recover it. 00:28:53.375 [2024-07-25 10:17:38.428886] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a0b00 (9): Bad file descriptor 00:28:53.375 Controller properly reset. 
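The failure signature above decodes as follows: after the target-side fault the target no longer recognizes controller ID 0x1, so every I/O-qpair Fabrics CONNECT completes with sct 1, sc 130 (0x82, the Fabrics invalid-parameters status), the host's completion path surfaces this as CQ transport error -6 (ENXIO), and once a Keep Alive submission also fails the host declares the controller failed and resets it. A minimal sketch of how a similar signature can be provoked against a standalone SPDK target follows; the rpc.py invocations, addresses, and bdev names are illustrative assumptions, not commands taken from this harness:

  # target side: export a malloc namespace over TCP (sketch, assumed paths)
  ./scripts/rpc.py nvmf_create_transport -t TCP
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK0001
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # with a host connected and running I/O, drop the subsystem; the host's
  # reconnect CONNECTs then carry a cntlid the target no longer knows,
  # reproducing the "Unknown controller ID" / sct 1, sc 130 loop seen above
  ./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1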
00:28:53.375 Initializing NVMe Controllers 00:28:53.375 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:53.375 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:53.375 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:28:53.375 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:28:53.375 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:28:53.375 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:28:53.375 Initialization complete. Launching workers. 00:28:53.375 Starting thread on core 1 00:28:53.376 Starting thread on core 2 00:28:53.376 Starting thread on core 3 00:28:53.376 Starting thread on core 0 00:28:53.376 10:17:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:28:53.376 00:28:53.376 real 0m10.912s 00:28:53.376 user 0m18.870s 00:28:53.376 sys 0m5.462s 00:28:53.376 10:17:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:53.376 10:17:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:53.376 ************************************ 00:28:53.376 END TEST nvmf_target_disconnect_tc2 00:28:53.376 ************************************ 00:28:53.376 10:17:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:28:53.376 10:17:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:28:53.376 10:17:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:28:53.376 10:17:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:53.376 10:17:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # sync 00:28:53.376 10:17:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:53.376 10:17:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@120 -- # set +e 00:28:53.376 10:17:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:53.376 10:17:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:53.376 rmmod nvme_tcp 00:28:53.376 rmmod nvme_fabrics 00:28:53.376 rmmod nvme_keyring 00:28:53.635 10:17:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:53.635 10:17:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set -e 00:28:53.635 10:17:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # return 0 00:28:53.635 10:17:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@489 -- # '[' -n 553360 ']' 00:28:53.635 10:17:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@490 -- # killprocess 553360 00:28:53.635 10:17:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@950 -- # '[' -z 553360 ']' 00:28:53.635 10:17:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # kill -0 553360 00:28:53.635 10:17:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@955 -- # uname 00:28:53.635 10:17:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@955 -- # '[' Linux = 
Linux ']' 00:28:53.635 10:17:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 553360 00:28:53.635 10:17:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@956 -- # process_name=reactor_4 00:28:53.635 10:17:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # '[' reactor_4 = sudo ']' 00:28:53.635 10:17:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@968 -- # echo 'killing process with pid 553360' 00:28:53.635 killing process with pid 553360 00:28:53.635 10:17:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@969 -- # kill 553360 00:28:53.635 10:17:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@974 -- # wait 553360 00:28:53.892 10:17:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:53.892 10:17:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:53.892 10:17:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:53.892 10:17:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:53.892 10:17:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:53.892 10:17:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:53.892 10:17:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:53.892 10:17:39 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:56.422 10:17:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:56.422 00:28:56.422 real 0m16.485s 00:28:56.422 user 0m45.155s 00:28:56.422 sys 0m7.985s 00:28:56.422 10:17:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:56.422 10:17:41 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:28:56.422 ************************************ 00:28:56.422 END TEST nvmf_target_disconnect 00:28:56.422 ************************************ 00:28:56.422 10:17:41 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:28:56.422 00:28:56.422 real 5m34.214s 00:28:56.422 user 11m57.306s 00:28:56.422 sys 1m24.690s 00:28:56.422 10:17:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:56.422 10:17:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:56.422 ************************************ 00:28:56.422 END TEST nvmf_host 00:28:56.422 ************************************ 00:28:56.422 00:28:56.422 real 21m43.728s 00:28:56.422 user 51m21.056s 00:28:56.422 sys 5m36.252s 00:28:56.422 10:17:41 nvmf_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:28:56.422 10:17:41 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:56.422 ************************************ 00:28:56.422 END TEST nvmf_tcp 00:28:56.422 ************************************ 00:28:56.422 10:17:41 -- spdk/autotest.sh@292 -- # [[ 0 -eq 0 ]] 00:28:56.422 10:17:41 -- spdk/autotest.sh@293 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:28:56.422 10:17:41 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 
00:28:56.422 10:17:41 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:28:56.422 10:17:41 -- common/autotest_common.sh@10 -- # set +x 00:28:56.422 ************************************ 00:28:56.422 START TEST spdkcli_nvmf_tcp 00:28:56.422 ************************************ 00:28:56.422 10:17:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:28:56.422 * Looking for test storage... 00:28:56.422 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:28:56.422 10:17:41 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:28:56.423 10:17:41 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:28:56.423 10:17:41 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:28:56.423 10:17:41 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:56.423 10:17:41 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:28:56.423 10:17:41 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:56.423 10:17:41 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:56.423 10:17:41 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:56.423 10:17:41 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:56.423 10:17:41 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:56.423 10:17:41 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:56.423 10:17:41 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:56.423 10:17:41 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:56.423 10:17:41 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:56.423 10:17:41 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:56.423 10:17:41 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:28:56.423 10:17:41 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:28:56.423 10:17:41 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:56.423 10:17:41 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:56.423 10:17:41 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:56.423 10:17:41 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:56.423 10:17:41 spdkcli_nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:56.423 10:17:41 spdkcli_nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:56.423 10:17:41 spdkcli_nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:56.423 10:17:41 spdkcli_nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:56.423 10:17:41 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:56.423 10:17:41 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:56.423 10:17:41 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:56.423 10:17:41 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:28:56.423 10:17:41 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:56.423 10:17:41 spdkcli_nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:28:56.423 10:17:41 spdkcli_nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:56.423 10:17:41 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:56.423 10:17:41 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:56.423 10:17:41 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:56.423 10:17:41 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:56.423 10:17:41 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:56.423 10:17:41 spdkcli_nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:56.423 10:17:41 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:56.423 10:17:41 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:28:56.423 10:17:41 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:28:56.423 10:17:41 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:28:56.423 10:17:41 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:28:56.423 10:17:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:56.423 10:17:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:56.423 10:17:41 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:28:56.423 10:17:41 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=554557 00:28:56.423 10:17:41 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:28:56.423 10:17:41 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # 
waitforlisten 554557 00:28:56.423 10:17:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@831 -- # '[' -z 554557 ']' 00:28:56.423 10:17:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:56.423 10:17:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:28:56.423 10:17:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:56.423 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:56.423 10:17:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:28:56.423 10:17:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:56.423 [2024-07-25 10:17:41.296972] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:28:56.423 [2024-07-25 10:17:41.297075] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid554557 ] 00:28:56.423 EAL: No free 2048 kB hugepages reported on node 1 00:28:56.423 [2024-07-25 10:17:41.373904] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:28:56.423 [2024-07-25 10:17:41.504455] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:56.423 [2024-07-25 10:17:41.504476] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:56.681 10:17:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:28:56.681 10:17:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@864 -- # return 0 00:28:56.681 10:17:41 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:28:56.681 10:17:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:56.681 10:17:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:56.681 10:17:41 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:28:56.681 10:17:41 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:28:56.681 10:17:41 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:28:56.681 10:17:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:56.681 10:17:41 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:56.681 10:17:41 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:28:56.681 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:28:56.681 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:28:56.681 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:28:56.681 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:28:56.681 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:28:56.681 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:28:56.681 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:28:56.681 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:28:56.681 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' 
'\''Malloc4'\'' True 00:28:56.681 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:28:56.681 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:28:56.681 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:28:56.681 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:28:56.681 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:28:56.681 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:28:56.681 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:28:56.681 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:28:56.681 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:28:56.681 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:28:56.681 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:28:56.681 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:28:56.681 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:28:56.681 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:28:56.681 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:28:56.681 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:28:56.681 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:28:56.681 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:28:56.682 ' 00:28:59.209 [2024-07-25 10:17:44.264768] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:00.581 [2024-07-25 10:17:45.505194] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:29:03.106 [2024-07-25 10:17:47.792165] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:29:05.004 [2024-07-25 10:17:49.762409] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:29:06.376 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:29:06.376 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:29:06.376 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:29:06.376 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:29:06.376 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:29:06.376 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:29:06.376 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 
00:29:06.376 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:29:06.376 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:29:06.376 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:29:06.376 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:29:06.376 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:29:06.376 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:29:06.376 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:29:06.376 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:29:06.376 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:29:06.376 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:29:06.376 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:29:06.376 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:29:06.376 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:29:06.376 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:29:06.376 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:29:06.376 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:29:06.376 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:29:06.376 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:29:06.376 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:29:06.376 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:29:06.376 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:29:06.376 10:17:51 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:29:06.376 10:17:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:06.376 10:17:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:06.376 10:17:51 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:29:06.376 10:17:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:06.376 10:17:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 
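For anyone replaying the create-config step above by hand, the spdkcli_job.py batch maps onto ordinary rpc.py calls against the running target. A minimal sketch, assuming the target is already listening on the default /var/tmp/spdk.sock and reusing the names and ports from the trace (only the first subsystem is shown):

  # 32 MiB malloc bdev with 512-byte blocks, as in '/bdevs/malloc create 32 512 Malloc1'
  scripts/rpc.py bdev_malloc_create 32 512 -b Malloc1
  # TCP transport with the same io_unit_size the job passes
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  # subsystem with serial number, namespace cap and allow-any-host, mirroring cnode1
  scripts/rpc.py nvmf_create_subsystem nqn.2014-08.org.spdk:cnode1 -s N37SXV509SRW -m 4 -a
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2014-08.org.spdk:cnode1 Malloc3 -n 1
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2014-08.org.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4260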
00:29:06.376 10:17:51 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:29:06.376 10:17:51 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:29:06.941 10:17:51 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:29:06.941 10:17:51 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:29:06.941 10:17:51 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:29:06.941 10:17:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:06.941 10:17:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:06.941 10:17:51 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:29:06.941 10:17:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:06.941 10:17:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:06.941 10:17:51 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:29:06.941 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:29:06.941 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:29:06.941 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:29:06.941 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:29:06.941 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:29:06.941 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:29:06.941 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:29:06.941 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:29:06.941 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:29:06.941 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:29:06.941 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:29:06.941 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:29:06.941 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:29:06.941 ' 00:29:12.223 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:29:12.223 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:29:12.223 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:29:12.223 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:29:12.223 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:29:12.223 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:29:12.223 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 
'nqn.2014-08.org.spdk:cnode3', False] 00:29:12.223 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:29:12.223 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:29:12.223 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:29:12.223 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:29:12.223 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:29:12.223 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:29:12.223 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:29:12.223 10:17:57 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:29:12.223 10:17:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:12.223 10:17:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:12.223 10:17:57 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 554557 00:29:12.223 10:17:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # '[' -z 554557 ']' 00:29:12.223 10:17:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # kill -0 554557 00:29:12.223 10:17:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@955 -- # uname 00:29:12.223 10:17:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:12.223 10:17:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 554557 00:29:12.223 10:17:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:29:12.223 10:17:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:29:12.223 10:17:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 554557' 00:29:12.223 killing process with pid 554557 00:29:12.223 10:17:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@969 -- # kill 554557 00:29:12.223 10:17:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@974 -- # wait 554557 00:29:12.481 10:17:57 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:29:12.481 10:17:57 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:29:12.481 10:17:57 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 554557 ']' 00:29:12.481 10:17:57 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 554557 00:29:12.481 10:17:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # '[' -z 554557 ']' 00:29:12.481 10:17:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # kill -0 554557 00:29:12.481 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (554557) - No such process 00:29:12.481 10:17:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@977 -- # echo 'Process with pid 554557 is not found' 00:29:12.481 Process with pid 554557 is not found 00:29:12.481 10:17:57 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:29:12.481 10:17:57 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:29:12.481 10:17:57 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:29:12.481 00:29:12.481 real 0m16.436s 00:29:12.481 user 0m34.837s 00:29:12.481 sys 0m0.860s 00:29:12.481 10:17:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:12.481 10:17:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 
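The verify-and-teardown pattern above is worth spelling out: check_match dumps the live spdkcli tree into match_files/ (the rm -f of spdkcli_nvmf.test right after the comparison shows where the dump lands), the match tool diffs it against the recorded .test.match template, and killprocess stops the target. A condensed sketch of the same flow, run from the spdk repo root:

  # dump the current /nvmf tree next to its recorded template
  scripts/spdkcli.py ll /nvmf > test/spdkcli/match_files/spdkcli_nvmf.test
  # match pairs <name>.test with <name>.test.match and fails on mismatch
  test/app/match/match test/spdkcli/match_files/spdkcli_nvmf.test.match
  # stop the target the way killprocess does: signal it, then reap it
  kill "$nvmf_tgt_pid" && wait "$nvmf_tgt_pid"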
00:29:12.481 ************************************ 00:29:12.481 END TEST spdkcli_nvmf_tcp 00:29:12.481 ************************************ 00:29:12.481 10:17:57 -- spdk/autotest.sh@294 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:29:12.481 10:17:57 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:29:12.481 10:17:57 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:12.481 10:17:57 -- common/autotest_common.sh@10 -- # set +x 00:29:12.481 ************************************ 00:29:12.481 START TEST nvmf_identify_passthru 00:29:12.481 ************************************ 00:29:12.481 10:17:57 nvmf_identify_passthru -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:29:12.740 * Looking for test storage... 00:29:12.740 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:12.740 10:17:57 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:12.741 10:17:57 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:29:12.741 10:17:57 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:12.741 10:17:57 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:12.741 10:17:57 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:12.741 10:17:57 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:12.741 10:17:57 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:12.741 10:17:57 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:12.741 10:17:57 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:12.741 10:17:57 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:12.741 10:17:57 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:12.741 10:17:57 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:12.741 10:17:57 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:29:12.741 10:17:57 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:29:12.741 10:17:57 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:12.741 10:17:57 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:12.741 10:17:57 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:12.741 10:17:57 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:12.741 10:17:57 nvmf_identify_passthru -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:12.741 10:17:57 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:12.741 10:17:57 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:12.741 10:17:57 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:12.741 10:17:57 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:12.741 10:17:57 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:12.741 10:17:57 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:12.741 10:17:57 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:29:12.741 10:17:57 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:12.741 10:17:57 nvmf_identify_passthru -- nvmf/common.sh@47 -- # : 0 00:29:12.741 10:17:57 nvmf_identify_passthru -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:12.741 10:17:57 nvmf_identify_passthru -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:12.741 10:17:57 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:12.741 10:17:57 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:12.741 10:17:57 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:12.741 10:17:57 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:12.741 10:17:57 nvmf_identify_passthru -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:12.741 10:17:57 nvmf_identify_passthru -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:12.741 10:17:57 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:12.741 10:17:57 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:12.741 10:17:57 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:12.741 10:17:57 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:12.741 10:17:57 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:12.741 10:17:57 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:12.741 10:17:57 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:12.741 10:17:57 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:29:12.741 10:17:57 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:12.741 10:17:57 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:29:12.741 10:17:57 nvmf_identify_passthru -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:12.741 10:17:57 nvmf_identify_passthru -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:12.741 10:17:57 nvmf_identify_passthru -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:12.741 10:17:57 nvmf_identify_passthru -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:12.741 10:17:57 nvmf_identify_passthru -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:12.741 10:17:57 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:12.741 10:17:57 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:29:12.741 10:17:57 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:12.741 10:17:57 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:29:12.741 10:17:57 nvmf_identify_passthru -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:12.741 10:17:57 nvmf_identify_passthru -- nvmf/common.sh@285 -- # xtrace_disable 00:29:12.741 10:17:57 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:15.316 10:18:00 nvmf_identify_passthru -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:15.316 10:18:00 
nvmf_identify_passthru -- nvmf/common.sh@291 -- # pci_devs=() 00:29:15.316 10:18:00 nvmf_identify_passthru -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:15.316 10:18:00 nvmf_identify_passthru -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:15.316 10:18:00 nvmf_identify_passthru -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:15.316 10:18:00 nvmf_identify_passthru -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:15.316 10:18:00 nvmf_identify_passthru -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:15.316 10:18:00 nvmf_identify_passthru -- nvmf/common.sh@295 -- # net_devs=() 00:29:15.316 10:18:00 nvmf_identify_passthru -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:15.316 10:18:00 nvmf_identify_passthru -- nvmf/common.sh@296 -- # e810=() 00:29:15.316 10:18:00 nvmf_identify_passthru -- nvmf/common.sh@296 -- # local -ga e810 00:29:15.316 10:18:00 nvmf_identify_passthru -- nvmf/common.sh@297 -- # x722=() 00:29:15.316 10:18:00 nvmf_identify_passthru -- nvmf/common.sh@297 -- # local -ga x722 00:29:15.316 10:18:00 nvmf_identify_passthru -- nvmf/common.sh@298 -- # mlx=() 00:29:15.316 10:18:00 nvmf_identify_passthru -- nvmf/common.sh@298 -- # local -ga mlx 00:29:15.316 10:18:00 nvmf_identify_passthru -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:15.316 10:18:00 nvmf_identify_passthru -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:15.316 10:18:00 nvmf_identify_passthru -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:15.316 10:18:00 nvmf_identify_passthru -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:15.316 10:18:00 nvmf_identify_passthru -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:15.316 10:18:00 nvmf_identify_passthru -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:15.316 10:18:00 nvmf_identify_passthru -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:15.316 10:18:00 nvmf_identify_passthru -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:15.316 10:18:00 nvmf_identify_passthru -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:15.316 10:18:00 nvmf_identify_passthru -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:15.316 10:18:00 nvmf_identify_passthru -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:15.316 10:18:00 nvmf_identify_passthru -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:15.316 10:18:00 nvmf_identify_passthru -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:15.316 10:18:00 nvmf_identify_passthru -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:15.316 10:18:00 nvmf_identify_passthru -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:15.316 10:18:00 nvmf_identify_passthru -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:15.316 10:18:00 nvmf_identify_passthru -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:15.316 10:18:00 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:15.316 10:18:00 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:29:15.316 Found 0000:84:00.0 (0x8086 - 0x159b) 00:29:15.316 10:18:00 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:15.316 10:18:00 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:15.316 10:18:00 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:29:15.316 10:18:00 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:15.316 10:18:00 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:15.316 10:18:00 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:15.316 10:18:00 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:29:15.316 Found 0000:84:00.1 (0x8086 - 0x159b) 00:29:15.316 10:18:00 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:15.316 10:18:00 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:15.316 10:18:00 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:15.316 10:18:00 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:15.316 10:18:00 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:15.316 10:18:00 nvmf_identify_passthru -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:15.316 10:18:00 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:15.316 10:18:00 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:15.316 10:18:00 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:15.316 10:18:00 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:15.316 10:18:00 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:15.316 10:18:00 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:15.316 10:18:00 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:15.316 10:18:00 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:15.316 10:18:00 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:15.316 10:18:00 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:29:15.316 Found net devices under 0000:84:00.0: cvl_0_0 00:29:15.316 10:18:00 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:15.316 10:18:00 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:15.316 10:18:00 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:15.316 10:18:00 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:15.316 10:18:00 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:15.316 10:18:00 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:15.316 10:18:00 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:15.316 10:18:00 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:15.316 10:18:00 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:29:15.316 Found net devices under 0000:84:00.1: cvl_0_1 00:29:15.316 10:18:00 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:15.316 10:18:00 nvmf_identify_passthru -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:29:15.316 10:18:00 nvmf_identify_passthru -- nvmf/common.sh@414 -- # is_hw=yes 00:29:15.316 10:18:00 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:29:15.316 10:18:00 nvmf_identify_passthru -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 
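The NIC discovery loop traced here is plain sysfs globbing: for each supported PCI function, the harness expands /sys/bus/pci/devices/<bdf>/net/* and strips the directory prefix to get the kernel netdev name. The same idiom in isolation, with the BDFs from this run (assumes the ice driver is bound, so the net/ directory exists):

  for pci in 0000:84:00.0 0000:84:00.1; do
      pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)  # one dir per netdev
      echo "Found net devices under $pci: ${pci_net_devs[@]##*/}"
  done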
00:29:15.316 10:18:00 nvmf_identify_passthru -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:29:15.316 10:18:00 nvmf_identify_passthru -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:15.316 10:18:00 nvmf_identify_passthru -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:15.316 10:18:00 nvmf_identify_passthru -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:15.316 10:18:00 nvmf_identify_passthru -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:15.316 10:18:00 nvmf_identify_passthru -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:15.316 10:18:00 nvmf_identify_passthru -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:15.316 10:18:00 nvmf_identify_passthru -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:29:15.316 10:18:00 nvmf_identify_passthru -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:15.316 10:18:00 nvmf_identify_passthru -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:15.316 10:18:00 nvmf_identify_passthru -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:29:15.316 10:18:00 nvmf_identify_passthru -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:29:15.316 10:18:00 nvmf_identify_passthru -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:29:15.316 10:18:00 nvmf_identify_passthru -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:15.316 10:18:00 nvmf_identify_passthru -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:15.316 10:18:00 nvmf_identify_passthru -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:15.316 10:18:00 nvmf_identify_passthru -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:29:15.316 10:18:00 nvmf_identify_passthru -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:15.316 10:18:00 nvmf_identify_passthru -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:15.316 10:18:00 nvmf_identify_passthru -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:15.316 10:18:00 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:29:15.316 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:15.316 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.138 ms 00:29:15.316 00:29:15.316 --- 10.0.0.2 ping statistics --- 00:29:15.316 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:15.316 rtt min/avg/max/mdev = 0.138/0.138/0.138/0.000 ms 00:29:15.316 10:18:00 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:15.316 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:15.316 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.163 ms 00:29:15.316 00:29:15.316 --- 10.0.0.1 ping statistics --- 00:29:15.316 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:15.316 rtt min/avg/max/mdev = 0.163/0.163/0.163/0.000 ms 00:29:15.316 10:18:00 nvmf_identify_passthru -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:15.316 10:18:00 nvmf_identify_passthru -- nvmf/common.sh@422 -- # return 0 00:29:15.316 10:18:00 nvmf_identify_passthru -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:29:15.316 10:18:00 nvmf_identify_passthru -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:15.317 10:18:00 nvmf_identify_passthru -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:15.317 10:18:00 nvmf_identify_passthru -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:15.317 10:18:00 nvmf_identify_passthru -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:15.317 10:18:00 nvmf_identify_passthru -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:15.317 10:18:00 nvmf_identify_passthru -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:15.317 10:18:00 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:29:15.317 10:18:00 nvmf_identify_passthru -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:15.317 10:18:00 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:15.317 10:18:00 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:29:15.317 10:18:00 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # bdfs=() 00:29:15.317 10:18:00 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # local bdfs 00:29:15.317 10:18:00 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # bdfs=($(get_nvme_bdfs)) 00:29:15.317 10:18:00 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # get_nvme_bdfs 00:29:15.317 10:18:00 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # bdfs=() 00:29:15.317 10:18:00 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # local bdfs 00:29:15.317 10:18:00 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:29:15.317 10:18:00 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:29:15.317 10:18:00 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:29:15.317 10:18:00 nvmf_identify_passthru -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:29:15.317 10:18:00 nvmf_identify_passthru -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:82:00.0 00:29:15.317 10:18:00 nvmf_identify_passthru -- common/autotest_common.sh@1527 -- # echo 0000:82:00.0 00:29:15.317 10:18:00 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:82:00.0 00:29:15.317 10:18:00 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:82:00.0 ']' 00:29:15.317 10:18:00 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:82:00.0' -i 0 00:29:15.317 10:18:00 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:29:15.317 10:18:00 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:29:15.317 EAL: No free 2048 kB hugepages reported on node 1 00:29:19.497 
10:18:04 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=BTLJ9142051K1P0FGN 00:29:19.497 10:18:04 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:82:00.0' -i 0 00:29:19.497 10:18:04 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:29:19.497 10:18:04 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:29:19.754 EAL: No free 2048 kB hugepages reported on node 1 00:29:23.936 10:18:08 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:29:23.936 10:18:08 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:29:23.936 10:18:08 nvmf_identify_passthru -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:23.936 10:18:08 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:23.936 10:18:08 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:29:23.936 10:18:08 nvmf_identify_passthru -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:23.936 10:18:08 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:23.936 10:18:08 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=559475 00:29:23.936 10:18:08 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:29:23.936 10:18:08 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:23.936 10:18:08 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 559475 00:29:23.936 10:18:08 nvmf_identify_passthru -- common/autotest_common.sh@831 -- # '[' -z 559475 ']' 00:29:23.936 10:18:08 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:23.936 10:18:08 nvmf_identify_passthru -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:23.936 10:18:08 nvmf_identify_passthru -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:23.936 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:23.936 10:18:08 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:23.936 10:18:08 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:23.936 [2024-07-25 10:18:08.976682] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:29:23.936 [2024-07-25 10:18:08.976784] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:23.936 EAL: No free 2048 kB hugepages reported on node 1 00:29:23.936 [2024-07-25 10:18:09.050906] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:24.194 [2024-07-25 10:18:09.166836] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:24.194 [2024-07-25 10:18:09.166892] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
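The baseline for the passthru check is collected above in two moves: gen_nvme.sh lists the local controllers as JSON and jq pulls the first traddr, then spdk_nvme_identify against that PCIe address is scraped for serial and model. Condensed into a few lines (paths relative to the spdk repo, as in the trace; head -n1 stands in for the harness's first-bdf helper):

  bdf=$(scripts/gen_nvme.sh | jq -r '.config[].params.traddr' | head -n1)
  serial=$(build/bin/spdk_nvme_identify -r "trtype:PCIe traddr:$bdf" -i 0 | grep 'Serial Number:' | awk '{print $3}')
  model=$(build/bin/spdk_nvme_identify -r "trtype:PCIe traddr:$bdf" -i 0 | grep 'Model Number:' | awk '{print $3}')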
00:29:24.194 [2024-07-25 10:18:09.166919] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:24.194 [2024-07-25 10:18:09.166930] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:24.194 [2024-07-25 10:18:09.166940] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:24.194 [2024-07-25 10:18:09.166998] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:24.194 [2024-07-25 10:18:09.167023] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:29:24.194 [2024-07-25 10:18:09.167087] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:29:24.194 [2024-07-25 10:18:09.167090] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:24.194 10:18:09 nvmf_identify_passthru -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:24.194 10:18:09 nvmf_identify_passthru -- common/autotest_common.sh@864 -- # return 0 00:29:24.194 10:18:09 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:29:24.194 10:18:09 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:24.194 10:18:09 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:24.194 INFO: Log level set to 20 00:29:24.194 INFO: Requests: 00:29:24.194 { 00:29:24.194 "jsonrpc": "2.0", 00:29:24.194 "method": "nvmf_set_config", 00:29:24.194 "id": 1, 00:29:24.194 "params": { 00:29:24.194 "admin_cmd_passthru": { 00:29:24.194 "identify_ctrlr": true 00:29:24.194 } 00:29:24.194 } 00:29:24.194 } 00:29:24.194 00:29:24.194 INFO: response: 00:29:24.194 { 00:29:24.194 "jsonrpc": "2.0", 00:29:24.194 "id": 1, 00:29:24.194 "result": true 00:29:24.194 } 00:29:24.194 00:29:24.194 10:18:09 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:24.194 10:18:09 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:29:24.194 10:18:09 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:24.194 10:18:09 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:24.194 INFO: Setting log level to 20 00:29:24.194 INFO: Setting log level to 20 00:29:24.194 INFO: Log level set to 20 00:29:24.194 INFO: Log level set to 20 00:29:24.194 INFO: Requests: 00:29:24.194 { 00:29:24.194 "jsonrpc": "2.0", 00:29:24.194 "method": "framework_start_init", 00:29:24.194 "id": 1 00:29:24.194 } 00:29:24.194 00:29:24.194 INFO: Requests: 00:29:24.194 { 00:29:24.194 "jsonrpc": "2.0", 00:29:24.194 "method": "framework_start_init", 00:29:24.194 "id": 1 00:29:24.194 } 00:29:24.194 00:29:24.194 [2024-07-25 10:18:09.298660] nvmf_tgt.c: 451:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:29:24.194 INFO: response: 00:29:24.194 { 00:29:24.194 "jsonrpc": "2.0", 00:29:24.194 "id": 1, 00:29:24.194 "result": true 00:29:24.194 } 00:29:24.194 00:29:24.194 INFO: response: 00:29:24.194 { 00:29:24.194 "jsonrpc": "2.0", 00:29:24.194 "id": 1, 00:29:24.194 "result": true 00:29:24.194 } 00:29:24.194 00:29:24.194 10:18:09 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:24.194 10:18:09 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:24.194 10:18:09 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:24.194 10:18:09 nvmf_identify_passthru -- 
common/autotest_common.sh@10 -- # set +x 00:29:24.194 INFO: Setting log level to 40 00:29:24.194 INFO: Setting log level to 40 00:29:24.194 INFO: Setting log level to 40 00:29:24.195 [2024-07-25 10:18:09.308668] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:24.195 10:18:09 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:24.195 10:18:09 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:29:24.195 10:18:09 nvmf_identify_passthru -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:24.195 10:18:09 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:24.195 10:18:09 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:82:00.0 00:29:24.195 10:18:09 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:24.195 10:18:09 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:27.488 Nvme0n1 00:29:27.488 10:18:12 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:27.488 10:18:12 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:29:27.488 10:18:12 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:27.488 10:18:12 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:27.488 10:18:12 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:27.488 10:18:12 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:29:27.488 10:18:12 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:27.488 10:18:12 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:27.488 10:18:12 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:27.488 10:18:12 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:27.488 10:18:12 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:27.488 10:18:12 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:27.488 [2024-07-25 10:18:12.201159] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:27.488 10:18:12 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:27.488 10:18:12 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:29:27.488 10:18:12 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:27.488 10:18:12 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:27.488 [ 00:29:27.488 { 00:29:27.488 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:29:27.488 "subtype": "Discovery", 00:29:27.488 "listen_addresses": [], 00:29:27.488 "allow_any_host": true, 00:29:27.488 "hosts": [] 00:29:27.488 }, 00:29:27.488 { 00:29:27.488 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:29:27.488 "subtype": "NVMe", 00:29:27.488 "listen_addresses": [ 00:29:27.488 { 00:29:27.488 "trtype": "TCP", 00:29:27.488 "adrfam": "IPv4", 00:29:27.488 "traddr": "10.0.0.2", 00:29:27.488 "trsvcid": "4420" 00:29:27.488 } 00:29:27.488 ], 00:29:27.488 "allow_any_host": true, 00:29:27.488 "hosts": [], 00:29:27.488 "serial_number": 
"SPDK00000000000001", 00:29:27.488 "model_number": "SPDK bdev Controller", 00:29:27.488 "max_namespaces": 1, 00:29:27.488 "min_cntlid": 1, 00:29:27.488 "max_cntlid": 65519, 00:29:27.488 "namespaces": [ 00:29:27.488 { 00:29:27.488 "nsid": 1, 00:29:27.488 "bdev_name": "Nvme0n1", 00:29:27.488 "name": "Nvme0n1", 00:29:27.488 "nguid": "0ED85C5A05024E948076A655D1CB8CC6", 00:29:27.488 "uuid": "0ed85c5a-0502-4e94-8076-a655d1cb8cc6" 00:29:27.488 } 00:29:27.488 ] 00:29:27.488 } 00:29:27.488 ] 00:29:27.488 10:18:12 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:27.488 10:18:12 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:29:27.488 10:18:12 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:29:27.488 10:18:12 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:29:27.488 EAL: No free 2048 kB hugepages reported on node 1 00:29:27.488 10:18:12 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=BTLJ9142051K1P0FGN 00:29:27.488 10:18:12 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:29:27.488 10:18:12 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:29:27.488 10:18:12 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:29:27.488 EAL: No free 2048 kB hugepages reported on node 1 00:29:27.488 10:18:12 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:29:27.488 10:18:12 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' BTLJ9142051K1P0FGN '!=' BTLJ9142051K1P0FGN ']' 00:29:27.488 10:18:12 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:29:27.488 10:18:12 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:27.488 10:18:12 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:27.488 10:18:12 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:27.488 10:18:12 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:27.488 10:18:12 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:29:27.488 10:18:12 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:29:27.488 10:18:12 nvmf_identify_passthru -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:27.488 10:18:12 nvmf_identify_passthru -- nvmf/common.sh@117 -- # sync 00:29:27.488 10:18:12 nvmf_identify_passthru -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:27.488 10:18:12 nvmf_identify_passthru -- nvmf/common.sh@120 -- # set +e 00:29:27.488 10:18:12 nvmf_identify_passthru -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:27.488 10:18:12 nvmf_identify_passthru -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:27.488 rmmod nvme_tcp 00:29:27.488 rmmod nvme_fabrics 00:29:27.488 rmmod nvme_keyring 00:29:27.488 10:18:12 nvmf_identify_passthru -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:27.488 10:18:12 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set -e 00:29:27.488 10:18:12 
nvmf_identify_passthru -- nvmf/common.sh@125 -- # return 0 00:29:27.488 10:18:12 nvmf_identify_passthru -- nvmf/common.sh@489 -- # '[' -n 559475 ']' 00:29:27.488 10:18:12 nvmf_identify_passthru -- nvmf/common.sh@490 -- # killprocess 559475 00:29:27.488 10:18:12 nvmf_identify_passthru -- common/autotest_common.sh@950 -- # '[' -z 559475 ']' 00:29:27.488 10:18:12 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # kill -0 559475 00:29:27.488 10:18:12 nvmf_identify_passthru -- common/autotest_common.sh@955 -- # uname 00:29:27.745 10:18:12 nvmf_identify_passthru -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:29:27.745 10:18:12 nvmf_identify_passthru -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 559475 00:29:27.745 10:18:12 nvmf_identify_passthru -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:29:27.745 10:18:12 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:29:27.745 10:18:12 nvmf_identify_passthru -- common/autotest_common.sh@968 -- # echo 'killing process with pid 559475' 00:29:27.745 killing process with pid 559475 00:29:27.745 10:18:12 nvmf_identify_passthru -- common/autotest_common.sh@969 -- # kill 559475 00:29:27.745 10:18:12 nvmf_identify_passthru -- common/autotest_common.sh@974 -- # wait 559475 00:29:29.643 10:18:14 nvmf_identify_passthru -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:29:29.643 10:18:14 nvmf_identify_passthru -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:29.643 10:18:14 nvmf_identify_passthru -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:29.643 10:18:14 nvmf_identify_passthru -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:29.644 10:18:14 nvmf_identify_passthru -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:29.644 10:18:14 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:29.644 10:18:14 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:29:29.644 10:18:14 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:31.544 10:18:16 nvmf_identify_passthru -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:31.544 00:29:31.544 real 0m18.755s 00:29:31.544 user 0m27.190s 00:29:31.544 sys 0m2.798s 00:29:31.544 10:18:16 nvmf_identify_passthru -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:31.544 10:18:16 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:29:31.544 ************************************ 00:29:31.544 END TEST nvmf_identify_passthru 00:29:31.544 ************************************ 00:29:31.544 10:18:16 -- spdk/autotest.sh@296 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:29:31.544 10:18:16 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:29:31.544 10:18:16 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:31.544 10:18:16 -- common/autotest_common.sh@10 -- # set +x 00:29:31.544 ************************************ 00:29:31.544 START TEST nvmf_dif 00:29:31.544 ************************************ 00:29:31.544 10:18:16 nvmf_dif -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:29:31.544 * Looking for test storage... 
00:29:31.544 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:31.544 10:18:16 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:31.544 10:18:16 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:29:31.544 10:18:16 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:31.544 10:18:16 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:31.544 10:18:16 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:31.544 10:18:16 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:31.544 10:18:16 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:31.544 10:18:16 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:31.544 10:18:16 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:31.544 10:18:16 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:31.544 10:18:16 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:31.544 10:18:16 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:31.544 10:18:16 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:29:31.544 10:18:16 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:29:31.544 10:18:16 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:31.544 10:18:16 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:31.544 10:18:16 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:31.544 10:18:16 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:31.545 10:18:16 nvmf_dif -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:31.545 10:18:16 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:31.545 10:18:16 nvmf_dif -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:31.545 10:18:16 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:31.545 10:18:16 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:31.545 10:18:16 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:31.545 10:18:16 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:31.545 10:18:16 nvmf_dif -- paths/export.sh@5 -- # 
export PATH 00:29:31.545 10:18:16 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:31.545 10:18:16 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:29:31.545 10:18:16 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:31.545 10:18:16 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:31.545 10:18:16 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:31.545 10:18:16 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:31.545 10:18:16 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:31.545 10:18:16 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:31.545 10:18:16 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:31.545 10:18:16 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:31.545 10:18:16 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:29:31.545 10:18:16 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:29:31.545 10:18:16 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:29:31.545 10:18:16 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:29:31.545 10:18:16 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:29:31.545 10:18:16 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:31.545 10:18:16 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:31.545 10:18:16 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:31.545 10:18:16 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:31.545 10:18:16 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:31.545 10:18:16 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:31.545 10:18:16 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:29:31.545 10:18:16 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:31.545 10:18:16 nvmf_dif -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:29:31.545 10:18:16 nvmf_dif -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:31.545 10:18:16 nvmf_dif -- nvmf/common.sh@285 -- # xtrace_disable 00:29:31.545 10:18:16 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:29:34.097 10:18:18 nvmf_dif -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:34.097 10:18:18 nvmf_dif -- nvmf/common.sh@291 -- # pci_devs=() 00:29:34.097 10:18:18 nvmf_dif -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:34.097 10:18:18 nvmf_dif -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:34.097 10:18:18 nvmf_dif -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:34.097 10:18:18 nvmf_dif -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:34.097 10:18:18 nvmf_dif -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:34.097 10:18:18 nvmf_dif -- nvmf/common.sh@295 -- # net_devs=() 00:29:34.097 10:18:18 nvmf_dif -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:34.097 10:18:18 nvmf_dif -- nvmf/common.sh@296 -- # e810=() 00:29:34.097 10:18:18 nvmf_dif -- nvmf/common.sh@296 -- # local -ga e810 00:29:34.097 10:18:18 nvmf_dif -- nvmf/common.sh@297 -- # x722=() 00:29:34.097 10:18:18 nvmf_dif -- nvmf/common.sh@297 -- # local -ga x722 00:29:34.097 10:18:18 nvmf_dif -- nvmf/common.sh@298 
-- # mlx=() 00:29:34.097 10:18:18 nvmf_dif -- nvmf/common.sh@298 -- # local -ga mlx 00:29:34.097 10:18:18 nvmf_dif -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:34.097 10:18:18 nvmf_dif -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:34.097 10:18:18 nvmf_dif -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:34.097 10:18:18 nvmf_dif -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:34.097 10:18:18 nvmf_dif -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:34.097 10:18:18 nvmf_dif -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:34.097 10:18:18 nvmf_dif -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:34.097 10:18:18 nvmf_dif -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:34.097 10:18:18 nvmf_dif -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:34.097 10:18:18 nvmf_dif -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:34.097 10:18:18 nvmf_dif -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:34.097 10:18:18 nvmf_dif -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:34.097 10:18:18 nvmf_dif -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:34.097 10:18:18 nvmf_dif -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:34.097 10:18:18 nvmf_dif -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:34.097 10:18:18 nvmf_dif -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:34.097 10:18:18 nvmf_dif -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:34.097 10:18:18 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:34.097 10:18:18 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:29:34.097 Found 0000:84:00.0 (0x8086 - 0x159b) 00:29:34.097 10:18:18 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:34.097 10:18:18 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:34.097 10:18:18 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:34.097 10:18:18 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:34.097 10:18:18 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:34.097 10:18:18 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:34.097 10:18:18 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:29:34.097 Found 0000:84:00.1 (0x8086 - 0x159b) 00:29:34.097 10:18:18 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:34.097 10:18:18 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:34.097 10:18:18 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:34.097 10:18:18 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:34.097 10:18:18 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:34.097 10:18:18 nvmf_dif -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:34.097 10:18:18 nvmf_dif -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:34.097 10:18:18 nvmf_dif -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:34.097 10:18:18 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:34.097 10:18:18 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:34.098 10:18:18 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:34.098 10:18:18 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 
00:29:34.098 10:18:18 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:34.098 10:18:18 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:34.098 10:18:18 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:34.098 10:18:18 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:29:34.098 Found net devices under 0000:84:00.0: cvl_0_0 00:29:34.098 10:18:18 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:34.098 10:18:18 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:34.098 10:18:18 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:34.098 10:18:18 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:34.098 10:18:18 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:34.098 10:18:18 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:34.098 10:18:18 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:34.098 10:18:18 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:34.098 10:18:18 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:29:34.098 Found net devices under 0000:84:00.1: cvl_0_1 00:29:34.098 10:18:18 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:34.098 10:18:18 nvmf_dif -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:29:34.098 10:18:18 nvmf_dif -- nvmf/common.sh@414 -- # is_hw=yes 00:29:34.098 10:18:18 nvmf_dif -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:29:34.098 10:18:18 nvmf_dif -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:29:34.098 10:18:18 nvmf_dif -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:29:34.098 10:18:18 nvmf_dif -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:34.098 10:18:18 nvmf_dif -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:34.098 10:18:18 nvmf_dif -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:34.098 10:18:18 nvmf_dif -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:34.098 10:18:18 nvmf_dif -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:34.098 10:18:18 nvmf_dif -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:34.098 10:18:18 nvmf_dif -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:29:34.098 10:18:18 nvmf_dif -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:34.098 10:18:18 nvmf_dif -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:34.098 10:18:18 nvmf_dif -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:29:34.098 10:18:18 nvmf_dif -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:29:34.098 10:18:18 nvmf_dif -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:29:34.098 10:18:18 nvmf_dif -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:34.098 10:18:18 nvmf_dif -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:34.098 10:18:18 nvmf_dif -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:34.098 10:18:18 nvmf_dif -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:29:34.098 10:18:18 nvmf_dif -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:34.098 10:18:18 nvmf_dif -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:34.098 10:18:18 nvmf_dif -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:34.098 10:18:18 
nvmf_dif -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:29:34.098 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:34.098 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.277 ms 00:29:34.098 00:29:34.098 --- 10.0.0.2 ping statistics --- 00:29:34.098 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:34.098 rtt min/avg/max/mdev = 0.277/0.277/0.277/0.000 ms 00:29:34.098 10:18:18 nvmf_dif -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:34.098 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:34.098 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.141 ms 00:29:34.098 00:29:34.098 --- 10.0.0.1 ping statistics --- 00:29:34.098 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:34.098 rtt min/avg/max/mdev = 0.141/0.141/0.141/0.000 ms 00:29:34.098 10:18:18 nvmf_dif -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:34.098 10:18:18 nvmf_dif -- nvmf/common.sh@422 -- # return 0 00:29:34.098 10:18:18 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:29:34.098 10:18:18 nvmf_dif -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:29:35.042 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:29:35.042 0000:82:00.0 (8086 0a54): Already using the vfio-pci driver 00:29:35.042 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:29:35.042 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:29:35.042 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:29:35.042 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:29:35.042 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:29:35.042 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:29:35.042 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:29:35.042 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:29:35.042 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:29:35.042 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:29:35.042 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:29:35.042 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:29:35.042 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:29:35.042 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:29:35.042 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:29:35.300 10:18:20 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:35.300 10:18:20 nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:35.300 10:18:20 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:35.300 10:18:20 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:35.300 10:18:20 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:35.300 10:18:20 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:35.300 10:18:20 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:29:35.300 10:18:20 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:29:35.300 10:18:20 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:35.301 10:18:20 nvmf_dif -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:35.301 10:18:20 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:29:35.301 10:18:20 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=562973 00:29:35.301 10:18:20 nvmf_dif -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:29:35.301 10:18:20 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 562973 00:29:35.301 10:18:20 nvmf_dif -- common/autotest_common.sh@831 -- # '[' -z 562973 ']' 00:29:35.301 10:18:20 nvmf_dif -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:35.301 10:18:20 nvmf_dif -- common/autotest_common.sh@836 -- # local max_retries=100 00:29:35.301 10:18:20 nvmf_dif -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:35.301 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:35.301 10:18:20 nvmf_dif -- common/autotest_common.sh@840 -- # xtrace_disable 00:29:35.301 10:18:20 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:29:35.301 [2024-07-25 10:18:20.365365] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:29:35.301 [2024-07-25 10:18:20.365461] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:35.301 EAL: No free 2048 kB hugepages reported on node 1 00:29:35.301 [2024-07-25 10:18:20.440443] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:35.559 [2024-07-25 10:18:20.561834] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:35.559 [2024-07-25 10:18:20.561896] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:35.559 [2024-07-25 10:18:20.561913] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:35.559 [2024-07-25 10:18:20.561926] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:35.559 [2024-07-25 10:18:20.561937] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
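The nvmf_tgt instance launched above runs inside the cvl_0_0_ns_spdk network namespace that nvmf_tcp_init wired up earlier in this trace (target port cvl_0_0 moved into the namespace at 10.0.0.2, initiator port cvl_0_1 left in the root namespace at 10.0.0.1). A minimal consolidated sketch of that wiring, assuming the same driver-assigned interface names and using $SPDK_DIR as a placeholder for the checkout path:

  ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1                # start from clean addresses
  ip netns add cvl_0_0_ns_spdk                                        # target-side namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # move the target port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address (root ns)
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # admit NVMe/TCP traffic
  ping -c 1 10.0.0.2                                                  # initiator -> target check
  ip netns exec cvl_0_0_ns_spdk "$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF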
00:29:35.559 [2024-07-25 10:18:20.561976] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:35.559 10:18:20 nvmf_dif -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:29:35.559 10:18:20 nvmf_dif -- common/autotest_common.sh@864 -- # return 0 00:29:35.559 10:18:20 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:35.559 10:18:20 nvmf_dif -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:35.559 10:18:20 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:29:35.559 10:18:20 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:35.559 10:18:20 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:29:35.559 10:18:20 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:29:35.559 10:18:20 nvmf_dif -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:35.559 10:18:20 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:29:35.559 [2024-07-25 10:18:20.716759] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:35.559 10:18:20 nvmf_dif -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:35.559 10:18:20 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:29:35.559 10:18:20 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:29:35.559 10:18:20 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:35.559 10:18:20 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:29:35.827 ************************************ 00:29:35.827 START TEST fio_dif_1_default 00:29:35.827 ************************************ 00:29:35.827 10:18:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1125 -- # fio_dif_1 00:29:35.827 10:18:20 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:29:35.827 10:18:20 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:29:35.827 10:18:20 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:29:35.827 10:18:20 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:29:35.827 10:18:20 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:29:35.827 10:18:20 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:29:35.827 10:18:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:35.827 10:18:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:29:35.827 bdev_null0 00:29:35.827 10:18:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:35.827 10:18:20 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:29:35.827 10:18:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:35.827 10:18:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:29:35.827 10:18:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:35.827 10:18:20 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:29:35.827 10:18:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:35.827 10:18:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:29:35.827 10:18:20 nvmf_dif.fio_dif_1_default -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:35.827 10:18:20 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:35.827 10:18:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:35.827 10:18:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:29:35.827 [2024-07-25 10:18:20.773045] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:35.828 10:18:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:35.828 10:18:20 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:29:35.828 10:18:20 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:29:35.828 10:18:20 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:29:35.828 10:18:20 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:29:35.828 10:18:20 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:29:35.828 10:18:20 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:35.828 10:18:20 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:35.828 { 00:29:35.828 "params": { 00:29:35.828 "name": "Nvme$subsystem", 00:29:35.828 "trtype": "$TEST_TRANSPORT", 00:29:35.828 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:35.828 "adrfam": "ipv4", 00:29:35.828 "trsvcid": "$NVMF_PORT", 00:29:35.828 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:35.828 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:35.828 "hdgst": ${hdgst:-false}, 00:29:35.828 "ddgst": ${ddgst:-false} 00:29:35.828 }, 00:29:35.828 "method": "bdev_nvme_attach_controller" 00:29:35.828 } 00:29:35.828 EOF 00:29:35.828 )") 00:29:35.828 10:18:20 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:35.828 10:18:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:35.828 10:18:20 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:29:35.828 10:18:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:29:35.828 10:18:20 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:29:35.828 10:18:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:35.828 10:18:20 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:29:35.828 10:18:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local sanitizers 00:29:35.828 10:18:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:29:35.828 10:18:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # shift 00:29:35.828 10:18:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local asan_lib= 00:29:35.828 10:18:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:29:35.828 10:18:20 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 00:29:35.828 10:18:20 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:29:35.828 10:18:20 nvmf_dif.fio_dif_1_default 
-- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:29:35.828 10:18:20 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:29:35.828 10:18:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libasan 00:29:35.828 10:18:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:29:35.828 10:18:20 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 00:29:35.828 10:18:20 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:29:35.828 10:18:20 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:29:35.828 "params": { 00:29:35.828 "name": "Nvme0", 00:29:35.828 "trtype": "tcp", 00:29:35.828 "traddr": "10.0.0.2", 00:29:35.828 "adrfam": "ipv4", 00:29:35.828 "trsvcid": "4420", 00:29:35.828 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:35.828 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:35.828 "hdgst": false, 00:29:35.828 "ddgst": false 00:29:35.828 }, 00:29:35.828 "method": "bdev_nvme_attach_controller" 00:29:35.828 }' 00:29:35.828 10:18:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:29:35.828 10:18:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:29:35.828 10:18:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:29:35.828 10:18:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:29:35.828 10:18:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:29:35.828 10:18:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:29:35.828 10:18:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:29:35.828 10:18:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:29:35.828 10:18:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:29:35.828 10:18:20 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:36.089 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:29:36.089 fio-3.35 00:29:36.089 Starting 1 thread 00:29:36.089 EAL: No free 2048 kB hugepages reported on node 1 00:29:48.284 00:29:48.284 filename0: (groupid=0, jobs=1): err= 0: pid=563198: Thu Jul 25 10:18:31 2024 00:29:48.284 read: IOPS=96, BW=384KiB/s (393kB/s)(3856KiB/10037msec) 00:29:48.284 slat (usec): min=6, max=126, avg=11.11, stdev= 4.80 00:29:48.284 clat (usec): min=40884, max=47028, avg=41609.03, stdev=598.55 00:29:48.284 lat (usec): min=40892, max=47061, avg=41620.14, stdev=598.24 00:29:48.284 clat percentiles (usec): 00:29:48.284 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:29:48.284 | 30.00th=[41157], 40.00th=[41681], 50.00th=[42206], 60.00th=[42206], 00:29:48.284 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:29:48.284 | 99.00th=[42206], 99.50th=[42206], 99.90th=[46924], 99.95th=[46924], 00:29:48.284 | 99.99th=[46924] 00:29:48.284 bw ( KiB/s): min= 352, max= 416, per=99.95%, avg=384.00, stdev=10.38, samples=20 00:29:48.284 iops : min= 88, max= 104, avg=96.00, stdev= 2.60, samples=20 00:29:48.284 lat (msec) 
: 50=100.00% 00:29:48.284 cpu : usr=89.98%, sys=9.69%, ctx=23, majf=0, minf=195 00:29:48.284 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:48.284 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:48.284 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:48.284 issued rwts: total=964,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:48.284 latency : target=0, window=0, percentile=100.00%, depth=4 00:29:48.284 00:29:48.284 Run status group 0 (all jobs): 00:29:48.284 READ: bw=384KiB/s (393kB/s), 384KiB/s-384KiB/s (393kB/s-393kB/s), io=3856KiB (3949kB), run=10037-10037msec 00:29:48.284 10:18:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:29:48.284 10:18:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:29:48.284 10:18:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:29:48.284 10:18:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:29:48.284 10:18:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:29:48.284 10:18:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:29:48.284 10:18:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:48.284 10:18:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:29:48.284 10:18:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:48.284 10:18:31 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:29:48.284 10:18:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:48.284 10:18:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:29:48.284 10:18:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:48.284 00:29:48.284 real 0m11.191s 00:29:48.284 user 0m10.195s 00:29:48.284 sys 0m1.284s 00:29:48.284 10:18:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:48.284 10:18:31 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:29:48.284 ************************************ 00:29:48.284 END TEST fio_dif_1_default 00:29:48.284 ************************************ 00:29:48.284 10:18:31 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:29:48.284 10:18:31 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:29:48.284 10:18:31 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:48.284 10:18:31 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:29:48.284 ************************************ 00:29:48.284 START TEST fio_dif_1_multi_subsystems 00:29:48.284 ************************************ 00:29:48.284 10:18:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1125 -- # fio_dif_1_multi_subsystems 00:29:48.284 10:18:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:29:48.284 10:18:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:29:48.284 10:18:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:29:48.284 10:18:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:29:48.284 10:18:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:29:48.284 10:18:31 
nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:29:48.284 10:18:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:29:48.284 10:18:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:48.284 10:18:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:29:48.284 bdev_null0 00:29:48.284 10:18:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:48.284 10:18:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:29:48.284 10:18:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:48.284 10:18:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:29:48.284 10:18:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:48.284 10:18:31 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:29:48.284 10:18:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:48.284 10:18:31 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:29:48.284 10:18:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:48.284 10:18:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:48.284 10:18:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:48.284 10:18:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:29:48.284 [2024-07-25 10:18:32.011777] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:48.284 10:18:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:48.284 10:18:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:29:48.284 10:18:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:29:48.284 10:18:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:29:48.285 10:18:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:29:48.285 10:18:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:48.285 10:18:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:29:48.285 bdev_null1 00:29:48.285 10:18:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:48.285 10:18:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:29:48.285 10:18:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:48.285 10:18:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:29:48.285 10:18:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:48.285 10:18:32 nvmf_dif.fio_dif_1_multi_subsystems -- 
target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:29:48.285 10:18:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:48.285 10:18:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:29:48.285 10:18:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:48.285 10:18:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:48.285 10:18:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:48.285 10:18:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:29:48.285 10:18:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:48.285 10:18:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:29:48.285 10:18:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:29:48.285 10:18:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:29:48.285 10:18:32 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:29:48.285 10:18:32 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:29:48.285 10:18:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:48.285 10:18:32 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:48.285 10:18:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:48.285 10:18:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:29:48.285 10:18:32 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:48.285 { 00:29:48.285 "params": { 00:29:48.285 "name": "Nvme$subsystem", 00:29:48.285 "trtype": "$TEST_TRANSPORT", 00:29:48.285 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:48.285 "adrfam": "ipv4", 00:29:48.285 "trsvcid": "$NVMF_PORT", 00:29:48.285 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:48.285 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:48.285 "hdgst": ${hdgst:-false}, 00:29:48.285 "ddgst": ${ddgst:-false} 00:29:48.285 }, 00:29:48.285 "method": "bdev_nvme_attach_controller" 00:29:48.285 } 00:29:48.285 EOF 00:29:48.285 )") 00:29:48.285 10:18:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:29:48.285 10:18:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:29:48.285 10:18:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:48.285 10:18:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:29:48.285 10:18:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local sanitizers 00:29:48.285 10:18:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:29:48.285 10:18:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # shift 00:29:48.285 10:18:32 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local asan_lib= 00:29:48.285 10:18:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:29:48.285 10:18:32 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:29:48.285 10:18:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:29:48.285 10:18:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:29:48.285 10:18:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libasan 00:29:48.285 10:18:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:29:48.285 10:18:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:29:48.285 10:18:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:29:48.285 10:18:32 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:48.285 10:18:32 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:48.285 { 00:29:48.285 "params": { 00:29:48.285 "name": "Nvme$subsystem", 00:29:48.285 "trtype": "$TEST_TRANSPORT", 00:29:48.285 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:48.285 "adrfam": "ipv4", 00:29:48.285 "trsvcid": "$NVMF_PORT", 00:29:48.285 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:48.285 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:48.285 "hdgst": ${hdgst:-false}, 00:29:48.285 "ddgst": ${ddgst:-false} 00:29:48.285 }, 00:29:48.285 "method": "bdev_nvme_attach_controller" 00:29:48.285 } 00:29:48.285 EOF 00:29:48.285 )") 00:29:48.285 10:18:32 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:29:48.285 10:18:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:29:48.285 10:18:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:29:48.285 10:18:32 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 
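The jq call above validates the per-subsystem JSON that gen_nvmf_target_json assembled; the printf that follows emits it for the fio bdev plugin, one bdev_nvme_attach_controller entry per subsystem. The plugin consumes that JSON directly, but as a rough equivalence the same two attachments could be issued against a long-running SPDK app over RPC — a hedged sketch, assuming the default rpc.py socket:

  scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 \
      -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0
  scripts/rpc.py bdev_nvme_attach_controller -b Nvme1 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1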
00:29:48.285 10:18:32 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:29:48.285 10:18:32 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:29:48.285 "params": { 00:29:48.285 "name": "Nvme0", 00:29:48.285 "trtype": "tcp", 00:29:48.285 "traddr": "10.0.0.2", 00:29:48.285 "adrfam": "ipv4", 00:29:48.285 "trsvcid": "4420", 00:29:48.285 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:48.285 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:48.285 "hdgst": false, 00:29:48.285 "ddgst": false 00:29:48.285 }, 00:29:48.285 "method": "bdev_nvme_attach_controller" 00:29:48.285 },{ 00:29:48.285 "params": { 00:29:48.285 "name": "Nvme1", 00:29:48.285 "trtype": "tcp", 00:29:48.285 "traddr": "10.0.0.2", 00:29:48.285 "adrfam": "ipv4", 00:29:48.285 "trsvcid": "4420", 00:29:48.285 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:48.285 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:48.285 "hdgst": false, 00:29:48.285 "ddgst": false 00:29:48.285 }, 00:29:48.285 "method": "bdev_nvme_attach_controller" 00:29:48.285 }' 00:29:48.285 10:18:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:29:48.285 10:18:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:29:48.285 10:18:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:29:48.285 10:18:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:29:48.285 10:18:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:29:48.285 10:18:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:29:48.285 10:18:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:29:48.285 10:18:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:29:48.285 10:18:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:29:48.285 10:18:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:48.285 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:29:48.285 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:29:48.285 fio-3.35 00:29:48.285 Starting 2 threads 00:29:48.285 EAL: No free 2048 kB hugepages reported on node 1 00:29:58.242 00:29:58.242 filename0: (groupid=0, jobs=1): err= 0: pid=564606: Thu Jul 25 10:18:43 2024 00:29:58.242 read: IOPS=96, BW=387KiB/s (397kB/s)(3888KiB/10036msec) 00:29:58.242 slat (nsec): min=7236, max=26728, avg=10530.59, stdev=3298.28 00:29:58.242 clat (usec): min=40843, max=46228, avg=41264.89, stdev=540.54 00:29:58.242 lat (usec): min=40851, max=46243, avg=41275.42, stdev=540.70 00:29:58.242 clat percentiles (usec): 00:29:58.242 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:29:58.242 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:29:58.242 | 70.00th=[41157], 80.00th=[41681], 90.00th=[42206], 95.00th=[42206], 00:29:58.242 | 99.00th=[42206], 99.50th=[42206], 99.90th=[46400], 99.95th=[46400], 00:29:58.242 | 99.99th=[46400] 
00:29:58.242 bw ( KiB/s): min= 352, max= 416, per=50.26%, avg=387.20, stdev=14.31, samples=20 00:29:58.242 iops : min= 88, max= 104, avg=96.80, stdev= 3.58, samples=20 00:29:58.242 lat (msec) : 50=100.00% 00:29:58.242 cpu : usr=94.93%, sys=4.76%, ctx=14, majf=0, minf=90 00:29:58.242 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:58.242 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:58.242 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:58.242 issued rwts: total=972,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:58.242 latency : target=0, window=0, percentile=100.00%, depth=4 00:29:58.242 filename1: (groupid=0, jobs=1): err= 0: pid=564607: Thu Jul 25 10:18:43 2024 00:29:58.242 read: IOPS=95, BW=384KiB/s (393kB/s)(3840KiB/10001msec) 00:29:58.242 slat (nsec): min=6910, max=42857, avg=10519.72, stdev=3477.42 00:29:58.242 clat (usec): min=40910, max=46194, avg=41635.00, stdev=552.98 00:29:58.242 lat (usec): min=40919, max=46210, avg=41645.52, stdev=553.23 00:29:58.242 clat percentiles (usec): 00:29:58.242 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:29:58.242 | 30.00th=[41157], 40.00th=[41681], 50.00th=[41681], 60.00th=[42206], 00:29:58.242 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:29:58.242 | 99.00th=[42206], 99.50th=[42206], 99.90th=[46400], 99.95th=[46400], 00:29:58.242 | 99.99th=[46400] 00:29:58.242 bw ( KiB/s): min= 352, max= 416, per=49.87%, avg=384.00, stdev=10.67, samples=19 00:29:58.242 iops : min= 88, max= 104, avg=96.00, stdev= 2.67, samples=19 00:29:58.242 lat (msec) : 50=100.00% 00:29:58.242 cpu : usr=94.85%, sys=4.85%, ctx=19, majf=0, minf=149 00:29:58.242 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:58.242 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:58.242 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:58.242 issued rwts: total=960,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:58.242 latency : target=0, window=0, percentile=100.00%, depth=4 00:29:58.242 00:29:58.242 Run status group 0 (all jobs): 00:29:58.242 READ: bw=770KiB/s (789kB/s), 384KiB/s-387KiB/s (393kB/s-397kB/s), io=7728KiB (7913kB), run=10001-10036msec 00:29:58.242 10:18:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:29:58.242 10:18:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:29:58.242 10:18:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:29:58.242 10:18:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:29:58.242 10:18:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:29:58.242 10:18:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:29:58.242 10:18:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:58.242 10:18:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:29:58.242 10:18:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:58.242 10:18:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:29:58.242 10:18:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:58.242 10:18:43 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:29:58.242 10:18:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:58.243 10:18:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:29:58.243 10:18:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:29:58.243 10:18:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:29:58.243 10:18:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:58.243 10:18:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:58.243 10:18:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:29:58.243 10:18:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:58.243 10:18:43 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:29:58.243 10:18:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:58.243 10:18:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:29:58.500 10:18:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:58.500 00:29:58.500 real 0m11.429s 00:29:58.500 user 0m20.572s 00:29:58.500 sys 0m1.268s 00:29:58.500 10:18:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1126 -- # xtrace_disable 00:29:58.500 10:18:43 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:29:58.500 ************************************ 00:29:58.500 END TEST fio_dif_1_multi_subsystems 00:29:58.500 ************************************ 00:29:58.500 10:18:43 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:29:58.500 10:18:43 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:29:58.500 10:18:43 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:29:58.500 10:18:43 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:29:58.500 ************************************ 00:29:58.500 START TEST fio_dif_rand_params 00:29:58.500 ************************************ 00:29:58.500 10:18:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1125 -- # fio_dif_rand_params 00:29:58.500 10:18:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:29:58.500 10:18:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:29:58.500 10:18:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:29:58.500 10:18:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:29:58.500 10:18:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:29:58.500 10:18:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:29:58.500 10:18:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:29:58.500 10:18:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:29:58.500 10:18:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:29:58.500 10:18:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:29:58.500 10:18:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:29:58.500 10:18:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local 
sub_id=0 00:29:58.500 10:18:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:29:58.500 10:18:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:58.500 10:18:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:58.500 bdev_null0 00:29:58.501 10:18:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:58.501 10:18:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:29:58.501 10:18:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:58.501 10:18:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:58.501 10:18:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:58.501 10:18:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:29:58.501 10:18:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:58.501 10:18:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:58.501 10:18:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:58.501 10:18:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:58.501 10:18:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:58.501 10:18:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:58.501 [2024-07-25 10:18:43.516626] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:58.501 10:18:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:58.501 10:18:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:29:58.501 10:18:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:29:58.501 10:18:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:29:58.501 10:18:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:29:58.501 10:18:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:29:58.501 10:18:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:58.501 10:18:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:58.501 10:18:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:58.501 { 00:29:58.501 "params": { 00:29:58.501 "name": "Nvme$subsystem", 00:29:58.501 "trtype": "$TEST_TRANSPORT", 00:29:58.501 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:58.501 "adrfam": "ipv4", 00:29:58.501 "trsvcid": "$NVMF_PORT", 00:29:58.501 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:58.501 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:58.501 "hdgst": ${hdgst:-false}, 00:29:58.501 "ddgst": ${ddgst:-false} 00:29:58.501 }, 00:29:58.501 "method": "bdev_nvme_attach_controller" 00:29:58.501 } 00:29:58.501 EOF 00:29:58.501 )") 00:29:58.501 10:18:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:29:58.501 10:18:43 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:58.501 10:18:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:29:58.501 10:18:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:29:58.501 10:18:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:29:58.501 10:18:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:58.501 10:18:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:29:58.501 10:18:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:29:58.501 10:18:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:29:58.501 10:18:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:29:58.501 10:18:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:29:58.501 10:18:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:29:58.501 10:18:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:29:58.501 10:18:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:29:58.501 10:18:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:29:58.501 10:18:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:29:58.501 10:18:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:29:58.501 10:18:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
00:29:58.501 10:18:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:29:58.501 10:18:43 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:29:58.501 "params": { 00:29:58.501 "name": "Nvme0", 00:29:58.501 "trtype": "tcp", 00:29:58.501 "traddr": "10.0.0.2", 00:29:58.501 "adrfam": "ipv4", 00:29:58.501 "trsvcid": "4420", 00:29:58.501 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:58.501 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:58.501 "hdgst": false, 00:29:58.501 "ddgst": false 00:29:58.501 }, 00:29:58.501 "method": "bdev_nvme_attach_controller" 00:29:58.501 }' 00:29:58.501 10:18:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:29:58.501 10:18:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:29:58.501 10:18:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:29:58.501 10:18:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:29:58.501 10:18:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:29:58.501 10:18:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:29:58.501 10:18:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:29:58.501 10:18:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:29:58.501 10:18:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:29:58.501 10:18:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:58.758 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:29:58.758 ... 
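fio received the JSON bdev config over /dev/fd/62 and the job file over /dev/fd/61 in the invocation above. A standalone reproduction would look roughly like the following — the bdev name Nvme0n1 and the file names are assumptions; gen_fio_conf derives the real values from the null bdev and subsystem created above:

  cat > dif.job <<'EOF'
  [global]
  ioengine=spdk_bdev          ; SPDK bdev plugin instead of the kernel block layer
  spdk_json_conf=./bdev.json  ; bdev_nvme_attach_controller config, as printed above
  thread=1                    ; the SPDK plugin requires threaded mode
  [filename0]
  filename=Nvme0n1            ; assumed bdev name for controller Nvme0, namespace 1
  rw=randread
  bs=128k
  iodepth=3
  numjobs=3
  runtime=5
  time_based=1
  EOF
  LD_PRELOAD="$SPDK_DIR/build/fio/spdk_bdev" /usr/src/fio/fio dif.job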
00:29:58.758 fio-3.35 00:29:58.758 Starting 3 threads 00:29:58.758 EAL: No free 2048 kB hugepages reported on node 1 00:30:05.314 00:30:05.314 filename0: (groupid=0, jobs=1): err= 0: pid=565992: Thu Jul 25 10:18:49 2024 00:30:05.314 read: IOPS=187, BW=23.4MiB/s (24.5MB/s)(118MiB/5046msec) 00:30:05.314 slat (nsec): min=4959, max=77269, avg=14534.42, stdev=2903.68 00:30:05.314 clat (usec): min=5042, max=58744, avg=15967.65, stdev=13361.49 00:30:05.314 lat (usec): min=5056, max=58761, avg=15982.18, stdev=13361.62 00:30:05.314 clat percentiles (usec): 00:30:05.314 | 1.00th=[ 6128], 5.00th=[ 6456], 10.00th=[ 7177], 20.00th=[ 8979], 00:30:05.314 | 30.00th=[ 9896], 40.00th=[10421], 50.00th=[11731], 60.00th=[13042], 00:30:05.314 | 70.00th=[13960], 80.00th=[15664], 90.00th=[49546], 95.00th=[52691], 00:30:05.314 | 99.00th=[55837], 99.50th=[56886], 99.90th=[58983], 99.95th=[58983], 00:30:05.314 | 99.99th=[58983] 00:30:05.314 bw ( KiB/s): min=17664, max=32256, per=32.46%, avg=24093.80, stdev=4695.10, samples=10 00:30:05.314 iops : min= 138, max= 252, avg=188.20, stdev=36.70, samples=10 00:30:05.314 lat (msec) : 10=32.94%, 20=55.72%, 50=2.44%, 100=8.90% 00:30:05.314 cpu : usr=92.61%, sys=6.94%, ctx=12, majf=0, minf=153 00:30:05.314 IO depths : 1=1.0%, 2=99.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:05.314 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:05.314 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:05.314 issued rwts: total=944,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:05.314 latency : target=0, window=0, percentile=100.00%, depth=3 00:30:05.314 filename0: (groupid=0, jobs=1): err= 0: pid=565993: Thu Jul 25 10:18:49 2024 00:30:05.314 read: IOPS=197, BW=24.6MiB/s (25.8MB/s)(123MiB/5004msec) 00:30:05.314 slat (nsec): min=4975, max=26781, avg=14222.62, stdev=1870.85 00:30:05.314 clat (usec): min=5889, max=90158, avg=15204.60, stdev=12774.65 00:30:05.314 lat (usec): min=5903, max=90173, avg=15218.82, stdev=12774.62 00:30:05.314 clat percentiles (usec): 00:30:05.314 | 1.00th=[ 6194], 5.00th=[ 7439], 10.00th=[ 8291], 20.00th=[ 9241], 00:30:05.314 | 30.00th=[ 9634], 40.00th=[10290], 50.00th=[11207], 60.00th=[12125], 00:30:05.314 | 70.00th=[13173], 80.00th=[14222], 90.00th=[19006], 95.00th=[51119], 00:30:05.314 | 99.00th=[54264], 99.50th=[57934], 99.90th=[89654], 99.95th=[89654], 00:30:05.314 | 99.99th=[89654] 00:30:05.314 bw ( KiB/s): min=19968, max=34560, per=33.90%, avg=25164.80, stdev=4940.57, samples=10 00:30:05.314 iops : min= 156, max= 270, avg=196.60, stdev=38.60, samples=10 00:30:05.314 lat (msec) : 10=34.89%, 20=55.38%, 50=1.62%, 100=8.11% 00:30:05.314 cpu : usr=91.84%, sys=7.70%, ctx=9, majf=0, minf=53 00:30:05.314 IO depths : 1=0.9%, 2=99.1%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:05.314 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:05.314 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:05.314 issued rwts: total=986,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:05.314 latency : target=0, window=0, percentile=100.00%, depth=3 00:30:05.314 filename0: (groupid=0, jobs=1): err= 0: pid=565994: Thu Jul 25 10:18:49 2024 00:30:05.314 read: IOPS=199, BW=24.9MiB/s (26.1MB/s)(125MiB/5004msec) 00:30:05.314 slat (nsec): min=4837, max=22145, avg=14096.87, stdev=2145.48 00:30:05.314 clat (usec): min=5528, max=55448, avg=15050.02, stdev=12054.51 00:30:05.314 lat (usec): min=5542, max=55464, avg=15064.11, stdev=12054.61 00:30:05.314 clat percentiles (usec): 
00:30:05.314 | 1.00th=[ 5997], 5.00th=[ 6587], 10.00th=[ 7635], 20.00th=[ 9110], 00:30:05.314 | 30.00th=[ 9503], 40.00th=[10421], 50.00th=[11338], 60.00th=[12649], 00:30:05.314 | 70.00th=[13829], 80.00th=[15139], 90.00th=[18220], 95.00th=[51119], 00:30:05.314 | 99.00th=[53740], 99.50th=[54789], 99.90th=[55313], 99.95th=[55313], 00:30:05.314 | 99.99th=[55313] 00:30:05.314 bw ( KiB/s): min=20736, max=32256, per=34.25%, avg=25420.80, stdev=3801.02, samples=10 00:30:05.314 iops : min= 162, max= 252, avg=198.60, stdev=29.70, samples=10 00:30:05.314 lat (msec) : 10=34.94%, 20=55.72%, 50=2.11%, 100=7.23% 00:30:05.314 cpu : usr=92.26%, sys=7.28%, ctx=12, majf=0, minf=106 00:30:05.314 IO depths : 1=2.1%, 2=97.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:05.314 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:05.314 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:05.314 issued rwts: total=996,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:05.314 latency : target=0, window=0, percentile=100.00%, depth=3 00:30:05.314 00:30:05.314 Run status group 0 (all jobs): 00:30:05.314 READ: bw=72.5MiB/s (76.0MB/s), 23.4MiB/s-24.9MiB/s (24.5MB/s-26.1MB/s), io=366MiB (384MB), run=5004-5046msec 00:30:05.315 10:18:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:30:05.315 10:18:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:30:05.315 10:18:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:30:05.315 10:18:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:30:05.315 10:18:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:30:05.315 10:18:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:05.315 10:18:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:05.315 10:18:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:05.315 10:18:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:05.315 10:18:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:30:05.315 10:18:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:05.315 10:18:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:05.315 10:18:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:05.315 10:18:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:30:05.315 10:18:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:30:05.315 10:18:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:30:05.315 10:18:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:30:05.315 10:18:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:30:05.315 10:18:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:30:05.315 10:18:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:30:05.315 10:18:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:30:05.315 10:18:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:30:05.315 10:18:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:30:05.315 10:18:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 
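The run status line above closes the first pass (three 128 KiB randread jobs, 72.5 MiB/s aggregate); target/dif.sh then tears down subsystem 0 and, with NULL_DIF=2, provisions three targets whose namespaces carry DIF type 2 protection metadata. Each create_subsystem call in the trace that follows reduces to four RPCs; rpc_cmd is a thin wrapper around scripts/rpc.py, so the equivalent standalone sequence for subsystem 0 (flags verbatim from the trace) would be:

    sub=0
    # 64 MiB null bdev, 512-byte blocks, 16 bytes of per-block metadata, DIF type 2
    scripts/rpc.py bdev_null_create "bdev_null$sub" 64 512 --md-size 16 --dif-type 2
    # NVMe-oF subsystem that any host may connect to
    scripts/rpc.py nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$sub" \
      --serial-number "53313233-$sub" --allow-any-host
    # expose the bdev as a namespace and add a TCP listener (addresses as in this log)
    scripts/rpc.py nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$sub" "bdev_null$sub"
    scripts/rpc.py nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$sub" \
      -t tcp -a 10.0.0.2 -s 4420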
00:30:05.315 10:18:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:30:05.315 10:18:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:05.315 10:18:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:05.315 bdev_null0 00:30:05.315 10:18:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:05.315 10:18:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:30:05.315 10:18:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:05.315 10:18:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:05.315 10:18:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:05.315 10:18:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:30:05.315 10:18:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:05.315 10:18:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:05.315 10:18:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:05.315 10:18:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:05.315 10:18:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:05.315 10:18:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:05.315 [2024-07-25 10:18:49.821676] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:05.315 10:18:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:05.315 10:18:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:30:05.315 10:18:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:30:05.315 10:18:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:30:05.315 10:18:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:30:05.315 10:18:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:05.315 10:18:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:05.315 bdev_null1 00:30:05.315 10:18:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:05.315 10:18:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:30:05.315 10:18:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:05.315 10:18:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:05.315 10:18:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:05.315 10:18:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:30:05.315 10:18:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:05.315 10:18:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 
-- # set +x 00:30:05.315 10:18:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:05.315 10:18:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:05.315 10:18:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:05.315 10:18:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:05.315 10:18:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:05.315 10:18:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:30:05.315 10:18:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:30:05.315 10:18:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:30:05.315 10:18:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:30:05.315 10:18:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:05.315 10:18:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:05.315 bdev_null2 00:30:05.315 10:18:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:05.315 10:18:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:30:05.315 10:18:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:05.315 10:18:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:05.315 10:18:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:05.315 10:18:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:30:05.315 10:18:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:05.315 10:18:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:05.315 10:18:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:05.315 10:18:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:30:05.315 10:18:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:05.315 10:18:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:05.315 10:18:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:05.315 10:18:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:30:05.315 10:18:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:30:05.315 10:18:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:30:05.316 10:18:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:30:05.316 10:18:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:30:05.316 10:18:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:05.316 10:18:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:05.316 { 00:30:05.316 "params": { 00:30:05.316 "name": "Nvme$subsystem", 00:30:05.316 "trtype": "$TEST_TRANSPORT", 00:30:05.316 
"traddr": "$NVMF_FIRST_TARGET_IP", 00:30:05.316 "adrfam": "ipv4", 00:30:05.316 "trsvcid": "$NVMF_PORT", 00:30:05.316 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:05.316 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:05.316 "hdgst": ${hdgst:-false}, 00:30:05.316 "ddgst": ${ddgst:-false} 00:30:05.316 }, 00:30:05.316 "method": "bdev_nvme_attach_controller" 00:30:05.316 } 00:30:05.316 EOF 00:30:05.316 )") 00:30:05.316 10:18:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:05.316 10:18:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:05.316 10:18:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:30:05.316 10:18:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:30:05.316 10:18:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:30:05.316 10:18:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:05.316 10:18:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:30:05.316 10:18:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:30:05.316 10:18:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:05.316 10:18:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:30:05.316 10:18:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:30:05.316 10:18:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:05.316 10:18:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:30:05.316 10:18:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:05.316 10:18:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:30:05.316 10:18:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:30:05.316 10:18:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:30:05.316 10:18:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:05.316 10:18:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:30:05.316 10:18:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:05.316 10:18:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:05.316 { 00:30:05.316 "params": { 00:30:05.316 "name": "Nvme$subsystem", 00:30:05.316 "trtype": "$TEST_TRANSPORT", 00:30:05.316 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:05.316 "adrfam": "ipv4", 00:30:05.316 "trsvcid": "$NVMF_PORT", 00:30:05.316 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:05.316 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:05.316 "hdgst": ${hdgst:-false}, 00:30:05.316 "ddgst": ${ddgst:-false} 00:30:05.316 }, 00:30:05.316 "method": "bdev_nvme_attach_controller" 00:30:05.316 } 00:30:05.316 EOF 00:30:05.316 )") 00:30:05.316 10:18:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:30:05.316 10:18:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- 
# (( file++ )) 00:30:05.316 10:18:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:30:05.316 10:18:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:30:05.316 10:18:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:05.316 10:18:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:05.316 { 00:30:05.316 "params": { 00:30:05.316 "name": "Nvme$subsystem", 00:30:05.316 "trtype": "$TEST_TRANSPORT", 00:30:05.316 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:05.316 "adrfam": "ipv4", 00:30:05.316 "trsvcid": "$NVMF_PORT", 00:30:05.316 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:05.316 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:05.316 "hdgst": ${hdgst:-false}, 00:30:05.316 "ddgst": ${ddgst:-false} 00:30:05.316 }, 00:30:05.316 "method": "bdev_nvme_attach_controller" 00:30:05.316 } 00:30:05.316 EOF 00:30:05.316 )") 00:30:05.316 10:18:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:30:05.316 10:18:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:30:05.316 10:18:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:30:05.316 10:18:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:30:05.316 10:18:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:30:05.316 10:18:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:30:05.316 "params": { 00:30:05.316 "name": "Nvme0", 00:30:05.316 "trtype": "tcp", 00:30:05.316 "traddr": "10.0.0.2", 00:30:05.316 "adrfam": "ipv4", 00:30:05.316 "trsvcid": "4420", 00:30:05.316 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:05.316 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:05.316 "hdgst": false, 00:30:05.316 "ddgst": false 00:30:05.316 }, 00:30:05.316 "method": "bdev_nvme_attach_controller" 00:30:05.316 },{ 00:30:05.316 "params": { 00:30:05.316 "name": "Nvme1", 00:30:05.316 "trtype": "tcp", 00:30:05.316 "traddr": "10.0.0.2", 00:30:05.316 "adrfam": "ipv4", 00:30:05.316 "trsvcid": "4420", 00:30:05.316 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:05.316 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:05.316 "hdgst": false, 00:30:05.316 "ddgst": false 00:30:05.316 }, 00:30:05.316 "method": "bdev_nvme_attach_controller" 00:30:05.316 },{ 00:30:05.316 "params": { 00:30:05.316 "name": "Nvme2", 00:30:05.316 "trtype": "tcp", 00:30:05.316 "traddr": "10.0.0.2", 00:30:05.316 "adrfam": "ipv4", 00:30:05.316 "trsvcid": "4420", 00:30:05.316 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:30:05.316 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:30:05.316 "hdgst": false, 00:30:05.316 "ddgst": false 00:30:05.316 }, 00:30:05.317 "method": "bdev_nvme_attach_controller" 00:30:05.317 }' 00:30:05.317 10:18:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:05.317 10:18:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:05.317 10:18:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:05.317 10:18:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:05.317 10:18:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:05.317 10:18:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:30:05.317 10:18:49 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1345 -- # asan_lib= 00:30:05.317 10:18:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:05.317 10:18:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:30:05.317 10:18:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:05.317 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:30:05.317 ... 00:30:05.317 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:30:05.317 ... 00:30:05.317 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:30:05.317 ... 00:30:05.317 fio-3.35 00:30:05.317 Starting 24 threads 00:30:05.317 EAL: No free 2048 kB hugepages reported on node 1 00:30:17.563 00:30:17.563 filename0: (groupid=0, jobs=1): err= 0: pid=566856: Thu Jul 25 10:19:01 2024 00:30:17.563 read: IOPS=74, BW=298KiB/s (305kB/s)(3008KiB/10103msec) 00:30:17.563 slat (usec): min=8, max=109, avg=62.61, stdev=22.16 00:30:17.563 clat (msec): min=109, max=402, avg=214.40, stdev=51.49 00:30:17.563 lat (msec): min=109, max=402, avg=214.47, stdev=51.50 00:30:17.563 clat percentiles (msec): 00:30:17.563 | 1.00th=[ 110], 5.00th=[ 124], 10.00th=[ 129], 20.00th=[ 155], 00:30:17.563 | 30.00th=[ 205], 40.00th=[ 220], 50.00th=[ 230], 60.00th=[ 243], 00:30:17.563 | 70.00th=[ 247], 80.00th=[ 249], 90.00th=[ 255], 95.00th=[ 262], 00:30:17.563 | 99.00th=[ 376], 99.50th=[ 405], 99.90th=[ 405], 99.95th=[ 405], 00:30:17.563 | 99.99th=[ 405] 00:30:17.563 bw ( KiB/s): min= 176, max= 512, per=4.90%, avg=294.40, stdev=76.01, samples=20 00:30:17.563 iops : min= 44, max= 128, avg=73.60, stdev=19.00, samples=20 00:30:17.563 lat (msec) : 250=82.71%, 500=17.29% 00:30:17.563 cpu : usr=98.35%, sys=1.21%, ctx=14, majf=0, minf=28 00:30:17.563 IO depths : 1=0.9%, 2=2.7%, 4=11.3%, 8=73.5%, 16=11.6%, 32=0.0%, >=64=0.0% 00:30:17.563 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:17.563 complete : 0=0.0%, 4=90.2%, 8=4.3%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:17.563 issued rwts: total=752,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:17.563 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:17.563 filename0: (groupid=0, jobs=1): err= 0: pid=566857: Thu Jul 25 10:19:01 2024 00:30:17.563 read: IOPS=49, BW=197KiB/s (201kB/s)(1984KiB/10089msec) 00:30:17.563 slat (nsec): min=5472, max=65469, avg=16276.82, stdev=8697.03 00:30:17.563 clat (msec): min=115, max=506, avg=325.28, stdev=62.92 00:30:17.563 lat (msec): min=115, max=506, avg=325.29, stdev=62.91 00:30:17.563 clat percentiles (msec): 00:30:17.563 | 1.00th=[ 201], 5.00th=[ 215], 10.00th=[ 220], 20.00th=[ 245], 00:30:17.563 | 30.00th=[ 321], 40.00th=[ 334], 50.00th=[ 342], 60.00th=[ 355], 00:30:17.563 | 70.00th=[ 368], 80.00th=[ 376], 90.00th=[ 384], 95.00th=[ 393], 00:30:17.563 | 99.00th=[ 443], 99.50th=[ 493], 99.90th=[ 506], 99.95th=[ 506], 00:30:17.563 | 99.99th=[ 506] 00:30:17.563 bw ( KiB/s): min= 128, max= 384, per=3.20%, avg=192.00, stdev=75.23, samples=20 00:30:17.563 iops : min= 32, max= 96, avg=48.00, stdev=18.81, samples=20 00:30:17.563 lat (msec) : 250=20.56%, 500=79.03%, 750=0.40% 00:30:17.563 cpu : usr=97.62%, sys=1.57%, ctx=69, majf=0, minf=26 00:30:17.563 IO depths : 
1=4.6%, 2=10.9%, 4=25.0%, 8=51.6%, 16=7.9%, 32=0.0%, >=64=0.0% 00:30:17.563 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:17.563 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:17.563 issued rwts: total=496,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:17.563 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:17.563 filename0: (groupid=0, jobs=1): err= 0: pid=566858: Thu Jul 25 10:19:01 2024 00:30:17.563 read: IOPS=66, BW=267KiB/s (273kB/s)(2696KiB/10103msec) 00:30:17.563 slat (usec): min=8, max=102, avg=21.57, stdev=20.04 00:30:17.563 clat (msec): min=139, max=415, avg=239.63, stdev=37.71 00:30:17.563 lat (msec): min=139, max=415, avg=239.65, stdev=37.71 00:30:17.563 clat percentiles (msec): 00:30:17.563 | 1.00th=[ 188], 5.00th=[ 199], 10.00th=[ 205], 20.00th=[ 213], 00:30:17.563 | 30.00th=[ 220], 40.00th=[ 226], 50.00th=[ 241], 60.00th=[ 245], 00:30:17.563 | 70.00th=[ 249], 80.00th=[ 253], 90.00th=[ 266], 95.00th=[ 321], 00:30:17.563 | 99.00th=[ 409], 99.50th=[ 414], 99.90th=[ 414], 99.95th=[ 414], 00:30:17.563 | 99.99th=[ 414] 00:30:17.563 bw ( KiB/s): min= 208, max= 384, per=4.38%, avg=263.20, stdev=41.68, samples=20 00:30:17.563 iops : min= 52, max= 96, avg=65.80, stdev=10.42, samples=20 00:30:17.563 lat (msec) : 250=76.56%, 500=23.44% 00:30:17.563 cpu : usr=97.70%, sys=1.60%, ctx=59, majf=0, minf=26 00:30:17.563 IO depths : 1=1.3%, 2=5.2%, 4=17.8%, 8=64.2%, 16=11.4%, 32=0.0%, >=64=0.0% 00:30:17.563 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:17.563 complete : 0=0.0%, 4=92.2%, 8=2.5%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:17.563 issued rwts: total=674,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:17.563 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:17.563 filename0: (groupid=0, jobs=1): err= 0: pid=566859: Thu Jul 25 10:19:01 2024 00:30:17.564 read: IOPS=67, BW=271KiB/s (278kB/s)(2744KiB/10119msec) 00:30:17.564 slat (usec): min=6, max=274, avg=39.02, stdev=32.39 00:30:17.564 clat (msec): min=38, max=446, avg=235.43, stdev=75.37 00:30:17.564 lat (msec): min=38, max=446, avg=235.47, stdev=75.38 00:30:17.564 clat percentiles (msec): 00:30:17.564 | 1.00th=[ 39], 5.00th=[ 120], 10.00th=[ 138], 20.00th=[ 178], 00:30:17.564 | 30.00th=[ 209], 40.00th=[ 230], 50.00th=[ 245], 60.00th=[ 249], 00:30:17.564 | 70.00th=[ 255], 80.00th=[ 309], 90.00th=[ 347], 95.00th=[ 359], 00:30:17.564 | 99.00th=[ 384], 99.50th=[ 439], 99.90th=[ 447], 99.95th=[ 447], 00:30:17.564 | 99.99th=[ 447] 00:30:17.564 bw ( KiB/s): min= 128, max= 512, per=4.45%, avg=268.00, stdev=94.43, samples=20 00:30:17.564 iops : min= 32, max= 128, avg=67.00, stdev=23.61, samples=20 00:30:17.564 lat (msec) : 50=2.04%, 100=2.62%, 250=59.48%, 500=35.86% 00:30:17.564 cpu : usr=98.41%, sys=1.17%, ctx=17, majf=0, minf=57 00:30:17.564 IO depths : 1=1.5%, 2=5.0%, 4=16.5%, 8=65.9%, 16=11.2%, 32=0.0%, >=64=0.0% 00:30:17.564 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:17.564 complete : 0=0.0%, 4=91.6%, 8=3.0%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:17.564 issued rwts: total=686,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:17.564 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:17.564 filename0: (groupid=0, jobs=1): err= 0: pid=566860: Thu Jul 25 10:19:01 2024 00:30:17.564 read: IOPS=49, BW=197KiB/s (201kB/s)(1984KiB/10083msec) 00:30:17.564 slat (nsec): min=8506, max=90075, avg=25952.83, stdev=21222.46 00:30:17.564 clat (msec): min=143, max=520, avg=325.02, stdev=60.32 
00:30:17.564 lat (msec): min=143, max=520, avg=325.04, stdev=60.30 00:30:17.564 clat percentiles (msec): 00:30:17.564 | 1.00th=[ 213], 5.00th=[ 215], 10.00th=[ 220], 20.00th=[ 262], 00:30:17.564 | 30.00th=[ 321], 40.00th=[ 334], 50.00th=[ 342], 60.00th=[ 359], 00:30:17.564 | 70.00th=[ 368], 80.00th=[ 376], 90.00th=[ 380], 95.00th=[ 393], 00:30:17.564 | 99.00th=[ 397], 99.50th=[ 405], 99.90th=[ 523], 99.95th=[ 523], 00:30:17.564 | 99.99th=[ 523] 00:30:17.564 bw ( KiB/s): min= 128, max= 384, per=3.20%, avg=192.00, stdev=73.96, samples=20 00:30:17.564 iops : min= 32, max= 96, avg=48.00, stdev=18.49, samples=20 00:30:17.564 lat (msec) : 250=19.76%, 500=79.84%, 750=0.40% 00:30:17.564 cpu : usr=98.05%, sys=1.29%, ctx=46, majf=0, minf=33 00:30:17.564 IO depths : 1=3.6%, 2=9.9%, 4=25.0%, 8=52.6%, 16=8.9%, 32=0.0%, >=64=0.0% 00:30:17.564 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:17.564 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:17.564 issued rwts: total=496,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:17.564 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:17.564 filename0: (groupid=0, jobs=1): err= 0: pid=566861: Thu Jul 25 10:19:01 2024 00:30:17.564 read: IOPS=56, BW=227KiB/s (232kB/s)(2288KiB/10093msec) 00:30:17.564 slat (usec): min=8, max=155, avg=47.58, stdev=33.65 00:30:17.564 clat (msec): min=205, max=453, avg=281.84, stdev=55.42 00:30:17.564 lat (msec): min=205, max=453, avg=281.89, stdev=55.44 00:30:17.564 clat percentiles (msec): 00:30:17.564 | 1.00th=[ 207], 5.00th=[ 213], 10.00th=[ 218], 20.00th=[ 239], 00:30:17.564 | 30.00th=[ 245], 40.00th=[ 251], 50.00th=[ 257], 60.00th=[ 305], 00:30:17.564 | 70.00th=[ 321], 80.00th=[ 342], 90.00th=[ 372], 95.00th=[ 376], 00:30:17.564 | 99.00th=[ 401], 99.50th=[ 405], 99.90th=[ 456], 99.95th=[ 456], 00:30:17.564 | 99.99th=[ 456] 00:30:17.564 bw ( KiB/s): min= 128, max= 384, per=3.70%, avg=222.40, stdev=68.26, samples=20 00:30:17.564 iops : min= 32, max= 96, avg=55.60, stdev=17.06, samples=20 00:30:17.564 lat (msec) : 250=40.56%, 500=59.44% 00:30:17.564 cpu : usr=98.31%, sys=1.28%, ctx=15, majf=0, minf=25 00:30:17.564 IO depths : 1=3.1%, 2=7.0%, 4=17.7%, 8=62.8%, 16=9.4%, 32=0.0%, >=64=0.0% 00:30:17.564 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:17.564 complete : 0=0.0%, 4=92.0%, 8=2.4%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:17.564 issued rwts: total=572,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:17.564 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:17.564 filename0: (groupid=0, jobs=1): err= 0: pid=566862: Thu Jul 25 10:19:01 2024 00:30:17.564 read: IOPS=60, BW=244KiB/s (249kB/s)(2456KiB/10084msec) 00:30:17.564 slat (usec): min=8, max=105, avg=34.10, stdev=30.06 00:30:17.564 clat (msec): min=166, max=428, avg=262.21, stdev=45.33 00:30:17.564 lat (msec): min=166, max=428, avg=262.25, stdev=45.34 00:30:17.564 clat percentiles (msec): 00:30:17.564 | 1.00th=[ 171], 5.00th=[ 213], 10.00th=[ 218], 20.00th=[ 224], 00:30:17.564 | 30.00th=[ 243], 40.00th=[ 247], 50.00th=[ 249], 60.00th=[ 253], 00:30:17.564 | 70.00th=[ 268], 80.00th=[ 305], 90.00th=[ 334], 95.00th=[ 342], 00:30:17.564 | 99.00th=[ 401], 99.50th=[ 409], 99.90th=[ 430], 99.95th=[ 430], 00:30:17.564 | 99.99th=[ 430] 00:30:17.564 bw ( KiB/s): min= 128, max= 384, per=3.98%, avg=239.20, stdev=63.68, samples=20 00:30:17.564 iops : min= 32, max= 96, avg=59.80, stdev=15.92, samples=20 00:30:17.564 lat (msec) : 250=54.07%, 500=45.93% 00:30:17.564 cpu : usr=98.19%, 
sys=1.38%, ctx=20, majf=0, minf=23 00:30:17.564 IO depths : 1=2.0%, 2=5.2%, 4=16.0%, 8=66.3%, 16=10.6%, 32=0.0%, >=64=0.0% 00:30:17.564 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:17.564 complete : 0=0.0%, 4=91.5%, 8=2.9%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:17.564 issued rwts: total=614,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:17.564 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:17.564 filename0: (groupid=0, jobs=1): err= 0: pid=566863: Thu Jul 25 10:19:01 2024 00:30:17.564 read: IOPS=69, BW=279KiB/s (285kB/s)(2816KiB/10103msec) 00:30:17.564 slat (nsec): min=8151, max=89661, avg=19123.35, stdev=17729.96 00:30:17.564 clat (msec): min=124, max=267, avg=229.36, stdev=28.62 00:30:17.564 lat (msec): min=124, max=267, avg=229.38, stdev=28.61 00:30:17.564 clat percentiles (msec): 00:30:17.564 | 1.00th=[ 125], 5.00th=[ 182], 10.00th=[ 199], 20.00th=[ 213], 00:30:17.564 | 30.00th=[ 222], 40.00th=[ 228], 50.00th=[ 241], 60.00th=[ 243], 00:30:17.564 | 70.00th=[ 247], 80.00th=[ 251], 90.00th=[ 255], 95.00th=[ 257], 00:30:17.564 | 99.00th=[ 266], 99.50th=[ 266], 99.90th=[ 268], 99.95th=[ 268], 00:30:17.564 | 99.99th=[ 268] 00:30:17.564 bw ( KiB/s): min= 256, max= 368, per=4.58%, avg=275.20, stdev=40.41, samples=20 00:30:17.564 iops : min= 64, max= 92, avg=68.80, stdev=10.10, samples=20 00:30:17.564 lat (msec) : 250=79.55%, 500=20.45% 00:30:17.564 cpu : usr=98.31%, sys=1.30%, ctx=11, majf=0, minf=33 00:30:17.564 IO depths : 1=0.9%, 2=7.1%, 4=25.0%, 8=55.4%, 16=11.6%, 32=0.0%, >=64=0.0% 00:30:17.564 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:17.564 complete : 0=0.0%, 4=94.4%, 8=0.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:17.564 issued rwts: total=704,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:17.564 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:17.564 filename1: (groupid=0, jobs=1): err= 0: pid=566864: Thu Jul 25 10:19:01 2024 00:30:17.564 read: IOPS=65, BW=263KiB/s (269kB/s)(2656KiB/10103msec) 00:30:17.564 slat (usec): min=8, max=103, avg=24.89, stdev=24.26 00:30:17.564 clat (msec): min=124, max=433, avg=243.21, stdev=41.16 00:30:17.564 lat (msec): min=124, max=433, avg=243.23, stdev=41.17 00:30:17.564 clat percentiles (msec): 00:30:17.564 | 1.00th=[ 125], 5.00th=[ 197], 10.00th=[ 205], 20.00th=[ 215], 00:30:17.564 | 30.00th=[ 224], 40.00th=[ 241], 50.00th=[ 245], 60.00th=[ 249], 00:30:17.564 | 70.00th=[ 249], 80.00th=[ 255], 90.00th=[ 305], 95.00th=[ 321], 00:30:17.564 | 99.00th=[ 368], 99.50th=[ 372], 99.90th=[ 435], 99.95th=[ 435], 00:30:17.564 | 99.99th=[ 435] 00:30:17.564 bw ( KiB/s): min= 128, max= 384, per=4.32%, avg=259.20, stdev=49.14, samples=20 00:30:17.564 iops : min= 32, max= 96, avg=64.80, stdev=12.28, samples=20 00:30:17.564 lat (msec) : 250=72.59%, 500=27.41% 00:30:17.564 cpu : usr=98.23%, sys=1.35%, ctx=32, majf=0, minf=30 00:30:17.564 IO depths : 1=1.5%, 2=3.5%, 4=11.9%, 8=72.0%, 16=11.1%, 32=0.0%, >=64=0.0% 00:30:17.564 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:17.564 complete : 0=0.0%, 4=90.3%, 8=4.3%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:17.564 issued rwts: total=664,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:17.564 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:17.564 filename1: (groupid=0, jobs=1): err= 0: pid=566865: Thu Jul 25 10:19:01 2024 00:30:17.564 read: IOPS=69, BW=277KiB/s (283kB/s)(2800KiB/10120msec) 00:30:17.564 slat (usec): min=6, max=108, avg=27.20, stdev=25.35 00:30:17.564 clat (msec): 
min=31, max=454, avg=230.60, stdev=58.13 00:30:17.564 lat (msec): min=31, max=454, avg=230.63, stdev=58.13 00:30:17.564 clat percentiles (msec): 00:30:17.564 | 1.00th=[ 32], 5.00th=[ 116], 10.00th=[ 190], 20.00th=[ 213], 00:30:17.564 | 30.00th=[ 224], 40.00th=[ 232], 50.00th=[ 243], 60.00th=[ 247], 00:30:17.564 | 70.00th=[ 249], 80.00th=[ 251], 90.00th=[ 262], 95.00th=[ 317], 00:30:17.564 | 99.00th=[ 334], 99.50th=[ 435], 99.90th=[ 456], 99.95th=[ 456], 00:30:17.564 | 99.99th=[ 456] 00:30:17.564 bw ( KiB/s): min= 144, max= 512, per=4.55%, avg=273.60, stdev=68.46, samples=20 00:30:17.564 iops : min= 36, max= 128, avg=68.40, stdev=17.11, samples=20 00:30:17.564 lat (msec) : 50=4.29%, 100=0.29%, 250=73.43%, 500=22.00% 00:30:17.564 cpu : usr=98.36%, sys=1.23%, ctx=20, majf=0, minf=48 00:30:17.564 IO depths : 1=0.9%, 2=2.9%, 4=12.1%, 8=72.4%, 16=11.7%, 32=0.0%, >=64=0.0% 00:30:17.564 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:17.564 complete : 0=0.0%, 4=90.4%, 8=4.1%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:17.564 issued rwts: total=700,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:17.564 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:17.564 filename1: (groupid=0, jobs=1): err= 0: pid=566866: Thu Jul 25 10:19:01 2024 00:30:17.564 read: IOPS=49, BW=197KiB/s (202kB/s)(1984KiB/10082msec) 00:30:17.564 slat (nsec): min=8835, max=46826, avg=23474.36, stdev=5733.66 00:30:17.564 clat (msec): min=205, max=498, avg=324.99, stdev=62.10 00:30:17.564 lat (msec): min=206, max=498, avg=325.01, stdev=62.10 00:30:17.564 clat percentiles (msec): 00:30:17.564 | 1.00th=[ 207], 5.00th=[ 213], 10.00th=[ 218], 20.00th=[ 257], 00:30:17.564 | 30.00th=[ 305], 40.00th=[ 326], 50.00th=[ 342], 60.00th=[ 347], 00:30:17.564 | 70.00th=[ 372], 80.00th=[ 376], 90.00th=[ 384], 95.00th=[ 397], 00:30:17.564 | 99.00th=[ 489], 99.50th=[ 493], 99.90th=[ 498], 99.95th=[ 498], 00:30:17.564 | 99.99th=[ 498] 00:30:17.564 bw ( KiB/s): min= 128, max= 384, per=3.18%, avg=192.00, stdev=75.23, samples=20 00:30:17.564 iops : min= 32, max= 96, avg=48.00, stdev=18.81, samples=20 00:30:17.565 lat (msec) : 250=19.35%, 500=80.65% 00:30:17.565 cpu : usr=97.61%, sys=1.81%, ctx=9, majf=0, minf=30 00:30:17.565 IO depths : 1=5.6%, 2=11.9%, 4=25.0%, 8=50.6%, 16=6.9%, 32=0.0%, >=64=0.0% 00:30:17.565 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:17.565 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:17.565 issued rwts: total=496,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:17.565 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:17.565 filename1: (groupid=0, jobs=1): err= 0: pid=566867: Thu Jul 25 10:19:01 2024 00:30:17.565 read: IOPS=60, BW=244KiB/s (249kB/s)(2456KiB/10085msec) 00:30:17.565 slat (nsec): min=8435, max=93515, avg=21453.47, stdev=17127.61 00:30:17.565 clat (msec): min=168, max=439, avg=262.17, stdev=45.20 00:30:17.565 lat (msec): min=168, max=439, avg=262.19, stdev=45.20 00:30:17.565 clat percentiles (msec): 00:30:17.565 | 1.00th=[ 174], 5.00th=[ 207], 10.00th=[ 215], 20.00th=[ 224], 00:30:17.565 | 30.00th=[ 243], 40.00th=[ 245], 50.00th=[ 249], 60.00th=[ 253], 00:30:17.565 | 70.00th=[ 259], 80.00th=[ 309], 90.00th=[ 330], 95.00th=[ 347], 00:30:17.565 | 99.00th=[ 372], 99.50th=[ 372], 99.90th=[ 439], 99.95th=[ 439], 00:30:17.565 | 99.99th=[ 439] 00:30:17.565 bw ( KiB/s): min= 128, max= 384, per=3.98%, avg=239.20, stdev=56.50, samples=20 00:30:17.565 iops : min= 32, max= 96, avg=59.80, stdev=14.13, samples=20 
00:30:17.565 lat (msec) : 250=56.68%, 500=43.32% 00:30:17.565 cpu : usr=98.09%, sys=1.26%, ctx=50, majf=0, minf=33 00:30:17.565 IO depths : 1=2.1%, 2=7.7%, 4=22.8%, 8=57.0%, 16=10.4%, 32=0.0%, >=64=0.0% 00:30:17.565 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:17.565 complete : 0=0.0%, 4=93.5%, 8=0.9%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:17.565 issued rwts: total=614,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:17.565 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:17.565 filename1: (groupid=0, jobs=1): err= 0: pid=566868: Thu Jul 25 10:19:01 2024 00:30:17.565 read: IOPS=67, BW=272KiB/s (278kB/s)(2744KiB/10095msec) 00:30:17.565 slat (nsec): min=8209, max=77006, avg=17970.60, stdev=15678.97 00:30:17.565 clat (msec): min=144, max=342, avg=235.10, stdev=31.56 00:30:17.565 lat (msec): min=144, max=342, avg=235.12, stdev=31.56 00:30:17.565 clat percentiles (msec): 00:30:17.565 | 1.00th=[ 144], 5.00th=[ 197], 10.00th=[ 201], 20.00th=[ 213], 00:30:17.565 | 30.00th=[ 220], 40.00th=[ 228], 50.00th=[ 243], 60.00th=[ 243], 00:30:17.565 | 70.00th=[ 247], 80.00th=[ 253], 90.00th=[ 257], 95.00th=[ 268], 00:30:17.565 | 99.00th=[ 342], 99.50th=[ 342], 99.90th=[ 342], 99.95th=[ 342], 00:30:17.565 | 99.99th=[ 342] 00:30:17.565 bw ( KiB/s): min= 128, max= 384, per=4.47%, avg=268.00, stdev=57.54, samples=20 00:30:17.565 iops : min= 32, max= 96, avg=67.00, stdev=14.39, samples=20 00:30:17.565 lat (msec) : 250=78.72%, 500=21.28% 00:30:17.565 cpu : usr=98.07%, sys=1.52%, ctx=17, majf=0, minf=31 00:30:17.565 IO depths : 1=4.1%, 2=10.3%, 4=25.1%, 8=52.2%, 16=8.3%, 32=0.0%, >=64=0.0% 00:30:17.565 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:17.565 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:17.565 issued rwts: total=686,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:17.565 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:17.565 filename1: (groupid=0, jobs=1): err= 0: pid=566869: Thu Jul 25 10:19:01 2024 00:30:17.565 read: IOPS=73, BW=295KiB/s (302kB/s)(2976KiB/10103msec) 00:30:17.565 slat (nsec): min=8200, max=83946, avg=18624.27, stdev=16392.35 00:30:17.565 clat (msec): min=97, max=370, avg=216.26, stdev=47.40 00:30:17.565 lat (msec): min=97, max=370, avg=216.28, stdev=47.39 00:30:17.565 clat percentiles (msec): 00:30:17.565 | 1.00th=[ 120], 5.00th=[ 125], 10.00th=[ 138], 20.00th=[ 174], 00:30:17.565 | 30.00th=[ 207], 40.00th=[ 220], 50.00th=[ 230], 60.00th=[ 241], 00:30:17.565 | 70.00th=[ 247], 80.00th=[ 249], 90.00th=[ 253], 95.00th=[ 262], 00:30:17.565 | 99.00th=[ 368], 99.50th=[ 372], 99.90th=[ 372], 99.95th=[ 372], 00:30:17.565 | 99.99th=[ 372] 00:30:17.565 bw ( KiB/s): min= 224, max= 432, per=4.85%, avg=291.20, stdev=59.78, samples=20 00:30:17.565 iops : min= 56, max= 108, avg=72.80, stdev=14.94, samples=20 00:30:17.565 lat (msec) : 100=0.81%, 250=80.11%, 500=19.09% 00:30:17.565 cpu : usr=98.21%, sys=1.40%, ctx=16, majf=0, minf=33 00:30:17.565 IO depths : 1=0.3%, 2=1.3%, 4=9.3%, 8=76.7%, 16=12.4%, 32=0.0%, >=64=0.0% 00:30:17.565 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:17.565 complete : 0=0.0%, 4=89.6%, 8=5.0%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:17.565 issued rwts: total=744,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:17.565 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:17.565 filename1: (groupid=0, jobs=1): err= 0: pid=566870: Thu Jul 25 10:19:01 2024 00:30:17.565 read: IOPS=67, BW=272KiB/s 
(278kB/s)(2744KiB/10104msec) 00:30:17.565 slat (usec): min=8, max=101, avg=22.83, stdev=22.30 00:30:17.565 clat (msec): min=121, max=428, avg=235.33, stdev=31.96 00:30:17.565 lat (msec): min=121, max=428, avg=235.35, stdev=31.96 00:30:17.565 clat percentiles (msec): 00:30:17.565 | 1.00th=[ 146], 5.00th=[ 190], 10.00th=[ 201], 20.00th=[ 215], 00:30:17.565 | 30.00th=[ 224], 40.00th=[ 234], 50.00th=[ 243], 60.00th=[ 245], 00:30:17.565 | 70.00th=[ 247], 80.00th=[ 249], 90.00th=[ 257], 95.00th=[ 266], 00:30:17.565 | 99.00th=[ 334], 99.50th=[ 334], 99.90th=[ 430], 99.95th=[ 430], 00:30:17.565 | 99.99th=[ 430] 00:30:17.565 bw ( KiB/s): min= 128, max= 384, per=4.45%, avg=268.00, stdev=65.02, samples=20 00:30:17.565 iops : min= 32, max= 96, avg=67.00, stdev=16.25, samples=20 00:30:17.565 lat (msec) : 250=80.76%, 500=19.24% 00:30:17.565 cpu : usr=98.41%, sys=1.21%, ctx=11, majf=0, minf=25 00:30:17.565 IO depths : 1=1.3%, 2=7.6%, 4=25.1%, 8=55.0%, 16=11.1%, 32=0.0%, >=64=0.0% 00:30:17.565 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:17.565 complete : 0=0.0%, 4=94.4%, 8=0.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:17.565 issued rwts: total=686,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:17.565 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:17.565 filename1: (groupid=0, jobs=1): err= 0: pid=566871: Thu Jul 25 10:19:01 2024 00:30:17.565 read: IOPS=50, BW=203KiB/s (208kB/s)(2048KiB/10085msec) 00:30:17.565 slat (usec): min=8, max=107, avg=37.17, stdev=30.55 00:30:17.565 clat (msec): min=143, max=402, avg=314.84, stdev=66.83 00:30:17.565 lat (msec): min=143, max=402, avg=314.88, stdev=66.82 00:30:17.565 clat percentiles (msec): 00:30:17.565 | 1.00th=[ 144], 5.00th=[ 213], 10.00th=[ 218], 20.00th=[ 232], 00:30:17.565 | 30.00th=[ 309], 40.00th=[ 326], 50.00th=[ 334], 60.00th=[ 347], 00:30:17.565 | 70.00th=[ 363], 80.00th=[ 372], 90.00th=[ 380], 95.00th=[ 393], 00:30:17.565 | 99.00th=[ 405], 99.50th=[ 405], 99.90th=[ 405], 99.95th=[ 405], 00:30:17.565 | 99.99th=[ 405] 00:30:17.565 bw ( KiB/s): min= 128, max= 368, per=3.30%, avg=198.40, stdev=72.19, samples=20 00:30:17.565 iops : min= 32, max= 92, avg=49.60, stdev=18.05, samples=20 00:30:17.565 lat (msec) : 250=27.34%, 500=72.66% 00:30:17.565 cpu : usr=98.46%, sys=1.13%, ctx=15, majf=0, minf=24 00:30:17.565 IO depths : 1=3.3%, 2=9.6%, 4=25.0%, 8=52.9%, 16=9.2%, 32=0.0%, >=64=0.0% 00:30:17.565 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:17.565 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:17.565 issued rwts: total=512,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:17.565 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:17.565 filename2: (groupid=0, jobs=1): err= 0: pid=566872: Thu Jul 25 10:19:01 2024 00:30:17.565 read: IOPS=60, BW=241KiB/s (247kB/s)(2432KiB/10092msec) 00:30:17.565 slat (usec): min=8, max=109, avg=39.45, stdev=31.68 00:30:17.565 clat (msec): min=143, max=377, avg=265.20, stdev=48.79 00:30:17.565 lat (msec): min=143, max=377, avg=265.24, stdev=48.81 00:30:17.565 clat percentiles (msec): 00:30:17.565 | 1.00th=[ 144], 5.00th=[ 207], 10.00th=[ 213], 20.00th=[ 224], 00:30:17.565 | 30.00th=[ 245], 40.00th=[ 247], 50.00th=[ 249], 60.00th=[ 255], 00:30:17.565 | 70.00th=[ 305], 80.00th=[ 321], 90.00th=[ 342], 95.00th=[ 342], 00:30:17.565 | 99.00th=[ 376], 99.50th=[ 376], 99.90th=[ 376], 99.95th=[ 376], 00:30:17.565 | 99.99th=[ 376] 00:30:17.565 bw ( KiB/s): min= 128, max= 384, per=3.93%, avg=236.80, stdev=75.15, samples=20 
00:30:17.565 iops : min= 32, max= 96, avg=59.20, stdev=18.79, samples=20 00:30:17.565 lat (msec) : 250=55.26%, 500=44.74% 00:30:17.565 cpu : usr=98.13%, sys=1.44%, ctx=10, majf=0, minf=31 00:30:17.565 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:30:17.565 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:17.565 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:17.565 issued rwts: total=608,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:17.565 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:17.565 filename2: (groupid=0, jobs=1): err= 0: pid=566873: Thu Jul 25 10:19:01 2024 00:30:17.565 read: IOPS=59, BW=237KiB/s (242kB/s)(2392KiB/10103msec) 00:30:17.565 slat (usec): min=8, max=104, avg=38.24, stdev=30.02 00:30:17.565 clat (msec): min=139, max=422, avg=269.57, stdev=49.43 00:30:17.565 lat (msec): min=139, max=422, avg=269.60, stdev=49.45 00:30:17.565 clat percentiles (msec): 00:30:17.565 | 1.00th=[ 165], 5.00th=[ 209], 10.00th=[ 215], 20.00th=[ 239], 00:30:17.565 | 30.00th=[ 245], 40.00th=[ 247], 50.00th=[ 253], 60.00th=[ 255], 00:30:17.565 | 70.00th=[ 309], 80.00th=[ 326], 90.00th=[ 342], 95.00th=[ 359], 00:30:17.565 | 99.00th=[ 372], 99.50th=[ 376], 99.90th=[ 422], 99.95th=[ 422], 00:30:17.565 | 99.99th=[ 422] 00:30:17.565 bw ( KiB/s): min= 128, max= 256, per=3.87%, avg=232.80, stdev=47.43, samples=20 00:30:17.565 iops : min= 32, max= 64, avg=58.20, stdev=11.86, samples=20 00:30:17.565 lat (msec) : 250=46.49%, 500=53.51% 00:30:17.565 cpu : usr=98.08%, sys=1.45%, ctx=38, majf=0, minf=23 00:30:17.565 IO depths : 1=3.0%, 2=8.9%, 4=23.7%, 8=54.8%, 16=9.5%, 32=0.0%, >=64=0.0% 00:30:17.565 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:17.565 complete : 0=0.0%, 4=93.8%, 8=0.6%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:17.565 issued rwts: total=598,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:17.565 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:17.565 filename2: (groupid=0, jobs=1): err= 0: pid=566874: Thu Jul 25 10:19:01 2024 00:30:17.565 read: IOPS=60, BW=240KiB/s (246kB/s)(2424KiB/10086msec) 00:30:17.565 slat (usec): min=8, max=111, avg=24.61, stdev=22.05 00:30:17.565 clat (msec): min=143, max=376, avg=265.89, stdev=49.23 00:30:17.565 lat (msec): min=143, max=376, avg=265.92, stdev=49.23 00:30:17.565 clat percentiles (msec): 00:30:17.566 | 1.00th=[ 144], 5.00th=[ 213], 10.00th=[ 215], 20.00th=[ 224], 00:30:17.566 | 30.00th=[ 245], 40.00th=[ 247], 50.00th=[ 247], 60.00th=[ 257], 00:30:17.566 | 70.00th=[ 305], 80.00th=[ 321], 90.00th=[ 342], 95.00th=[ 347], 00:30:17.566 | 99.00th=[ 376], 99.50th=[ 376], 99.90th=[ 376], 99.95th=[ 376], 00:30:17.566 | 99.99th=[ 376] 00:30:17.566 bw ( KiB/s): min= 128, max= 384, per=3.93%, avg=236.00, stdev=62.48, samples=20 00:30:17.566 iops : min= 32, max= 96, avg=59.00, stdev=15.62, samples=20 00:30:17.566 lat (msec) : 250=54.13%, 500=45.87% 00:30:17.566 cpu : usr=97.80%, sys=1.47%, ctx=74, majf=0, minf=22 00:30:17.566 IO depths : 1=1.0%, 2=7.3%, 4=25.1%, 8=55.3%, 16=11.4%, 32=0.0%, >=64=0.0% 00:30:17.566 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:17.566 complete : 0=0.0%, 4=94.4%, 8=0.0%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:17.566 issued rwts: total=606,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:17.566 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:17.566 filename2: (groupid=0, jobs=1): err= 0: pid=566875: Thu Jul 25 10:19:01 2024 00:30:17.566 read: 
IOPS=67, BW=270KiB/s (276kB/s)(2728KiB/10104msec) 00:30:17.566 slat (nsec): min=8229, max=94007, avg=20832.57, stdev=19884.75 00:30:17.566 clat (msec): min=119, max=418, avg=236.72, stdev=41.24 00:30:17.566 lat (msec): min=119, max=418, avg=236.74, stdev=41.23 00:30:17.566 clat percentiles (msec): 00:30:17.566 | 1.00th=[ 146], 5.00th=[ 174], 10.00th=[ 201], 20.00th=[ 213], 00:30:17.566 | 30.00th=[ 222], 40.00th=[ 226], 50.00th=[ 236], 60.00th=[ 245], 00:30:17.566 | 70.00th=[ 247], 80.00th=[ 251], 90.00th=[ 262], 95.00th=[ 313], 00:30:17.566 | 99.00th=[ 418], 99.50th=[ 418], 99.90th=[ 418], 99.95th=[ 418], 00:30:17.566 | 99.99th=[ 418] 00:30:17.566 bw ( KiB/s): min= 208, max= 384, per=4.43%, avg=266.40, stdev=38.94, samples=20 00:30:17.566 iops : min= 52, max= 96, avg=66.60, stdev= 9.74, samples=20 00:30:17.566 lat (msec) : 250=78.01%, 500=21.99% 00:30:17.566 cpu : usr=97.81%, sys=1.63%, ctx=55, majf=0, minf=26 00:30:17.566 IO depths : 1=0.9%, 2=3.2%, 4=13.3%, 8=70.8%, 16=11.7%, 32=0.0%, >=64=0.0% 00:30:17.566 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:17.566 complete : 0=0.0%, 4=91.0%, 8=3.7%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:17.566 issued rwts: total=682,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:17.566 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:17.566 filename2: (groupid=0, jobs=1): err= 0: pid=566876: Thu Jul 25 10:19:01 2024 00:30:17.566 read: IOPS=63, BW=254KiB/s (260kB/s)(2568KiB/10103msec) 00:30:17.566 slat (usec): min=8, max=106, avg=59.42, stdev=28.07 00:30:17.566 clat (msec): min=109, max=395, avg=251.13, stdev=74.80 00:30:17.566 lat (msec): min=109, max=395, avg=251.19, stdev=74.82 00:30:17.566 clat percentiles (msec): 00:30:17.566 | 1.00th=[ 110], 5.00th=[ 124], 10.00th=[ 140], 20.00th=[ 182], 00:30:17.566 | 30.00th=[ 224], 40.00th=[ 245], 50.00th=[ 251], 60.00th=[ 255], 00:30:17.566 | 70.00th=[ 309], 80.00th=[ 326], 90.00th=[ 359], 95.00th=[ 363], 00:30:17.566 | 99.00th=[ 397], 99.50th=[ 397], 99.90th=[ 397], 99.95th=[ 397], 00:30:17.566 | 99.99th=[ 397] 00:30:17.566 bw ( KiB/s): min= 128, max= 496, per=4.17%, avg=250.40, stdev=82.12, samples=20 00:30:17.566 iops : min= 32, max= 124, avg=62.60, stdev=20.53, samples=20 00:30:17.566 lat (msec) : 250=50.00%, 500=50.00% 00:30:17.566 cpu : usr=98.10%, sys=1.45%, ctx=23, majf=0, minf=48 00:30:17.566 IO depths : 1=3.7%, 2=9.0%, 4=22.0%, 8=56.4%, 16=8.9%, 32=0.0%, >=64=0.0% 00:30:17.566 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:17.566 complete : 0=0.0%, 4=93.1%, 8=1.3%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:17.566 issued rwts: total=642,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:17.566 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:17.566 filename2: (groupid=0, jobs=1): err= 0: pid=566877: Thu Jul 25 10:19:01 2024 00:30:17.566 read: IOPS=49, BW=197KiB/s (202kB/s)(1984KiB/10079msec) 00:30:17.566 slat (nsec): min=8320, max=85141, avg=25094.55, stdev=21367.48 00:30:17.566 clat (msec): min=212, max=490, avg=324.91, stdev=62.56 00:30:17.566 lat (msec): min=212, max=490, avg=324.93, stdev=62.55 00:30:17.566 clat percentiles (msec): 00:30:17.566 | 1.00th=[ 213], 5.00th=[ 215], 10.00th=[ 222], 20.00th=[ 232], 00:30:17.566 | 30.00th=[ 321], 40.00th=[ 334], 50.00th=[ 342], 60.00th=[ 347], 00:30:17.566 | 70.00th=[ 372], 80.00th=[ 376], 90.00th=[ 384], 95.00th=[ 397], 00:30:17.566 | 99.00th=[ 481], 99.50th=[ 489], 99.90th=[ 489], 99.95th=[ 489], 00:30:17.566 | 99.99th=[ 489] 00:30:17.566 bw ( KiB/s): min= 128, max= 384, 
per=3.18%, avg=192.00, stdev=77.69, samples=20 00:30:17.566 iops : min= 32, max= 96, avg=48.00, stdev=19.42, samples=20 00:30:17.566 lat (msec) : 250=22.18%, 500=77.82% 00:30:17.566 cpu : usr=97.74%, sys=1.52%, ctx=34, majf=0, minf=30 00:30:17.566 IO depths : 1=4.0%, 2=10.3%, 4=25.0%, 8=52.2%, 16=8.5%, 32=0.0%, >=64=0.0% 00:30:17.566 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:17.566 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:17.566 issued rwts: total=496,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:17.566 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:17.566 filename2: (groupid=0, jobs=1): err= 0: pid=566878: Thu Jul 25 10:19:01 2024 00:30:17.566 read: IOPS=75, BW=302KiB/s (309kB/s)(3056KiB/10122msec) 00:30:17.566 slat (nsec): min=6511, max=77429, avg=14854.41, stdev=12715.03 00:30:17.566 clat (msec): min=39, max=366, avg=211.29, stdev=52.96 00:30:17.566 lat (msec): min=39, max=366, avg=211.31, stdev=52.96 00:30:17.566 clat percentiles (msec): 00:30:17.566 | 1.00th=[ 40], 5.00th=[ 120], 10.00th=[ 130], 20.00th=[ 169], 00:30:17.566 | 30.00th=[ 207], 40.00th=[ 220], 50.00th=[ 230], 60.00th=[ 241], 00:30:17.566 | 70.00th=[ 247], 80.00th=[ 249], 90.00th=[ 253], 95.00th=[ 255], 00:30:17.566 | 99.00th=[ 317], 99.50th=[ 368], 99.90th=[ 368], 99.95th=[ 368], 00:30:17.566 | 99.99th=[ 368] 00:30:17.566 bw ( KiB/s): min= 256, max= 496, per=4.98%, avg=299.20, stdev=72.32, samples=20 00:30:17.566 iops : min= 64, max= 124, avg=74.80, stdev=18.08, samples=20 00:30:17.566 lat (msec) : 50=2.09%, 100=2.09%, 250=78.53%, 500=17.28% 00:30:17.566 cpu : usr=97.82%, sys=1.51%, ctx=37, majf=0, minf=36 00:30:17.566 IO depths : 1=0.4%, 2=1.8%, 4=10.5%, 8=75.1%, 16=12.2%, 32=0.0%, >=64=0.0% 00:30:17.566 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:17.566 complete : 0=0.0%, 4=90.0%, 8=4.5%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:17.566 issued rwts: total=764,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:17.566 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:17.566 filename2: (groupid=0, jobs=1): err= 0: pid=566879: Thu Jul 25 10:19:01 2024 00:30:17.566 read: IOPS=67, BW=272KiB/s (278kB/s)(2744KiB/10103msec) 00:30:17.566 slat (usec): min=8, max=104, avg=19.76, stdev=18.85 00:30:17.566 clat (msec): min=124, max=380, avg=234.80, stdev=37.25 00:30:17.566 lat (msec): min=124, max=380, avg=234.82, stdev=37.24 00:30:17.566 clat percentiles (msec): 00:30:17.566 | 1.00th=[ 125], 5.00th=[ 190], 10.00th=[ 201], 20.00th=[ 209], 00:30:17.566 | 30.00th=[ 215], 40.00th=[ 226], 50.00th=[ 239], 60.00th=[ 245], 00:30:17.566 | 70.00th=[ 249], 80.00th=[ 251], 90.00th=[ 262], 95.00th=[ 305], 00:30:17.566 | 99.00th=[ 380], 99.50th=[ 380], 99.90th=[ 380], 99.95th=[ 380], 00:30:17.566 | 99.99th=[ 380] 00:30:17.566 bw ( KiB/s): min= 224, max= 384, per=4.45%, avg=268.00, stdev=40.17, samples=20 00:30:17.566 iops : min= 56, max= 96, avg=67.00, stdev=10.04, samples=20 00:30:17.566 lat (msec) : 250=76.68%, 500=23.32% 00:30:17.566 cpu : usr=97.76%, sys=1.45%, ctx=45, majf=0, minf=31 00:30:17.566 IO depths : 1=1.5%, 2=4.5%, 4=15.2%, 8=67.6%, 16=11.2%, 32=0.0%, >=64=0.0% 00:30:17.566 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:17.566 complete : 0=0.0%, 4=91.2%, 8=3.4%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:17.566 issued rwts: total=686,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:17.566 latency : target=0, window=0, percentile=100.00%, depth=16 00:30:17.566 00:30:17.566 Run status 
group 0 (all jobs): 00:30:17.566 READ: bw=5998KiB/s (6142kB/s), 197KiB/s-302KiB/s (201kB/s-309kB/s), io=59.3MiB (62.2MB), run=10079-10122msec 00:30:17.566 10:19:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:30:17.566 10:19:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:30:17.566 10:19:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:30:17.566 10:19:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:30:17.566 10:19:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:30:17.566 10:19:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:17.566 10:19:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:17.566 10:19:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:17.566 10:19:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:17.566 10:19:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:30:17.566 10:19:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:17.566 10:19:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:17.566 10:19:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:17.566 10:19:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:30:17.566 10:19:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:30:17.566 10:19:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:30:17.566 10:19:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:17.566 10:19:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:17.566 10:19:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:17.566 10:19:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:17.566 10:19:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:30:17.566 10:19:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:17.566 10:19:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:17.566 10:19:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:17.566 10:19:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:30:17.566 10:19:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:30:17.567 10:19:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:30:17.567 10:19:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:30:17.567 10:19:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:17.567 10:19:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:17.567 10:19:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:17.567 10:19:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:30:17.567 10:19:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:17.567 10:19:01 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:17.567 10:19:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:17.567 10:19:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:30:17.567 10:19:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:30:17.567 10:19:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:30:17.567 10:19:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:30:17.567 10:19:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:30:17.567 10:19:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:30:17.567 10:19:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:30:17.567 10:19:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:30:17.567 10:19:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:30:17.567 10:19:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:30:17.567 10:19:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:30:17.567 10:19:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:30:17.567 10:19:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:17.567 10:19:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:17.567 bdev_null0 00:30:17.567 10:19:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:17.567 10:19:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:30:17.567 10:19:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:17.567 10:19:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:17.567 10:19:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:17.567 10:19:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:30:17.567 10:19:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:17.567 10:19:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:17.567 10:19:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:17.567 10:19:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:17.567 10:19:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:17.567 10:19:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:17.567 [2024-07-25 10:19:01.724190] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:17.567 10:19:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:17.567 10:19:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:30:17.567 10:19:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:30:17.567 10:19:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:30:17.567 10:19:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 
512 --md-size 16 --dif-type 1 00:30:17.567 10:19:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:17.567 10:19:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:17.567 bdev_null1 00:30:17.567 10:19:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:17.567 10:19:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:30:17.567 10:19:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:17.567 10:19:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:17.567 10:19:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:17.567 10:19:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:30:17.567 10:19:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:17.567 10:19:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:17.567 10:19:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:17.567 10:19:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:17.567 10:19:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:17.567 10:19:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:17.567 10:19:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:17.567 10:19:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:30:17.567 10:19:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:30:17.567 10:19:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:30:17.567 10:19:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:30:17.567 10:19:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:30:17.567 10:19:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:17.567 10:19:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:17.567 { 00:30:17.567 "params": { 00:30:17.567 "name": "Nvme$subsystem", 00:30:17.567 "trtype": "$TEST_TRANSPORT", 00:30:17.567 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:17.567 "adrfam": "ipv4", 00:30:17.567 "trsvcid": "$NVMF_PORT", 00:30:17.567 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:17.567 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:17.567 "hdgst": ${hdgst:-false}, 00:30:17.567 "ddgst": ${ddgst:-false} 00:30:17.567 }, 00:30:17.567 "method": "bdev_nvme_attach_controller" 00:30:17.567 } 00:30:17.567 EOF 00:30:17.567 )") 00:30:17.567 10:19:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:17.567 10:19:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:17.567 10:19:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:30:17.567 10:19:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local 
fio_dir=/usr/src/fio 00:30:17.567 10:19:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:17.567 10:19:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:30:17.567 10:19:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:30:17.567 10:19:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:30:17.567 10:19:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:17.567 10:19:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:30:17.567 10:19:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:30:17.567 10:19:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:17.567 10:19:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:30:17.567 10:19:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:17.567 10:19:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:30:17.567 10:19:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:17.567 10:19:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:30:17.567 10:19:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:30:17.567 10:19:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:30:17.567 10:19:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:17.567 10:19:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:17.567 { 00:30:17.567 "params": { 00:30:17.567 "name": "Nvme$subsystem", 00:30:17.567 "trtype": "$TEST_TRANSPORT", 00:30:17.567 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:17.567 "adrfam": "ipv4", 00:30:17.567 "trsvcid": "$NVMF_PORT", 00:30:17.567 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:17.567 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:17.567 "hdgst": ${hdgst:-false}, 00:30:17.567 "ddgst": ${ddgst:-false} 00:30:17.567 }, 00:30:17.567 "method": "bdev_nvme_attach_controller" 00:30:17.567 } 00:30:17.567 EOF 00:30:17.567 )") 00:30:17.567 10:19:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:30:17.567 10:19:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:30:17.567 10:19:01 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:30:17.567 10:19:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
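The gen_nvmf_target_json expansion traced here accumulates one heredoc "params" block per subsystem into a bash array; the IFS=, join and printf on the next lines then emit the blocks as a single comma-separated config list for jq. A minimal standalone sketch of the same pattern, using the two TCP subsystems of this run (the outer "subsystems"/"bdev" wrapper is an assumption -- the trace only shows the joined params blocks):

config=()
for sub in 0 1; do
config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$sub",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$sub",
    "hostnqn": "nqn.2016-06.io.spdk:host$sub",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
done
# join with commas and pretty-print, mirroring the IFS=, / printf / jq steps
(IFS=,; printf '{ "subsystems": [ { "subsystem": "bdev", "config": [ %s ] } ] }\n' "${config[*]}" | jq .)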
00:30:17.567 10:19:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:30:17.567 10:19:01 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:30:17.567 "params": { 00:30:17.567 "name": "Nvme0", 00:30:17.567 "trtype": "tcp", 00:30:17.567 "traddr": "10.0.0.2", 00:30:17.567 "adrfam": "ipv4", 00:30:17.567 "trsvcid": "4420", 00:30:17.567 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:17.567 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:17.567 "hdgst": false, 00:30:17.567 "ddgst": false 00:30:17.567 }, 00:30:17.567 "method": "bdev_nvme_attach_controller" 00:30:17.567 },{ 00:30:17.567 "params": { 00:30:17.567 "name": "Nvme1", 00:30:17.567 "trtype": "tcp", 00:30:17.567 "traddr": "10.0.0.2", 00:30:17.567 "adrfam": "ipv4", 00:30:17.567 "trsvcid": "4420", 00:30:17.567 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:17.567 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:17.567 "hdgst": false, 00:30:17.567 "ddgst": false 00:30:17.567 }, 00:30:17.567 "method": "bdev_nvme_attach_controller" 00:30:17.567 }' 00:30:17.568 10:19:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:17.568 10:19:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:17.568 10:19:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:17.568 10:19:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:17.568 10:19:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:30:17.568 10:19:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:17.568 10:19:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:17.568 10:19:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:17.568 10:19:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:30:17.568 10:19:01 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:17.568 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:30:17.568 ... 00:30:17.568 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:30:17.568 ... 
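The fio-3.35 banner that follows is produced by the fio_bdev wrapper expanded above: the SPDK bdev ioengine is injected with LD_PRELOAD, and the generated JSON config plus the fio job file arrive as the /dev/fd/62 and /dev/fd/61 process substitutions. Stripped of the xtrace plumbing, the call is equivalent to the sketch below (gen_nvmf_target_json and gen_fio_conf are the dif.sh helpers traced above; the filename=NvmeXn1 lines in the commented job stanza are reconstructed from the banner, not shown in the trace):

LD_PRELOAD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev \
    /usr/src/fio/fio --ioengine=spdk_bdev \
    --spdk_json_conf=<(gen_nvmf_target_json 0 1) <(gen_fio_conf)

# gen_fio_conf output for this run, reconstructed from the banner and the
# parameters set at target/dif.sh@115 (bs=8k,16k,128k numjobs=2 iodepth=8 runtime=5):
# [global]
# ioengine=spdk_bdev
# [filename0]
# filename=Nvme0n1        <- assumed bdev name
# rw=randread
# bs=8k,16k,128k
# numjobs=2
# iodepth=8
# runtime=5
# (filename1 is the analogous stanza for the second subsystem's bdev)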
00:30:17.568 fio-3.35 00:30:17.568 Starting 4 threads 00:30:17.568 EAL: No free 2048 kB hugepages reported on node 1 00:30:22.828 00:30:22.828 filename0: (groupid=0, jobs=1): err= 0: pid=568265: Thu Jul 25 10:19:07 2024 00:30:22.828 read: IOPS=1836, BW=14.3MiB/s (15.0MB/s)(71.8MiB/5001msec) 00:30:22.828 slat (nsec): min=5141, max=87518, avg=22918.66, stdev=10933.44 00:30:22.828 clat (usec): min=1355, max=7405, avg=4277.34, stdev=394.95 00:30:22.828 lat (usec): min=1373, max=7421, avg=4300.26, stdev=394.62 00:30:22.828 clat percentiles (usec): 00:30:22.828 | 1.00th=[ 3261], 5.00th=[ 3752], 10.00th=[ 3916], 20.00th=[ 4047], 00:30:22.828 | 30.00th=[ 4146], 40.00th=[ 4228], 50.00th=[ 4293], 60.00th=[ 4359], 00:30:22.828 | 70.00th=[ 4424], 80.00th=[ 4490], 90.00th=[ 4621], 95.00th=[ 4686], 00:30:22.828 | 99.00th=[ 5735], 99.50th=[ 6259], 99.90th=[ 7111], 99.95th=[ 7242], 00:30:22.828 | 99.99th=[ 7439] 00:30:22.828 bw ( KiB/s): min=13851, max=15024, per=24.99%, avg=14653.67, stdev=415.70, samples=9 00:30:22.828 iops : min= 1731, max= 1878, avg=1831.67, stdev=52.05, samples=9 00:30:22.828 lat (msec) : 2=0.10%, 4=14.25%, 10=85.65% 00:30:22.828 cpu : usr=94.32%, sys=4.68%, ctx=155, majf=0, minf=47 00:30:22.828 IO depths : 1=0.1%, 2=15.9%, 4=58.1%, 8=25.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:22.828 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:22.828 complete : 0=0.0%, 4=90.9%, 8=9.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:22.828 issued rwts: total=9184,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:22.828 latency : target=0, window=0, percentile=100.00%, depth=8 00:30:22.828 filename0: (groupid=0, jobs=1): err= 0: pid=568266: Thu Jul 25 10:19:07 2024 00:30:22.828 read: IOPS=1835, BW=14.3MiB/s (15.0MB/s)(71.7MiB/5001msec) 00:30:22.828 slat (nsec): min=5153, max=73479, avg=23089.45, stdev=11321.99 00:30:22.828 clat (usec): min=719, max=8463, avg=4277.30, stdev=448.80 00:30:22.828 lat (usec): min=742, max=8486, avg=4300.39, stdev=449.15 00:30:22.828 clat percentiles (usec): 00:30:22.828 | 1.00th=[ 3032], 5.00th=[ 3720], 10.00th=[ 3916], 20.00th=[ 4047], 00:30:22.828 | 30.00th=[ 4146], 40.00th=[ 4228], 50.00th=[ 4228], 60.00th=[ 4359], 00:30:22.828 | 70.00th=[ 4424], 80.00th=[ 4490], 90.00th=[ 4621], 95.00th=[ 4752], 00:30:22.828 | 99.00th=[ 5866], 99.50th=[ 6652], 99.90th=[ 7308], 99.95th=[ 7504], 00:30:22.828 | 99.99th=[ 8455] 00:30:22.828 bw ( KiB/s): min=13808, max=15072, per=24.87%, avg=14584.89, stdev=437.16, samples=9 00:30:22.828 iops : min= 1726, max= 1884, avg=1823.11, stdev=54.65, samples=9 00:30:22.828 lat (usec) : 750=0.01%, 1000=0.01% 00:30:22.828 lat (msec) : 2=0.25%, 4=13.53%, 10=86.20% 00:30:22.828 cpu : usr=94.06%, sys=5.28%, ctx=29, majf=0, minf=46 00:30:22.828 IO depths : 1=0.2%, 2=15.0%, 4=59.1%, 8=25.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:22.828 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:22.828 complete : 0=0.0%, 4=90.9%, 8=9.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:22.828 issued rwts: total=9180,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:22.828 latency : target=0, window=0, percentile=100.00%, depth=8 00:30:22.828 filename1: (groupid=0, jobs=1): err= 0: pid=568267: Thu Jul 25 10:19:07 2024 00:30:22.828 read: IOPS=1835, BW=14.3MiB/s (15.0MB/s)(71.7MiB/5002msec) 00:30:22.828 slat (nsec): min=4953, max=87593, avg=23282.21, stdev=10597.11 00:30:22.828 clat (usec): min=720, max=8107, avg=4278.49, stdev=474.15 00:30:22.828 lat (usec): min=755, max=8124, avg=4301.78, stdev=473.83 00:30:22.828 clat percentiles (usec): 
00:30:22.828 | 1.00th=[ 2966], 5.00th=[ 3687], 10.00th=[ 3916], 20.00th=[ 4047], 00:30:22.828 | 30.00th=[ 4146], 40.00th=[ 4228], 50.00th=[ 4293], 60.00th=[ 4359], 00:30:22.828 | 70.00th=[ 4424], 80.00th=[ 4490], 90.00th=[ 4621], 95.00th=[ 4752], 00:30:22.828 | 99.00th=[ 5997], 99.50th=[ 6718], 99.90th=[ 7832], 99.95th=[ 7963], 00:30:22.828 | 99.99th=[ 8094] 00:30:22.828 bw ( KiB/s): min=13680, max=15104, per=24.87%, avg=14583.11, stdev=552.06, samples=9 00:30:22.828 iops : min= 1710, max= 1888, avg=1822.89, stdev=69.01, samples=9 00:30:22.828 lat (usec) : 750=0.02%, 1000=0.02% 00:30:22.828 lat (msec) : 2=0.27%, 4=15.36%, 10=84.32% 00:30:22.828 cpu : usr=93.90%, sys=5.36%, ctx=67, majf=0, minf=51 00:30:22.828 IO depths : 1=0.1%, 2=14.8%, 4=59.0%, 8=26.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:22.828 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:22.828 complete : 0=0.0%, 4=91.1%, 8=8.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:22.828 issued rwts: total=9180,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:22.828 latency : target=0, window=0, percentile=100.00%, depth=8 00:30:22.828 filename1: (groupid=0, jobs=1): err= 0: pid=568268: Thu Jul 25 10:19:07 2024 00:30:22.828 read: IOPS=1823, BW=14.2MiB/s (14.9MB/s)(71.3MiB/5003msec) 00:30:22.828 slat (nsec): min=5041, max=76043, avg=23082.65, stdev=11651.18 00:30:22.828 clat (usec): min=743, max=8320, avg=4303.60, stdev=471.59 00:30:22.828 lat (usec): min=757, max=8329, avg=4326.68, stdev=470.65 00:30:22.828 clat percentiles (usec): 00:30:22.828 | 1.00th=[ 3294], 5.00th=[ 3785], 10.00th=[ 3949], 20.00th=[ 4047], 00:30:22.828 | 30.00th=[ 4146], 40.00th=[ 4228], 50.00th=[ 4293], 60.00th=[ 4359], 00:30:22.828 | 70.00th=[ 4424], 80.00th=[ 4555], 90.00th=[ 4621], 95.00th=[ 4817], 00:30:22.828 | 99.00th=[ 6194], 99.50th=[ 6718], 99.90th=[ 7373], 99.95th=[ 7963], 00:30:22.828 | 99.99th=[ 8291] 00:30:22.828 bw ( KiB/s): min=13456, max=14976, per=24.76%, avg=14519.11, stdev=584.66, samples=9 00:30:22.828 iops : min= 1682, max= 1872, avg=1814.89, stdev=73.08, samples=9 00:30:22.828 lat (usec) : 750=0.01%, 1000=0.03% 00:30:22.828 lat (msec) : 2=0.16%, 4=13.60%, 10=86.19% 00:30:22.828 cpu : usr=93.96%, sys=5.40%, ctx=57, majf=0, minf=67 00:30:22.828 IO depths : 1=0.1%, 2=14.7%, 4=59.1%, 8=26.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:22.828 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:22.828 complete : 0=0.0%, 4=91.0%, 8=9.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:22.828 issued rwts: total=9124,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:22.828 latency : target=0, window=0, percentile=100.00%, depth=8 00:30:22.828 00:30:22.828 Run status group 0 (all jobs): 00:30:22.828 READ: bw=57.3MiB/s (60.0MB/s), 14.2MiB/s-14.3MiB/s (14.9MB/s-15.0MB/s), io=286MiB (300MB), run=5001-5003msec 00:30:23.086 10:19:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:30:23.086 10:19:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:30:23.086 10:19:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:30:23.086 10:19:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:30:23.086 10:19:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:30:23.086 10:19:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:23.086 10:19:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:23.086 10:19:08 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:23.086 10:19:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:23.086 10:19:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:30:23.086 10:19:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:23.086 10:19:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:23.086 10:19:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:23.086 10:19:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:30:23.086 10:19:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:30:23.086 10:19:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:30:23.086 10:19:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:23.086 10:19:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:23.086 10:19:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:23.086 10:19:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:23.086 10:19:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:30:23.086 10:19:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:23.086 10:19:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:23.086 10:19:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:23.086 00:30:23.086 real 0m24.711s 00:30:23.086 user 4m35.278s 00:30:23.086 sys 0m6.625s 00:30:23.086 10:19:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:23.086 10:19:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:30:23.086 ************************************ 00:30:23.086 END TEST fio_dif_rand_params 00:30:23.086 ************************************ 00:30:23.086 10:19:08 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:30:23.086 10:19:08 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:30:23.086 10:19:08 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:23.086 10:19:08 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:23.345 ************************************ 00:30:23.345 START TEST fio_dif_digest 00:30:23.345 ************************************ 00:30:23.345 10:19:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1125 -- # fio_dif_digest 00:30:23.345 10:19:08 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:30:23.345 10:19:08 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:30:23.345 10:19:08 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:30:23.345 10:19:08 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:30:23.345 10:19:08 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:30:23.345 10:19:08 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:30:23.345 10:19:08 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:30:23.345 10:19:08 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:30:23.345 10:19:08 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:30:23.345 10:19:08 
nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:30:23.345 10:19:08 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:30:23.345 10:19:08 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:30:23.345 10:19:08 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:30:23.345 10:19:08 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:30:23.345 10:19:08 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:30:23.345 10:19:08 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:30:23.345 10:19:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:23.345 10:19:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:30:23.345 bdev_null0 00:30:23.345 10:19:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:23.346 10:19:08 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:30:23.346 10:19:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:23.346 10:19:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:30:23.346 10:19:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:23.346 10:19:08 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:30:23.346 10:19:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:23.346 10:19:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:30:23.346 10:19:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:23.346 10:19:08 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:23.346 10:19:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:23.346 10:19:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:30:23.346 [2024-07-25 10:19:08.291458] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:23.346 10:19:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:23.346 10:19:08 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:30:23.346 10:19:08 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:30:23.346 10:19:08 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:30:23.346 10:19:08 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 00:30:23.346 10:19:08 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 00:30:23.346 10:19:08 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:30:23.346 10:19:08 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:30:23.346 { 00:30:23.346 "params": { 00:30:23.346 "name": "Nvme$subsystem", 00:30:23.346 "trtype": "$TEST_TRANSPORT", 00:30:23.346 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:23.346 "adrfam": "ipv4", 00:30:23.346 "trsvcid": "$NVMF_PORT", 00:30:23.346 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:23.346 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:23.346 "hdgst": ${hdgst:-false}, 00:30:23.346 "ddgst": ${ddgst:-false} 00:30:23.346 }, 00:30:23.346 "method": 
"bdev_nvme_attach_controller" 00:30:23.346 } 00:30:23.346 EOF 00:30:23.346 )") 00:30:23.346 10:19:08 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:23.346 10:19:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:23.346 10:19:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:30:23.346 10:19:08 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:30:23.346 10:19:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:23.346 10:19:08 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:30:23.346 10:19:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local sanitizers 00:30:23.346 10:19:08 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:30:23.346 10:19:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:23.346 10:19:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # shift 00:30:23.346 10:19:08 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:30:23.346 10:19:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local asan_lib= 00:30:23.346 10:19:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:23.346 10:19:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:23.346 10:19:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libasan 00:30:23.346 10:19:08 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:30:23.346 10:19:08 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:30:23.346 10:19:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:23.346 10:19:08 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 
00:30:23.346 10:19:08 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:30:23.346 10:19:08 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:30:23.346 "params": { 00:30:23.346 "name": "Nvme0", 00:30:23.346 "trtype": "tcp", 00:30:23.346 "traddr": "10.0.0.2", 00:30:23.346 "adrfam": "ipv4", 00:30:23.346 "trsvcid": "4420", 00:30:23.346 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:23.346 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:23.346 "hdgst": true, 00:30:23.346 "ddgst": true 00:30:23.346 }, 00:30:23.346 "method": "bdev_nvme_attach_controller" 00:30:23.346 }' 00:30:23.346 10:19:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:23.346 10:19:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:23.346 10:19:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:23.346 10:19:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:30:23.346 10:19:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:30:23.346 10:19:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:23.346 10:19:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:23.346 10:19:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:23.346 10:19:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:30:23.346 10:19:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:30:23.634 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:30:23.634 ... 
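The fio_dif_digest case above repeats the create_subsystem flow with two differences visible in the trace: the null bdev now carries DIF type 3 protection metadata, and the generated attach parameters set "hdgst" and "ddgst" to true, so NVMe/TCP header and data digests are exercised end to end. As standalone RPCs against the target (assuming the default /var/tmp/spdk.sock RPC socket and the tcp transport created earlier in the run):

# 64 MB null bdev, 512-byte blocks, 16-byte metadata, DIF type 3
scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 \
    --serial-number 53313233-0 --allow-any-host
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 \
    -t tcp -a 10.0.0.2 -s 4420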
00:30:23.634 fio-3.35 00:30:23.634 Starting 3 threads 00:30:23.634 EAL: No free 2048 kB hugepages reported on node 1 00:30:35.829 00:30:35.829 filename0: (groupid=0, jobs=1): err= 0: pid=569135: Thu Jul 25 10:19:19 2024 00:30:35.829 read: IOPS=196, BW=24.6MiB/s (25.8MB/s)(246MiB/10006msec) 00:30:35.829 slat (nsec): min=5259, max=76582, avg=21128.44, stdev=4849.19 00:30:35.829 clat (usec): min=9191, max=21068, avg=15233.76, stdev=1230.23 00:30:35.829 lat (usec): min=9216, max=21087, avg=15254.89, stdev=1229.97 00:30:35.829 clat percentiles (usec): 00:30:35.829 | 1.00th=[11469], 5.00th=[13304], 10.00th=[13829], 20.00th=[14353], 00:30:35.829 | 30.00th=[14746], 40.00th=[15008], 50.00th=[15270], 60.00th=[15533], 00:30:35.829 | 70.00th=[15795], 80.00th=[16188], 90.00th=[16581], 95.00th=[17171], 00:30:35.829 | 99.00th=[17957], 99.50th=[18482], 99.90th=[21103], 99.95th=[21103], 00:30:35.829 | 99.99th=[21103] 00:30:35.829 bw ( KiB/s): min=24064, max=27648, per=34.36%, avg=25152.00, stdev=825.89, samples=20 00:30:35.829 iops : min= 188, max= 216, avg=196.50, stdev= 6.45, samples=20 00:30:35.829 lat (msec) : 10=0.36%, 20=99.49%, 50=0.15% 00:30:35.829 cpu : usr=93.64%, sys=5.80%, ctx=27, majf=0, minf=154 00:30:35.829 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:35.829 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:35.829 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:35.829 issued rwts: total=1967,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:35.829 latency : target=0, window=0, percentile=100.00%, depth=3 00:30:35.829 filename0: (groupid=0, jobs=1): err= 0: pid=569136: Thu Jul 25 10:19:19 2024 00:30:35.829 read: IOPS=186, BW=23.4MiB/s (24.5MB/s)(235MiB/10045msec) 00:30:35.829 slat (nsec): min=4474, max=74557, avg=21862.97, stdev=3960.76 00:30:35.829 clat (usec): min=10448, max=53602, avg=15985.51, stdev=1550.13 00:30:35.829 lat (usec): min=10472, max=53626, avg=16007.37, stdev=1550.15 00:30:35.829 clat percentiles (usec): 00:30:35.829 | 1.00th=[12125], 5.00th=[13960], 10.00th=[14484], 20.00th=[15008], 00:30:35.829 | 30.00th=[15401], 40.00th=[15664], 50.00th=[15926], 60.00th=[16188], 00:30:35.829 | 70.00th=[16581], 80.00th=[16909], 90.00th=[17433], 95.00th=[17957], 00:30:35.829 | 99.00th=[19006], 99.50th=[19268], 99.90th=[21890], 99.95th=[53740], 00:30:35.829 | 99.99th=[53740] 00:30:35.829 bw ( KiB/s): min=23086, max=26112, per=32.77%, avg=23989.50, stdev=726.22, samples=20 00:30:35.829 iops : min= 180, max= 204, avg=187.40, stdev= 5.70, samples=20 00:30:35.829 lat (msec) : 20=99.68%, 50=0.27%, 100=0.05% 00:30:35.829 cpu : usr=94.25%, sys=5.26%, ctx=29, majf=0, minf=205 00:30:35.829 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:35.829 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:35.829 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:35.829 issued rwts: total=1877,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:35.829 latency : target=0, window=0, percentile=100.00%, depth=3 00:30:35.829 filename0: (groupid=0, jobs=1): err= 0: pid=569137: Thu Jul 25 10:19:19 2024 00:30:35.829 read: IOPS=189, BW=23.7MiB/s (24.9MB/s)(238MiB/10008msec) 00:30:35.829 slat (nsec): min=4311, max=46816, avg=20497.98, stdev=4520.51 00:30:35.829 clat (usec): min=8668, max=59181, avg=15775.77, stdev=2696.86 00:30:35.829 lat (usec): min=8681, max=59203, avg=15796.26, stdev=2696.96 00:30:35.829 clat percentiles (usec): 00:30:35.829 | 1.00th=[12518], 
5.00th=[13566], 10.00th=[14091], 20.00th=[14746], 00:30:35.829 | 30.00th=[15139], 40.00th=[15401], 50.00th=[15664], 60.00th=[15926], 00:30:35.829 | 70.00th=[16319], 80.00th=[16581], 90.00th=[17171], 95.00th=[17695], 00:30:35.829 | 99.00th=[18482], 99.50th=[20055], 99.90th=[58983], 99.95th=[58983], 00:30:35.829 | 99.99th=[58983] 00:30:35.829 bw ( KiB/s): min=22272, max=27904, per=33.18%, avg=24283.90, stdev=1169.96, samples=20 00:30:35.829 iops : min= 174, max= 218, avg=189.70, stdev= 9.16, samples=20 00:30:35.829 lat (msec) : 10=0.11%, 20=99.37%, 50=0.21%, 100=0.32% 00:30:35.829 cpu : usr=94.49%, sys=4.97%, ctx=23, majf=0, minf=84 00:30:35.829 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:35.829 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:35.829 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:35.829 issued rwts: total=1900,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:35.829 latency : target=0, window=0, percentile=100.00%, depth=3 00:30:35.829 00:30:35.829 Run status group 0 (all jobs): 00:30:35.829 READ: bw=71.5MiB/s (74.9MB/s), 23.4MiB/s-24.6MiB/s (24.5MB/s-25.8MB/s), io=718MiB (753MB), run=10006-10045msec 00:30:35.829 10:19:19 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:30:35.829 10:19:19 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:30:35.829 10:19:19 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:30:35.829 10:19:19 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:30:35.829 10:19:19 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:30:35.829 10:19:19 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:35.829 10:19:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:35.829 10:19:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:30:35.829 10:19:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:35.829 10:19:19 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:30:35.829 10:19:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:35.829 10:19:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:30:35.829 10:19:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:35.829 00:30:35.829 real 0m11.203s 00:30:35.829 user 0m29.467s 00:30:35.829 sys 0m1.947s 00:30:35.829 10:19:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:35.829 10:19:19 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:30:35.829 ************************************ 00:30:35.829 END TEST fio_dif_digest 00:30:35.829 ************************************ 00:30:35.829 10:19:19 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:30:35.829 10:19:19 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:30:35.829 10:19:19 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:35.829 10:19:19 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:30:35.829 10:19:19 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:35.829 10:19:19 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:30:35.829 10:19:19 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:35.829 10:19:19 nvmf_dif -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:35.829 rmmod nvme_tcp 00:30:35.829 rmmod nvme_fabrics 00:30:35.830 rmmod 
nvme_keyring 00:30:35.830 10:19:19 nvmf_dif -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:35.830 10:19:19 nvmf_dif -- nvmf/common.sh@124 -- # set -e 00:30:35.830 10:19:19 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:30:35.830 10:19:19 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 562973 ']' 00:30:35.830 10:19:19 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 562973 00:30:35.830 10:19:19 nvmf_dif -- common/autotest_common.sh@950 -- # '[' -z 562973 ']' 00:30:35.830 10:19:19 nvmf_dif -- common/autotest_common.sh@954 -- # kill -0 562973 00:30:35.830 10:19:19 nvmf_dif -- common/autotest_common.sh@955 -- # uname 00:30:35.830 10:19:19 nvmf_dif -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:35.830 10:19:19 nvmf_dif -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 562973 00:30:35.830 10:19:19 nvmf_dif -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:30:35.830 10:19:19 nvmf_dif -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:30:35.830 10:19:19 nvmf_dif -- common/autotest_common.sh@968 -- # echo 'killing process with pid 562973' 00:30:35.830 killing process with pid 562973 00:30:35.830 10:19:19 nvmf_dif -- common/autotest_common.sh@969 -- # kill 562973 00:30:35.830 10:19:19 nvmf_dif -- common/autotest_common.sh@974 -- # wait 562973 00:30:35.830 10:19:19 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:30:35.830 10:19:19 nvmf_dif -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:30:36.087 Waiting for block devices as requested 00:30:36.088 0000:82:00.0 (8086 0a54): vfio-pci -> nvme 00:30:36.346 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:30:36.346 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:30:36.606 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:30:36.606 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:30:36.606 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:30:36.606 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:30:36.865 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:30:36.865 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:30:36.865 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:30:36.865 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:30:37.123 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:30:37.123 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:30:37.123 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:30:37.383 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:30:37.383 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:30:37.383 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:30:37.383 10:19:22 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:37.383 10:19:22 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:37.383 10:19:22 nvmf_dif -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:37.383 10:19:22 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:37.383 10:19:22 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:37.383 10:19:22 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:30:37.383 10:19:22 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:39.915 10:19:24 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:30:39.915 00:30:39.915 real 1m8.152s 00:30:39.915 user 6m33.089s 00:30:39.915 sys 0m18.753s 00:30:39.915 10:19:24 nvmf_dif -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:39.915 10:19:24 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:30:39.915 
************************************ 00:30:39.915 END TEST nvmf_dif 00:30:39.915 ************************************ 00:30:39.915 10:19:24 -- spdk/autotest.sh@297 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:30:39.915 10:19:24 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:30:39.915 10:19:24 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:39.915 10:19:24 -- common/autotest_common.sh@10 -- # set +x 00:30:39.915 ************************************ 00:30:39.915 START TEST nvmf_abort_qd_sizes 00:30:39.915 ************************************ 00:30:39.915 10:19:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:30:39.915 * Looking for test storage... 00:30:39.915 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:39.915 10:19:24 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:39.915 10:19:24 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:30:39.915 10:19:24 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:39.915 10:19:24 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:39.915 10:19:24 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:39.915 10:19:24 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:39.915 10:19:24 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:39.915 10:19:24 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:39.915 10:19:24 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:39.915 10:19:24 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:39.915 10:19:24 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:39.915 10:19:24 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:39.915 10:19:24 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:30:39.915 10:19:24 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:30:39.915 10:19:24 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:39.915 10:19:24 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:39.915 10:19:24 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:39.915 10:19:24 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:39.915 10:19:24 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:39.915 10:19:24 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:39.915 10:19:24 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:39.915 10:19:24 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:39.915 10:19:24 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:39.915 10:19:24 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:39.915 10:19:24 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:39.915 10:19:24 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:30:39.915 10:19:24 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:39.915 10:19:24 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:30:39.915 10:19:24 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:39.915 10:19:24 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:39.915 10:19:24 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:39.915 10:19:24 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:39.915 10:19:24 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:39.915 10:19:24 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:39.915 10:19:24 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:39.915 10:19:24 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:39.915 10:19:24 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:30:39.915 10:19:24 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:30:39.915 10:19:24 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:39.915 10:19:24 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:39.915 10:19:24 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:39.915 10:19:24 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:39.915 10:19:24 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:39.915 10:19:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:30:39.915 10:19:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:39.915 10:19:24 
nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:30:39.915 10:19:24 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:30:39.915 10:19:24 nvmf_abort_qd_sizes -- nvmf/common.sh@285 -- # xtrace_disable 00:30:39.915 10:19:24 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:30:42.477 10:19:27 nvmf_abort_qd_sizes -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:42.477 10:19:27 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # pci_devs=() 00:30:42.477 10:19:27 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:42.477 10:19:27 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:42.477 10:19:27 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:42.477 10:19:27 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:42.477 10:19:27 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:42.477 10:19:27 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # net_devs=() 00:30:42.477 10:19:27 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:42.477 10:19:27 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # e810=() 00:30:42.477 10:19:27 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # local -ga e810 00:30:42.477 10:19:27 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # x722=() 00:30:42.477 10:19:27 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # local -ga x722 00:30:42.477 10:19:27 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # mlx=() 00:30:42.477 10:19:27 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # local -ga mlx 00:30:42.477 10:19:27 nvmf_abort_qd_sizes -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:42.477 10:19:27 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:42.477 10:19:27 nvmf_abort_qd_sizes -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:42.477 10:19:27 nvmf_abort_qd_sizes -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:42.477 10:19:27 nvmf_abort_qd_sizes -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:42.477 10:19:27 nvmf_abort_qd_sizes -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:42.477 10:19:27 nvmf_abort_qd_sizes -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:42.477 10:19:27 nvmf_abort_qd_sizes -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:42.477 10:19:27 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:42.477 10:19:27 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:42.477 10:19:27 nvmf_abort_qd_sizes -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:42.477 10:19:27 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:42.477 10:19:27 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:42.477 10:19:27 nvmf_abort_qd_sizes -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:42.477 10:19:27 nvmf_abort_qd_sizes -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:42.477 10:19:27 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:42.477 10:19:27 nvmf_abort_qd_sizes -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:42.477 10:19:27 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:42.477 10:19:27 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # 
echo 'Found 0000:84:00.0 (0x8086 - 0x159b)' 00:30:42.477 Found 0000:84:00.0 (0x8086 - 0x159b) 00:30:42.477 10:19:27 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:42.477 10:19:27 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:42.477 10:19:27 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:42.477 10:19:27 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:42.477 10:19:27 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:42.477 10:19:27 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:42.477 10:19:27 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:84:00.1 (0x8086 - 0x159b)' 00:30:42.477 Found 0000:84:00.1 (0x8086 - 0x159b) 00:30:42.477 10:19:27 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:42.477 10:19:27 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:42.477 10:19:27 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:42.477 10:19:27 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:42.477 10:19:27 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:42.477 10:19:27 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:42.477 10:19:27 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:30:42.477 10:19:27 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:30:42.477 10:19:27 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:42.477 10:19:27 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:42.477 10:19:27 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:42.477 10:19:27 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:42.477 10:19:27 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:42.477 10:19:27 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:42.477 10:19:27 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:42.477 10:19:27 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.0: cvl_0_0' 00:30:42.477 Found net devices under 0000:84:00.0: cvl_0_0 00:30:42.477 10:19:27 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:42.477 10:19:27 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:42.477 10:19:27 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:42.477 10:19:27 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:42.477 10:19:27 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:42.477 10:19:27 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:42.477 10:19:27 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:42.477 10:19:27 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:42.477 10:19:27 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:84:00.1: cvl_0_1' 00:30:42.477 Found net devices under 0000:84:00.1: cvl_0_1 00:30:42.478 10:19:27 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:42.478 10:19:27 nvmf_abort_qd_sizes -- nvmf/common.sh@404 -- # (( 2 == 0 )) 
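Device discovery here works off a cached PCI-ID table (E810 0x1592/0x159b, x722 0x37d2, plus the mlx5 IDs) and resolves each matching function to its kernel netdev through sysfs, which is how 0000:84:00.0 and 0000:84:00.1 map to cvl_0_0 and cvl_0_1. The lookup reduces to:

for pci in 0000:84:00.0 0000:84:00.1; do
    # every entry under .../net/ is a netdev backed by this PCI function
    ls "/sys/bus/pci/devices/$pci/net/"
done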
00:30:42.478 10:19:27 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # is_hw=yes
00:30:42.478 10:19:27 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ yes == yes ]]
00:30:42.478 10:19:27 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # [[ tcp == tcp ]]
00:30:42.478 10:19:27 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # nvmf_tcp_init
00:30:42.478 10:19:27 nvmf_abort_qd_sizes -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1
00:30:42.478 10:19:27 nvmf_abort_qd_sizes -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:30:42.478 10:19:27 nvmf_abort_qd_sizes -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:30:42.478 10:19:27 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # (( 2 > 1 ))
00:30:42.478 10:19:27 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:30:42.478 10:19:27 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:30:42.478 10:19:27 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:30:42.478 10:19:27 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:30:42.478 10:19:27 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:30:42.478 10:19:27 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:30:42.478 10:19:27 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:30:42.478 10:19:27 nvmf_abort_qd_sizes -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:30:42.478 10:19:27 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:30:42.478 10:19:27 nvmf_abort_qd_sizes -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:30:42.478 10:19:27 nvmf_abort_qd_sizes -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:30:42.478 10:19:27 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:30:42.478 10:19:27 nvmf_abort_qd_sizes -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:30:42.478 10:19:27 nvmf_abort_qd_sizes -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:30:42.478 10:19:27 nvmf_abort_qd_sizes -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:30:42.478 10:19:27 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:30:42.478 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:30:42.478 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.121 ms
00:30:42.478 
00:30:42.478 --- 10.0.0.2 ping statistics ---
00:30:42.478 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:30:42.478 rtt min/avg/max/mdev = 0.121/0.121/0.121/0.000 ms
00:30:42.478 10:19:27 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:30:42.478 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:30:42.478 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.115 ms
00:30:42.478 
00:30:42.478 --- 10.0.0.1 ping statistics ---
00:30:42.478 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:30:42.478 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms
00:30:42.478 10:19:27 nvmf_abort_qd_sizes -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:30:42.478 10:19:27 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # return 0
00:30:42.478 10:19:27 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']'
00:30:42.478 10:19:27 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:30:43.415 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci
00:30:43.415 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci
00:30:43.415 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci
00:30:43.674 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci
00:30:43.674 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci
00:30:43.674 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci
00:30:43.674 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci
00:30:43.674 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci
00:30:43.674 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci
00:30:43.674 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci
00:30:43.674 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci
00:30:43.674 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci
00:30:43.674 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci
00:30:43.674 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci
00:30:43.674 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci
00:30:43.674 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci
00:30:44.665 0000:82:00.0 (8086 0a54): nvme -> vfio-pci
00:30:44.665 10:19:29 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:30:44.665 10:19:29 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:30:44.665 10:19:29 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:30:44.665 10:19:29 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:30:44.665 10:19:29 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:30:44.665 10:19:29 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:30:44.665 10:19:29 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf
00:30:44.665 10:19:29 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:30:44.665 10:19:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@724 -- # xtrace_disable
00:30:44.665 10:19:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x
00:30:44.665 10:19:29 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=574086
00:30:44.665 10:19:29 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf
00:30:44.665 10:19:29 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 574086
00:30:44.665 10:19:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@831 -- # '[' -z 574086 ']'
00:30:44.665 10:19:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:30:44.665 10:19:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # local max_retries=100
00:30:44.665 10:19:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:30:44.665 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:44.665 10:19:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # xtrace_disable 00:30:44.665 10:19:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:30:44.665 [2024-07-25 10:19:29.827949] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 00:30:44.665 [2024-07-25 10:19:29.828053] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:44.924 EAL: No free 2048 kB hugepages reported on node 1 00:30:44.924 [2024-07-25 10:19:29.913628] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:44.924 [2024-07-25 10:19:30.045351] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:44.924 [2024-07-25 10:19:30.045425] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:44.924 [2024-07-25 10:19:30.045451] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:44.924 [2024-07-25 10:19:30.045465] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:44.924 [2024-07-25 10:19:30.045477] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:44.924 [2024-07-25 10:19:30.045547] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:44.924 [2024-07-25 10:19:30.045602] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:30:44.924 [2024-07-25 10:19:30.045652] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:30:44.924 [2024-07-25 10:19:30.045656] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:45.182 10:19:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:30:45.182 10:19:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # return 0 00:30:45.182 10:19:30 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:45.182 10:19:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:45.182 10:19:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:30:45.182 10:19:30 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:45.182 10:19:30 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:30:45.182 10:19:30 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:30:45.182 10:19:30 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:30:45.182 10:19:30 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:30:45.182 10:19:30 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:30:45.182 10:19:30 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n 0000:82:00.0 ]] 00:30:45.182 10:19:30 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:30:45.182 10:19:30 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:30:45.182 10:19:30 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:82:00.0 ]] 00:30:45.182 10:19:30 
nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:30:45.182 10:19:30 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:30:45.182 10:19:30 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:30:45.182 10:19:30 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 1 )) 00:30:45.182 10:19:30 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:82:00.0 00:30:45.182 10:19:30 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:30:45.182 10:19:30 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:82:00.0 00:30:45.182 10:19:30 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:30:45.182 10:19:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:30:45.182 10:19:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:45.182 10:19:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:30:45.182 ************************************ 00:30:45.182 START TEST spdk_target_abort 00:30:45.182 ************************************ 00:30:45.182 10:19:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1125 -- # spdk_target 00:30:45.182 10:19:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:30:45.182 10:19:30 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:82:00.0 -b spdk_target 00:30:45.182 10:19:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:45.182 10:19:30 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:30:48.463 spdk_targetn1 00:30:48.463 10:19:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:48.463 10:19:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:48.463 10:19:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:48.463 10:19:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:30:48.463 [2024-07-25 10:19:33.188134] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:48.463 10:19:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:48.463 10:19:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:30:48.463 10:19:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:48.463 10:19:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:30:48.463 10:19:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:48.463 10:19:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:30:48.463 10:19:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:48.463 10:19:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:30:48.463 10:19:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:48.463 10:19:33 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:30:48.463 10:19:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:48.463 10:19:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:30:48.463 [2024-07-25 10:19:33.220391] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:48.463 10:19:33 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:48.463 10:19:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:30:48.463 10:19:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:30:48.463 10:19:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:30:48.463 10:19:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:30:48.463 10:19:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:30:48.463 10:19:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:30:48.463 10:19:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:30:48.463 10:19:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:30:48.463 10:19:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:30:48.463 10:19:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:30:48.463 10:19:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:30:48.463 10:19:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:30:48.463 10:19:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:30:48.463 10:19:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:30:48.463 10:19:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:30:48.463 10:19:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:30:48.463 10:19:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:48.463 10:19:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:30:48.463 10:19:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:30:48.463 10:19:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:30:48.463 10:19:33 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:30:48.463 EAL: No free 2048 kB hugepages 
reported on node 1 00:30:51.738 Initializing NVMe Controllers 00:30:51.738 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:30:51.738 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:30:51.738 Initialization complete. Launching workers. 00:30:51.738 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 11360, failed: 0 00:30:51.738 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1313, failed to submit 10047 00:30:51.738 success 743, unsuccess 570, failed 0 00:30:51.738 10:19:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:30:51.738 10:19:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:30:51.738 EAL: No free 2048 kB hugepages reported on node 1 00:30:55.015 Initializing NVMe Controllers 00:30:55.015 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:30:55.015 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:30:55.015 Initialization complete. Launching workers. 00:30:55.015 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8540, failed: 0 00:30:55.015 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1253, failed to submit 7287 00:30:55.015 success 300, unsuccess 953, failed 0 00:30:55.015 10:19:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:30:55.015 10:19:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:30:55.015 EAL: No free 2048 kB hugepages reported on node 1 00:30:58.303 Initializing NVMe Controllers 00:30:58.303 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:30:58.303 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:30:58.303 Initialization complete. Launching workers. 
00:30:58.303 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 31795, failed: 0 00:30:58.303 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2761, failed to submit 29034 00:30:58.303 success 538, unsuccess 2223, failed 0 00:30:58.303 10:19:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:30:58.303 10:19:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:58.303 10:19:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:30:58.303 10:19:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:58.303 10:19:43 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:30:58.303 10:19:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:58.303 10:19:43 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:30:59.676 10:19:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:59.676 10:19:44 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 574086 00:30:59.676 10:19:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@950 -- # '[' -z 574086 ']' 00:30:59.676 10:19:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # kill -0 574086 00:30:59.676 10:19:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@955 -- # uname 00:30:59.676 10:19:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:30:59.676 10:19:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 574086 00:30:59.676 10:19:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:30:59.676 10:19:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:30:59.676 10:19:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@968 -- # echo 'killing process with pid 574086' 00:30:59.676 killing process with pid 574086 00:30:59.676 10:19:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@969 -- # kill 574086 00:30:59.676 10:19:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@974 -- # wait 574086 00:30:59.676 00:30:59.676 real 0m14.412s 00:30:59.676 user 0m54.867s 00:30:59.676 sys 0m2.871s 00:30:59.676 10:19:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:30:59.676 10:19:44 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:30:59.676 ************************************ 00:30:59.676 END TEST spdk_target_abort 00:30:59.676 ************************************ 00:30:59.676 10:19:44 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:30:59.676 10:19:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:30:59.676 10:19:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@1107 -- # xtrace_disable 00:30:59.676 10:19:44 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:30:59.676 ************************************ 00:30:59.676 START TEST kernel_target_abort 00:30:59.676 
************************************ 00:30:59.676 10:19:44 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1125 -- # kernel_target 00:30:59.676 10:19:44 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:30:59.676 10:19:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@741 -- # local ip 00:30:59.676 10:19:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # ip_candidates=() 00:30:59.676 10:19:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # local -A ip_candidates 00:30:59.676 10:19:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:30:59.676 10:19:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:30:59.676 10:19:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:30:59.676 10:19:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:30:59.676 10:19:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:30:59.676 10:19:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:30:59.676 10:19:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:30:59.676 10:19:44 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:30:59.676 10:19:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:30:59.676 10:19:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:30:59.676 10:19:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:30:59.676 10:19:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:30:59.676 10:19:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:30:59.676 10:19:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 00:30:59.676 10:19:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:30:59.676 10:19:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:30:59.676 10:19:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:30:59.676 10:19:44 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:31:01.054 Waiting for block devices as requested 00:31:01.054 0000:82:00.0 (8086 0a54): vfio-pci -> nvme 00:31:01.313 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:31:01.313 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:31:01.313 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:31:01.572 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:31:01.572 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:31:01.572 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:31:01.572 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:31:01.572 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:31:01.829 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:31:01.829 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:31:01.829 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:31:01.830 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:31:02.089 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:31:02.089 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:31:02.089 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:31:02.089 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:31:02.348 10:19:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:31:02.348 10:19:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:31:02.348 10:19:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:31:02.348 10:19:47 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:31:02.348 10:19:47 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:31:02.348 10:19:47 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:31:02.348 10:19:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:31:02.348 10:19:47 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:31:02.348 10:19:47 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:31:02.348 No valid GPT data, bailing 00:31:02.348 10:19:47 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:31:02.348 10:19:47 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:31:02.348 10:19:47 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:31:02.348 10:19:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:31:02.348 10:19:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:31:02.348 10:19:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:31:02.348 10:19:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:31:02.348 10:19:47 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1
00:31:02.348 10:19:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn
00:31:02.348 10:19:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1
00:31:02.348 10:19:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # echo /dev/nvme0n1
00:31:02.348 10:19:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1
00:31:02.348 10:19:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1
00:31:02.348 10:19:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp
00:31:02.348 10:19:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420
00:31:02.348 10:19:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4
00:31:02.348 10:19:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/
00:31:02.348 10:19:47 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 --hostid=cd6acfbe-4794-e311-a299-001e67a97b02 -a 10.0.0.1 -t tcp -s 4420
00:31:02.607 
00:31:02.607 Discovery Log Number of Records 2, Generation counter 2
00:31:02.607 =====Discovery Log Entry 0======
00:31:02.607 trtype: tcp
00:31:02.607 adrfam: ipv4
00:31:02.607 subtype: current discovery subsystem
00:31:02.607 treq: not specified, sq flow control disable supported
00:31:02.607 portid: 1
00:31:02.607 trsvcid: 4420
00:31:02.607 subnqn: nqn.2014-08.org.nvmexpress.discovery
00:31:02.607 traddr: 10.0.0.1
00:31:02.607 eflags: none
00:31:02.607 sectype: none
00:31:02.607 =====Discovery Log Entry 1======
00:31:02.607 trtype: tcp
00:31:02.607 adrfam: ipv4
00:31:02.607 subtype: nvme subsystem
00:31:02.607 treq: not specified, sq flow control disable supported
00:31:02.607 portid: 1
00:31:02.607 trsvcid: 4420
00:31:02.607 subnqn: nqn.2016-06.io.spdk:testnqn
00:31:02.607 traddr: 10.0.0.1
00:31:02.607 eflags: none
00:31:02.607 sectype: none
00:31:02.607 10:19:47 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn
00:31:02.607 10:19:47 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp
00:31:02.607 10:19:47 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4
00:31:02.607 10:19:47 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1
00:31:02.607 10:19:47 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420
00:31:02.607 10:19:47 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn
00:31:02.607 10:19:47 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd
00:31:02.607 10:19:47 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r
00:31:02.607 10:19:47 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64)
00:31:02.607 10:19:47 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn
00:31:02.607 10:19:47
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:31:02.607 10:19:47 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:02.607 10:19:47 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:31:02.607 10:19:47 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:02.607 10:19:47 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:31:02.607 10:19:47 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:02.607 10:19:47 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:31:02.607 10:19:47 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:31:02.607 10:19:47 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:02.607 10:19:47 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:31:02.607 10:19:47 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:02.607 EAL: No free 2048 kB hugepages reported on node 1 00:31:05.895 Initializing NVMe Controllers 00:31:05.895 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:31:05.895 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:31:05.895 Initialization complete. Launching workers. 00:31:05.895 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 34974, failed: 0 00:31:05.895 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 34974, failed to submit 0 00:31:05.895 success 0, unsuccess 34974, failed 0 00:31:05.895 10:19:50 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:31:05.895 10:19:50 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:05.895 EAL: No free 2048 kB hugepages reported on node 1 00:31:09.186 Initializing NVMe Controllers 00:31:09.186 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:31:09.186 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:31:09.186 Initialization complete. Launching workers. 
00:31:09.186 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 67458, failed: 0 00:31:09.186 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 17030, failed to submit 50428 00:31:09.186 success 0, unsuccess 17030, failed 0 00:31:09.186 10:19:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:31:09.186 10:19:53 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:31:09.186 EAL: No free 2048 kB hugepages reported on node 1 00:31:12.466 Initializing NVMe Controllers 00:31:12.466 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:31:12.466 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:31:12.466 Initialization complete. Launching workers. 00:31:12.466 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 65588, failed: 0 00:31:12.466 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 16402, failed to submit 49186 00:31:12.466 success 0, unsuccess 16402, failed 0 00:31:12.466 10:19:56 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:31:12.466 10:19:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:31:12.466 10:19:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 00:31:12.466 10:19:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:31:12.466 10:19:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:31:12.466 10:19:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:31:12.466 10:19:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:31:12.466 10:19:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:31:12.466 10:19:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:31:12.466 10:19:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:31:13.844 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:31:13.844 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:31:13.844 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:31:13.844 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:31:13.844 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:31:13.844 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:31:13.844 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:31:13.844 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:31:13.844 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:31:13.844 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:31:13.844 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:31:13.844 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:31:13.844 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:31:13.844 0000:80:04.2 (8086 0e22): ioatdma -> 
vfio-pci 00:31:13.844 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:31:13.844 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:31:14.780 0000:82:00.0 (8086 0a54): nvme -> vfio-pci 00:31:14.780 00:31:14.780 real 0m14.984s 00:31:14.780 user 0m5.841s 00:31:14.780 sys 0m3.773s 00:31:14.780 10:19:59 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:14.780 10:19:59 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:31:14.780 ************************************ 00:31:14.780 END TEST kernel_target_abort 00:31:14.780 ************************************ 00:31:14.780 10:19:59 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:31:14.780 10:19:59 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:31:14.780 10:19:59 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:31:14.780 10:19:59 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 00:31:14.780 10:19:59 nvmf_abort_qd_sizes -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:14.780 10:19:59 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:31:14.780 10:19:59 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:14.780 10:19:59 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:14.780 rmmod nvme_tcp 00:31:14.780 rmmod nvme_fabrics 00:31:14.780 rmmod nvme_keyring 00:31:14.780 10:19:59 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:14.780 10:19:59 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:31:14.780 10:19:59 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:31:14.780 10:19:59 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 574086 ']' 00:31:14.780 10:19:59 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 574086 00:31:14.780 10:19:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@950 -- # '[' -z 574086 ']' 00:31:14.780 10:19:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # kill -0 574086 00:31:14.780 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (574086) - No such process 00:31:14.780 10:19:59 nvmf_abort_qd_sizes -- common/autotest_common.sh@977 -- # echo 'Process with pid 574086 is not found' 00:31:14.780 Process with pid 574086 is not found 00:31:14.780 10:19:59 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:31:14.780 10:19:59 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:31:16.157 Waiting for block devices as requested 00:31:16.157 0000:82:00.0 (8086 0a54): vfio-pci -> nvme 00:31:16.416 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:31:16.416 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:31:16.675 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:31:16.675 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:31:16.675 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:31:16.935 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:31:16.935 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:31:16.935 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:31:16.935 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:31:17.195 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:31:17.195 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:31:17.195 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:31:17.195 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:31:17.454 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:31:17.454 0000:80:04.1 (8086 
0e21): vfio-pci -> ioatdma 00:31:17.454 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:31:17.714 10:20:02 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:31:17.714 10:20:02 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:31:17.714 10:20:02 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:17.714 10:20:02 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:17.714 10:20:02 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:17.714 10:20:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:31:17.714 10:20:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:19.618 10:20:04 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:31:19.618 00:31:19.618 real 0m40.064s 00:31:19.618 user 1m3.302s 00:31:19.618 sys 0m10.924s 00:31:19.618 10:20:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:19.618 10:20:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:31:19.618 ************************************ 00:31:19.618 END TEST nvmf_abort_qd_sizes 00:31:19.618 ************************************ 00:31:19.618 10:20:04 -- spdk/autotest.sh@299 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:31:19.618 10:20:04 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:31:19.618 10:20:04 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:19.618 10:20:04 -- common/autotest_common.sh@10 -- # set +x 00:31:19.618 ************************************ 00:31:19.618 START TEST keyring_file 00:31:19.618 ************************************ 00:31:19.618 10:20:04 keyring_file -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:31:19.877 * Looking for test storage... 
00:31:19.877 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:31:19.877 10:20:04 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:31:19.877 10:20:04 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:19.877 10:20:04 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:31:19.877 10:20:04 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:19.877 10:20:04 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:19.877 10:20:04 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:19.877 10:20:04 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:19.877 10:20:04 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:19.877 10:20:04 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:19.877 10:20:04 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:19.877 10:20:04 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:19.877 10:20:04 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:19.877 10:20:04 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:19.877 10:20:04 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:31:19.877 10:20:04 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:31:19.877 10:20:04 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:19.877 10:20:04 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:19.877 10:20:04 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:19.877 10:20:04 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:19.877 10:20:04 keyring_file -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:19.877 10:20:04 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:19.877 10:20:04 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:19.877 10:20:04 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:19.877 10:20:04 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:19.877 10:20:04 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:19.877 10:20:04 keyring_file -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:19.877 10:20:04 keyring_file -- paths/export.sh@5 -- # export PATH 00:31:19.877 10:20:04 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:19.877 10:20:04 keyring_file -- nvmf/common.sh@47 -- # : 0 00:31:19.877 10:20:04 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:19.877 10:20:04 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:19.877 10:20:04 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:19.877 10:20:04 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:19.877 10:20:04 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:19.877 10:20:04 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:19.877 10:20:04 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:19.877 10:20:04 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:19.877 10:20:04 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:31:19.877 10:20:04 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:31:19.877 10:20:04 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:31:19.877 10:20:04 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:31:19.877 10:20:04 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:31:19.877 10:20:04 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:31:19.877 10:20:04 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:31:19.877 10:20:04 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:31:19.877 10:20:04 keyring_file -- keyring/common.sh@17 -- # name=key0 00:31:19.877 10:20:04 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:31:19.877 10:20:04 keyring_file -- keyring/common.sh@17 -- # digest=0 00:31:19.877 10:20:04 keyring_file -- keyring/common.sh@18 -- # mktemp 00:31:19.877 10:20:04 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.nLxxl64JwM 00:31:19.877 10:20:04 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:31:19.877 10:20:04 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:31:19.877 10:20:04 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:31:19.877 10:20:04 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:31:19.877 10:20:04 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:31:19.877 10:20:04 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:31:19.877 10:20:04 keyring_file -- nvmf/common.sh@705 -- # python - 00:31:19.877 10:20:04 keyring_file -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.nLxxl64JwM 00:31:19.877 10:20:04 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.nLxxl64JwM 00:31:19.877 10:20:04 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.nLxxl64JwM 00:31:19.877 10:20:04 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:31:19.877 10:20:04 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:31:19.877 10:20:04 keyring_file -- keyring/common.sh@17 -- # name=key1 00:31:19.877 10:20:04 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:31:19.877 10:20:04 keyring_file -- keyring/common.sh@17 -- # digest=0 00:31:19.877 10:20:04 keyring_file -- keyring/common.sh@18 -- # mktemp 00:31:19.877 10:20:04 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.XjDQzweIWV 00:31:19.877 10:20:04 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:31:19.877 10:20:04 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:31:19.877 10:20:04 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:31:19.877 10:20:04 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:31:19.877 10:20:04 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:31:19.877 10:20:04 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:31:19.877 10:20:04 keyring_file -- nvmf/common.sh@705 -- # python - 00:31:19.877 10:20:04 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.XjDQzweIWV 00:31:19.877 10:20:04 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.XjDQzweIWV 00:31:19.877 10:20:04 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.XjDQzweIWV 00:31:19.877 10:20:04 keyring_file -- keyring/file.sh@30 -- # tgtpid=579871 00:31:19.877 10:20:04 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:31:19.877 10:20:04 keyring_file -- keyring/file.sh@32 -- # waitforlisten 579871 00:31:19.877 10:20:04 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 579871 ']' 00:31:19.877 10:20:04 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:19.877 10:20:04 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:19.877 10:20:04 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:19.877 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:19.877 10:20:04 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:19.877 10:20:04 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:31:20.136 [2024-07-25 10:20:05.047802] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
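Stripped of xtrace noise, the key provisioning just traced is four steps: make a temp file, write an interchange-format PSK into it, clamp permissions to 0600, and register the file with the bperf keyring once its socket is up. A condensed sketch, assuming the same checkout paths as this run (format_interchange_psk is the nvmf/common.sh helper invoked above; its output begins with NVMeTLSkey-1):

# Provision one named key for the keyring tests (condensed from the trace above).
key0=00112233445566778899aabbccddeeff           # raw hex PSK from file.sh@15
key0path=$(mktemp)                              # this run got /tmp/tmp.nLxxl64JwM
format_interchange_psk "$key0" 0 > "$key0path"  # wrap into the NVMeTLSkey-1 interchange format
chmod 0600 "$key0path"                          # same permissions the harness sets
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
    -s /var/tmp/bperf.sock keyring_file_add_key key0 "$key0path"
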
00:31:20.136 [2024-07-25 10:20:05.047909] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid579871 ] 00:31:20.136 EAL: No free 2048 kB hugepages reported on node 1 00:31:20.136 [2024-07-25 10:20:05.120897] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:20.136 [2024-07-25 10:20:05.244952] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:31:20.394 10:20:05 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:20.394 10:20:05 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:31:20.394 10:20:05 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:31:20.394 10:20:05 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:20.394 10:20:05 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:31:20.394 [2024-07-25 10:20:05.528154] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:20.394 null0 00:31:20.394 [2024-07-25 10:20:05.560210] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:31:20.394 [2024-07-25 10:20:05.560568] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:31:20.677 [2024-07-25 10:20:05.568205] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:31:20.677 10:20:05 keyring_file -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:20.677 10:20:05 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:31:20.677 10:20:05 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:31:20.677 10:20:05 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:31:20.677 10:20:05 keyring_file -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:31:20.677 10:20:05 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:20.677 10:20:05 keyring_file -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:31:20.677 10:20:05 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:20.677 10:20:05 keyring_file -- common/autotest_common.sh@653 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:31:20.677 10:20:05 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:20.677 10:20:05 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:31:20.677 [2024-07-25 10:20:05.580232] nvmf_rpc.c: 788:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:31:20.677 request: 00:31:20.677 { 00:31:20.677 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:31:20.677 "secure_channel": false, 00:31:20.677 "listen_address": { 00:31:20.677 "trtype": "tcp", 00:31:20.677 "traddr": "127.0.0.1", 00:31:20.677 "trsvcid": "4420" 00:31:20.677 }, 00:31:20.677 "method": "nvmf_subsystem_add_listener", 00:31:20.677 "req_id": 1 00:31:20.677 } 00:31:20.677 Got JSON-RPC error response 00:31:20.677 response: 00:31:20.677 { 00:31:20.677 "code": -32602, 00:31:20.677 "message": "Invalid parameters" 00:31:20.677 } 00:31:20.677 10:20:05 keyring_file -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:31:20.677 10:20:05 keyring_file -- common/autotest_common.sh@653 -- # es=1 
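The request/response pair above is the crux of this negative test: the target already listens on 127.0.0.1:4420, so a second nvmf_subsystem_add_listener is rejected with -32602, and the NOT() wrapper from autotest_common.sh (whose exit-status bookkeeping continues just below) turns that expected failure into a pass. The same check can be reproduced by hand; a sketch, assuming the spdk_tgt from this run is still serving the default /var/tmp/spdk.sock:

# Expected to fail: a listener for this address/port already exists.
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
    nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 127.0.0.1 -s 4420
# rpc.py exits non-zero and prints the JSON-RPC error seen above:
#   "code": -32602, "message": "Invalid parameters"
# NOT() inverts that status, so the test passes only when the RPC is refused.
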
00:31:20.677 10:20:05 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:31:20.677 10:20:05 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:31:20.677 10:20:05 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:31:20.677 10:20:05 keyring_file -- keyring/file.sh@46 -- # bperfpid=579938
00:31:20.677 10:20:05 keyring_file -- keyring/file.sh@48 -- # waitforlisten 579938 /var/tmp/bperf.sock
00:31:20.677 10:20:05 keyring_file -- keyring/file.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z
00:31:20.677 10:20:05 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 579938 ']'
00:31:20.677 10:20:05 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock
00:31:20.678 10:20:05 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100
00:31:20.678 10:20:05 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:31:20.678 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:31:20.678 10:20:05 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable
00:31:20.678 10:20:05 keyring_file -- common/autotest_common.sh@10 -- # set +x
00:31:20.678 [2024-07-25 10:20:05.632566] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization...
00:31:20.678 [2024-07-25 10:20:05.632653] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid579938 ]
00:31:20.678 EAL: No free 2048 kB hugepages reported on node 1
00:31:20.678 [2024-07-25 10:20:05.699708] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:31:20.678 [2024-07-25 10:20:05.821978] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:31:20.938 10:20:05 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:31:20.938 10:20:05 keyring_file -- common/autotest_common.sh@864 -- # return 0
00:31:20.938 10:20:05 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.nLxxl64JwM
00:31:20.938 10:20:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.nLxxl64JwM
00:31:21.503 10:20:06 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.XjDQzweIWV
00:31:21.503 10:20:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.XjDQzweIWV
00:31:21.761 10:20:06 keyring_file -- keyring/file.sh@51 -- # get_key key0
00:31:21.761 10:20:06 keyring_file -- keyring/file.sh@51 -- # jq -r .path
00:31:21.761 10:20:06 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys
00:31:21.761 10:20:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:31:21.761 10:20:06 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")'
00:31:22.018 10:20:07 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.nLxxl64JwM == \/\t\m\p\/\t\m\p\.\n\L\x\x\l\6\4\J\w\M ]]
00:31:22.018 10:20:07 keyring_file -- keyring/file.sh@52 -- # get_key key1
00:31:22.018 10:20:07 keyring_file -- keyring/file.sh@52 -- # jq -r .path
00:31:22.018 10:20:07 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys
00:31:22.018 10:20:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:31:22.018 10:20:07 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")'
00:31:22.592 10:20:07 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.XjDQzweIWV == \/\t\m\p\/\t\m\p\.\X\j\D\Q\z\w\e\I\W\V ]]
00:31:22.592 10:20:07 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0
00:31:22.592 10:20:07 keyring_file -- keyring/common.sh@12 -- # get_key key0
00:31:22.592 10:20:07 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt
00:31:22.592 10:20:07 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys
00:31:22.592 10:20:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:31:22.592 10:20:07 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")'
00:31:23.163 10:20:08 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 ))
00:31:23.163 10:20:08 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1
00:31:23.163 10:20:08 keyring_file -- keyring/common.sh@12 -- # get_key key1
00:31:23.163 10:20:08 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt
00:31:23.164 10:20:08 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys
00:31:23.164 10:20:08 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:31:23.164 10:20:08 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")'
00:31:23.421 10:20:08 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 ))
00:31:23.421 10:20:08 keyring_file -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0
00:31:23.421 10:20:08 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0
00:31:23.679 [2024-07-25 10:20:08.761362] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental
00:31:23.679 nvme0n1
00:31:23.937 10:20:08 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0
00:31:23.937 10:20:08 keyring_file -- keyring/common.sh@12 -- # get_key key0
00:31:23.937 10:20:08 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt
00:31:23.937 10:20:08 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys
00:31:23.937 10:20:08 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:31:23.937 10:20:08 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")'
00:31:24.195 10:20:09 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 ))
00:31:24.195 10:20:09 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1
00:31:24.195 10:20:09 keyring_file -- keyring/common.sh@12 -- # get_key key1
00:31:24.195 10:20:09 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt
00:31:24.195 10:20:09 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys
00:31:24.195 10:20:09 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:31:24.195 10:20:09 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")'
00:31:24.760 10:20:09 keyring_file -- keyring/file.sh@60 -- # (( 1 == 1 ))
00:31:24.760 10:20:09 keyring_file -- keyring/file.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:31:24.760 Running I/O for 1 seconds...
00:31:26.132
00:31:26.132 Latency(us)
00:31:26.132 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:31:26.132 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096)
00:31:26.132 nvme0n1 : 1.01 5593.15 21.85 0.00 0.00 22770.03 9806.13 36311.80
00:31:26.132 ===================================================================================================================
00:31:26.132 Total : 5593.15 21.85 0.00 0.00 22770.03 9806.13 36311.80
00:31:26.132
00:31:26.132 0
00:31:26.132 10:20:10 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0
00:31:26.132 10:20:10 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0
00:31:26.133 10:20:11 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0
00:31:26.133 10:20:11 keyring_file -- keyring/common.sh@12 -- # get_key key0
00:31:26.133 10:20:11 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt
00:31:26.133 10:20:11 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys
00:31:26.133 10:20:11 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")'
00:31:26.133 10:20:11 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:31:26.697 10:20:11 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 ))
00:31:26.697 10:20:11 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1
00:31:26.697 10:20:11 keyring_file -- keyring/common.sh@12 -- # get_key key1
00:31:26.697 10:20:11 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt
00:31:26.697 10:20:11 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys
00:31:26.697 10:20:11 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:31:26.697 10:20:11 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")'
00:31:27.262 10:20:12 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 ))
00:31:27.262 10:20:12 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1
00:31:27.262 10:20:12 keyring_file -- common/autotest_common.sh@650 -- # local es=0
00:31:27.262 10:20:12 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1
00:31:27.262 10:20:12 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd
00:31:27.263 10:20:12 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:31:27.263 10:20:12 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd
00:31:27.263 10:20:12 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:31:27.263 10:20:12 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1
00:31:27.263 10:20:12 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1
00:31:27.523 [2024-07-25 10:20:12.656864] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected
00:31:27.523 [2024-07-25 10:20:12.657361] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e7a0 (107): Transport endpoint is not connected
00:31:27.523 [2024-07-25 10:20:12.658348] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x169e7a0 (9): Bad file descriptor
00:31:27.523 [2024-07-25 10:20:12.659346] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state
00:31:27.523 [2024-07-25 10:20:12.659369] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1
00:31:27.523 [2024-07-25 10:20:12.659384] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state.
00:31:27.523 request:
00:31:27.523 {
00:31:27.523 "name": "nvme0",
00:31:27.523 "trtype": "tcp",
00:31:27.523 "traddr": "127.0.0.1",
00:31:27.523 "adrfam": "ipv4",
00:31:27.523 "trsvcid": "4420",
00:31:27.523 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:31:27.523 "hostnqn": "nqn.2016-06.io.spdk:host0",
00:31:27.523 "prchk_reftag": false,
00:31:27.523 "prchk_guard": false,
00:31:27.523 "hdgst": false,
00:31:27.523 "ddgst": false,
00:31:27.523 "psk": "key1",
00:31:27.523 "method": "bdev_nvme_attach_controller",
00:31:27.523 "req_id": 1
00:31:27.523 }
00:31:27.523 Got JSON-RPC error response
00:31:27.523 response:
00:31:27.523 {
00:31:27.523 "code": -5,
00:31:27.523 "message": "Input/output error"
00:31:27.523 }
00:31:27.523 10:20:12 keyring_file -- common/autotest_common.sh@653 -- # es=1
00:31:27.523 10:20:12 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:31:27.523 10:20:12 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:31:27.523 10:20:12 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:31:27.523 10:20:12 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0
00:31:27.523 10:20:12 keyring_file -- keyring/common.sh@12 -- # get_key key0
00:31:27.523 10:20:12 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt
00:31:27.523 10:20:12 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys
00:31:27.523 10:20:12 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:31:27.523 10:20:12 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")'
00:31:27.780 10:20:12 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 ))
00:31:27.780 10:20:12 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1
00:31:27.780 10:20:12 keyring_file -- keyring/common.sh@12 -- # get_key key1
00:31:27.780 10:20:12 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt
00:31:27.780 10:20:12 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys
00:31:27.780 10:20:12 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:31:27.780 10:20:12 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")'
00:31:28.347 10:20:13 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 ))
00:31:28.347 10:20:13 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0
00:31:28.347 10:20:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0
00:31:28.604 10:20:13 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1
00:31:28.604 10:20:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1
00:31:28.862 10:20:13 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys
00:31:28.862 10:20:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:31:28.862 10:20:13 keyring_file -- keyring/file.sh@77 -- # jq length
00:31:29.428 10:20:14 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 ))
00:31:29.428 10:20:14 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.nLxxl64JwM
00:31:29.428 10:20:14 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.nLxxl64JwM
00:31:29.428 10:20:14 keyring_file -- common/autotest_common.sh@650 -- # local es=0
00:31:29.428 10:20:14 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.nLxxl64JwM
00:31:29.428 10:20:14 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd
00:31:29.428 10:20:14 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:31:29.428 10:20:14 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd
00:31:29.428 10:20:14 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:31:29.428 10:20:14 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.nLxxl64JwM
00:31:29.428 10:20:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.nLxxl64JwM
00:31:29.687 [2024-07-25 10:20:14.722528] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.nLxxl64JwM': 0100660
00:31:29.687 [2024-07-25 10:20:14.722569] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring
00:31:29.687 request:
00:31:29.687 {
00:31:29.687 "name": "key0",
00:31:29.687 "path": "/tmp/tmp.nLxxl64JwM",
00:31:29.687 "method": "keyring_file_add_key",
00:31:29.687 "req_id": 1
00:31:29.687 }
00:31:29.687 Got JSON-RPC error response
00:31:29.687 response:
00:31:29.687 {
00:31:29.687 "code": -1,
00:31:29.687 "message": "Operation not permitted"
00:31:29.687 }
00:31:29.687 10:20:14 keyring_file -- common/autotest_common.sh@653 -- # es=1
00:31:29.687 10:20:14 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:31:29.687 10:20:14 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:31:29.687 10:20:14 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 ))
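The chmod 0660 / keyring_file_add_key sequence above is the permission check: keyring_file_check_path rejects a key file whose mode grants group or other access, and the RPC surfaces that as -1 (Operation not permitted). A rough bash equivalent of the check, under the assumption that only owner bits are allowed (file name hypothetical):

    key=/tmp/tmp.example            # hypothetical key file path
    mode=$(stat -c '%a' "$key")
    # any group/other bit set -> reject, as keyring.c does for mode 0100660
    if (( (8#$mode & 077) != 0 )); then
        echo "Invalid permissions for key file '$key': $mode" >&2
        exit 1
    fi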
00:31:29.687 10:20:14 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.nLxxl64JwM
00:31:29.687 10:20:14 keyring_file -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.nLxxl64JwM
00:31:29.687 10:20:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.nLxxl64JwM
00:31:29.946 10:20:15 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.nLxxl64JwM
00:31:29.946 10:20:15 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0
00:31:29.946 10:20:15 keyring_file -- keyring/common.sh@12 -- # get_key key0
00:31:29.946 10:20:15 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt
00:31:29.946 10:20:15 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys
00:31:29.946 10:20:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:31:29.946 10:20:15 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")'
00:31:30.204 10:20:15 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 ))
00:31:30.204 10:20:15 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0
00:31:30.204 10:20:15 keyring_file -- common/autotest_common.sh@650 -- # local es=0
00:31:30.204 10:20:15 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0
00:31:30.204 10:20:15 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd
00:31:30.204 10:20:15 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:31:30.204 10:20:15 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd
00:31:30.204 10:20:15 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:31:30.204 10:20:15 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0
00:31:30.204 10:20:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0
00:31:30.462 [2024-07-25 10:20:15.480593] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.nLxxl64JwM': No such file or directory
00:31:30.462 [2024-07-25 10:20:15.480633] nvme_tcp.c:2582:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory
00:31:30.462 [2024-07-25 10:20:15.480664] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1
00:31:30.462 [2024-07-25 10:20:15.480677] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed
00:31:30.462 [2024-07-25 10:20:15.480690] bdev_nvme.c:6296:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1)
00:31:30.462 request:
00:31:30.462 {
00:31:30.462 "name": "nvme0",
00:31:30.462 "trtype": "tcp",
00:31:30.462 "traddr": "127.0.0.1",
00:31:30.462 "adrfam": "ipv4",
00:31:30.462 "trsvcid": "4420",
00:31:30.462 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:31:30.462 "hostnqn": "nqn.2016-06.io.spdk:host0",
00:31:30.462 "prchk_reftag": false,
00:31:30.462 "prchk_guard": false,
00:31:30.462 "hdgst": false,
00:31:30.462 "ddgst": false,
00:31:30.462 "psk": "key0",
00:31:30.462 "method": "bdev_nvme_attach_controller",
00:31:30.462 "req_id": 1
00:31:30.462 }
00:31:30.462 Got JSON-RPC error response
00:31:30.462 response:
00:31:30.462 {
00:31:30.462 "code": -19,
00:31:30.462 "message": "No such device"
00:31:30.462 }
00:31:30.462 10:20:15 keyring_file -- common/autotest_common.sh@653 -- # es=1
00:31:30.462 10:20:15 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:31:30.462 10:20:15 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:31:30.462 10:20:15 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:31:30.462 10:20:15 keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0
00:31:30.462 10:20:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0
00:31:30.720 10:20:15 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0
00:31:30.720 10:20:15 keyring_file -- keyring/common.sh@15 -- # local name key digest path
00:31:30.720 10:20:15 keyring_file -- keyring/common.sh@17 -- # name=key0
00:31:30.720 10:20:15 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff
00:31:30.720 10:20:15 keyring_file -- keyring/common.sh@17 -- # digest=0
00:31:30.720 10:20:15 keyring_file -- keyring/common.sh@18 -- # mktemp
00:31:30.720 10:20:15 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.yvm47SNcUP
00:31:30.720 10:20:15 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0
00:31:30.720 10:20:15 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0
00:31:30.720 10:20:15 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest
00:31:30.720 10:20:15 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1
00:31:30.720 10:20:15 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff
00:31:30.720 10:20:15 keyring_file -- nvmf/common.sh@704 -- # digest=0
00:31:30.720 10:20:15 keyring_file -- nvmf/common.sh@705 -- # python -
00:31:30.720 10:20:15 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.yvm47SNcUP
00:31:30.720 10:20:15 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.yvm47SNcUP
00:31:30.720 10:20:15 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.yvm47SNcUP
00:31:30.720 10:20:15 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.yvm47SNcUP
00:31:30.720 10:20:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.yvm47SNcUP
00:31:31.284 10:20:16 keyring_file -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0
00:31:31.284 10:20:16 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0
00:31:31.542 nvme0n1
00:31:31.542 10:20:16 keyring_file -- keyring/file.sh@99 -- # get_refcnt key0
00:31:31.542 10:20:16 keyring_file -- keyring/common.sh@12 -- # get_key key0
00:31:31.542 10:20:16 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt
00:31:31.542 10:20:16 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys
00:31:31.542 10:20:16 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")'
00:31:31.542 10:20:16 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:31:32.108 10:20:17 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 ))
00:31:32.108 10:20:17 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0
00:31:32.108 10:20:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0
00:31:32.673 10:20:17 keyring_file -- keyring/file.sh@101 -- # get_key key0
00:31:32.673 10:20:17 keyring_file -- keyring/file.sh@101 -- # jq -r .removed
00:31:32.673 10:20:17 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys
00:31:32.673 10:20:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:31:32.673 10:20:17 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")'
00:31:32.931 10:20:17 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]]
00:31:32.931 10:20:17 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0
00:31:32.931 10:20:17 keyring_file -- keyring/common.sh@12 -- # get_key key0
00:31:32.931 10:20:17 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt
00:31:32.931 10:20:17 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys
00:31:32.931 10:20:17 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")'
00:31:32.931 10:20:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:31:33.189 10:20:18 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 ))
00:31:33.189 10:20:18 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0
00:31:33.189 10:20:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0
00:31:33.446 10:20:18 keyring_file -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys
00:31:33.446 10:20:18 keyring_file -- keyring/file.sh@104 -- # jq length
00:31:33.446 10:20:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:31:33.704 10:20:18 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 ))
00:31:33.704 10:20:18 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.yvm47SNcUP
00:31:33.704 10:20:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.yvm47SNcUP
00:31:33.961 10:20:19 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.XjDQzweIWV
00:31:33.961 10:20:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.XjDQzweIWV
00:31:34.240 10:20:19 keyring_file -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0
00:31:34.240 10:20:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0
00:31:34.504 nvme0n1
00:31:34.504 10:20:19 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config
00:31:34.504 10:20:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config
00:31:35.070 10:20:20 keyring_file -- keyring/file.sh@112 -- # config='{
00:31:35.070 "subsystems": [
00:31:35.070 {
00:31:35.070 "subsystem": "keyring",
00:31:35.070 "config": [
00:31:35.070 {
00:31:35.070 "method": "keyring_file_add_key",
00:31:35.070 "params": {
00:31:35.070 "name": "key0",
00:31:35.070 "path": "/tmp/tmp.yvm47SNcUP"
00:31:35.070 }
00:31:35.070 },
00:31:35.070 {
00:31:35.070 "method": "keyring_file_add_key",
00:31:35.070 "params": {
00:31:35.070 "name": "key1",
00:31:35.070 "path": "/tmp/tmp.XjDQzweIWV"
00:31:35.070 }
00:31:35.070 }
00:31:35.070 ]
00:31:35.070 },
00:31:35.070 {
00:31:35.070 "subsystem": "iobuf",
00:31:35.070 "config": [
00:31:35.070 {
00:31:35.070 "method": "iobuf_set_options",
00:31:35.070 "params": {
00:31:35.070 "small_pool_count": 8192,
00:31:35.070 "large_pool_count": 1024,
00:31:35.070 "small_bufsize": 8192,
00:31:35.070 "large_bufsize": 135168
00:31:35.070 }
00:31:35.070 }
00:31:35.070 ]
00:31:35.070 },
00:31:35.070 {
00:31:35.070 "subsystem": "sock",
00:31:35.070 "config": [
00:31:35.070 {
00:31:35.070 "method": "sock_set_default_impl",
00:31:35.070 "params": {
00:31:35.070 "impl_name": "posix"
00:31:35.070 }
00:31:35.070 },
00:31:35.070 {
00:31:35.070 "method": "sock_impl_set_options",
00:31:35.070 "params": {
00:31:35.070 "impl_name": "ssl",
00:31:35.070 "recv_buf_size": 4096,
00:31:35.070 "send_buf_size": 4096,
00:31:35.070 "enable_recv_pipe": true,
00:31:35.070 "enable_quickack": false,
00:31:35.070 "enable_placement_id": 0,
00:31:35.070 "enable_zerocopy_send_server": true,
00:31:35.070 "enable_zerocopy_send_client": false,
00:31:35.070 "zerocopy_threshold": 0,
00:31:35.070 "tls_version": 0,
00:31:35.070 "enable_ktls": false
00:31:35.070 }
00:31:35.070 },
00:31:35.070 {
00:31:35.070 "method": "sock_impl_set_options",
00:31:35.070 "params": {
00:31:35.070 "impl_name": "posix",
00:31:35.070 "recv_buf_size": 2097152,
00:31:35.070 "send_buf_size": 2097152,
00:31:35.070 "enable_recv_pipe": true,
00:31:35.070 "enable_quickack": false,
00:31:35.070 "enable_placement_id": 0,
00:31:35.070 "enable_zerocopy_send_server": true,
00:31:35.070 "enable_zerocopy_send_client": false,
00:31:35.070 "zerocopy_threshold": 0,
00:31:35.070 "tls_version": 0,
00:31:35.070 "enable_ktls": false
00:31:35.070 }
00:31:35.070 }
00:31:35.070 ]
00:31:35.070 },
00:31:35.070 {
00:31:35.070 "subsystem": "vmd",
00:31:35.070 "config": []
00:31:35.070 },
00:31:35.070 {
00:31:35.070 "subsystem": "accel",
00:31:35.070 "config": [
00:31:35.070 {
00:31:35.070 "method": "accel_set_options",
00:31:35.070 "params": {
00:31:35.070 "small_cache_size": 128,
00:31:35.070 "large_cache_size": 16,
00:31:35.070 "task_count": 2048,
00:31:35.070 "sequence_count": 2048,
00:31:35.070 "buf_count": 2048
00:31:35.070 }
00:31:35.070 }
00:31:35.070 ]
00:31:35.070 },
00:31:35.070 {
00:31:35.070 "subsystem": "bdev",
00:31:35.070 "config": [
00:31:35.070 {
00:31:35.070 "method": "bdev_set_options",
00:31:35.070 "params": {
00:31:35.070 "bdev_io_pool_size": 65535,
00:31:35.070 "bdev_io_cache_size": 256,
00:31:35.070 "bdev_auto_examine": true,
00:31:35.070 "iobuf_small_cache_size": 128,
00:31:35.070 "iobuf_large_cache_size": 16
00:31:35.070 }
00:31:35.070 },
00:31:35.070 {
00:31:35.070 "method": "bdev_raid_set_options",
00:31:35.070 "params": {
00:31:35.070 "process_window_size_kb": 1024,
00:31:35.070 "process_max_bandwidth_mb_sec": 0
00:31:35.070 }
00:31:35.070 },
00:31:35.070 {
00:31:35.070 "method": "bdev_iscsi_set_options",
00:31:35.070 "params": {
00:31:35.070 "timeout_sec": 30
00:31:35.070 }
00:31:35.070 },
00:31:35.070 {
00:31:35.070 "method": "bdev_nvme_set_options",
00:31:35.070 "params": {
00:31:35.070 "action_on_timeout": "none",
00:31:35.070 "timeout_us": 0,
00:31:35.070 "timeout_admin_us": 0,
00:31:35.070 "keep_alive_timeout_ms": 10000,
00:31:35.070 "arbitration_burst": 0,
00:31:35.070 "low_priority_weight": 0,
00:31:35.070 "medium_priority_weight": 0,
00:31:35.070 "high_priority_weight": 0,
00:31:35.070 "nvme_adminq_poll_period_us": 10000,
00:31:35.070 "nvme_ioq_poll_period_us": 0,
00:31:35.070 "io_queue_requests": 512,
00:31:35.070 "delay_cmd_submit": true,
00:31:35.070 "transport_retry_count": 4,
00:31:35.070 "bdev_retry_count": 3,
00:31:35.070 "transport_ack_timeout": 0,
00:31:35.070 "ctrlr_loss_timeout_sec": 0,
00:31:35.070 "reconnect_delay_sec": 0,
00:31:35.070 "fast_io_fail_timeout_sec": 0,
00:31:35.070 "disable_auto_failback": false,
00:31:35.070 "generate_uuids": false,
00:31:35.070 "transport_tos": 0,
00:31:35.070 "nvme_error_stat": false,
00:31:35.070 "rdma_srq_size": 0,
00:31:35.070 "io_path_stat": false,
00:31:35.070 "allow_accel_sequence": false,
00:31:35.070 "rdma_max_cq_size": 0,
00:31:35.070 "rdma_cm_event_timeout_ms": 0,
00:31:35.070 "dhchap_digests": [
00:31:35.070 "sha256",
00:31:35.070 "sha384",
00:31:35.070 "sha512"
00:31:35.070 ],
00:31:35.070 "dhchap_dhgroups": [
00:31:35.070 "null",
00:31:35.070 "ffdhe2048",
00:31:35.070 "ffdhe3072",
00:31:35.070 "ffdhe4096",
00:31:35.070 "ffdhe6144",
00:31:35.070 "ffdhe8192"
00:31:35.070 ]
00:31:35.070 }
00:31:35.070 },
00:31:35.070 {
00:31:35.070 "method": "bdev_nvme_attach_controller",
00:31:35.070 "params": {
00:31:35.070 "name": "nvme0",
00:31:35.070 "trtype": "TCP",
00:31:35.070 "adrfam": "IPv4",
00:31:35.070 "traddr": "127.0.0.1",
00:31:35.070 "trsvcid": "4420",
00:31:35.070 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:31:35.070 "prchk_reftag": false,
00:31:35.070 "prchk_guard": false,
00:31:35.070 "ctrlr_loss_timeout_sec": 0,
00:31:35.070 "reconnect_delay_sec": 0,
00:31:35.070 "fast_io_fail_timeout_sec": 0,
00:31:35.070 "psk": "key0",
00:31:35.070 "hostnqn": "nqn.2016-06.io.spdk:host0",
00:31:35.070 "hdgst": false,
00:31:35.070 "ddgst": false
00:31:35.070 }
00:31:35.070 },
00:31:35.070 {
00:31:35.070 "method": "bdev_nvme_set_hotplug",
00:31:35.070 "params": {
00:31:35.070 "period_us": 100000,
00:31:35.070 "enable": false
00:31:35.070 }
00:31:35.070 },
00:31:35.070 {
00:31:35.070 "method": "bdev_wait_for_examine"
00:31:35.070 }
00:31:35.070 ]
00:31:35.070 },
00:31:35.070 {
00:31:35.070 "subsystem": "nbd",
00:31:35.070 "config": []
00:31:35.070 }
00:31:35.070 ]
00:31:35.070 }'
00:31:35.070 10:20:20 keyring_file -- keyring/file.sh@114 -- # killprocess 579938
00:31:35.070 10:20:20 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 579938 ']'
00:31:35.071 10:20:20 keyring_file -- common/autotest_common.sh@954 -- # kill -0 579938
00:31:35.071 10:20:20 keyring_file -- common/autotest_common.sh@955 -- # uname
00:31:35.071 10:20:20 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:31:35.071 10:20:20 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 579938
00:31:35.071 10:20:20 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:31:35.071 10:20:20 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:31:35.071 10:20:20 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 579938'
00:31:35.071 killing process with pid 579938
00:31:35.071 10:20:20 keyring_file -- common/autotest_common.sh@969 -- # kill 579938
00:31:35.071 Received shutdown signal, test time was about 1.000000 seconds
00:31:35.071
00:31:35.071 Latency(us)
00:31:35.071 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:31:35.071 ===================================================================================================================
00:31:35.071 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:31:35.071 10:20:20 keyring_file -- common/autotest_common.sh@974 -- # wait 579938
00:31:35.328 10:20:20 keyring_file -- keyring/file.sh@117 -- # bperfpid=581741
00:31:35.329 10:20:20 keyring_file -- keyring/file.sh@119 -- # waitforlisten 581741 /var/tmp/bperf.sock
00:31:35.329 10:20:20 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 581741 ']'
00:31:35.329 10:20:20 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock
00:31:35.329 10:20:20 keyring_file -- keyring/file.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63
00:31:35.329 10:20:20 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100
00:31:35.329 10:20:20 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:31:35.329 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
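The -c /dev/fd/63 argument in the bdevperf invocation above comes from bash process substitution: keyring/file.sh hands the configuration saved earlier to the second bdevperf instance through a file descriptor instead of a file on disk, roughly like this sketch (variable name hypothetical; flags and paths taken from the trace):

    # save the running bdevperf config, then feed it to a new instance via <(...)
    config=$(/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
        -s /var/tmp/bperf.sock save_config)
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
        -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z \
        -c <(echo "$config")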
00:31:35.329 10:20:20 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:31:35.329 "subsystems": [ 00:31:35.329 { 00:31:35.329 "subsystem": "keyring", 00:31:35.329 "config": [ 00:31:35.329 { 00:31:35.329 "method": "keyring_file_add_key", 00:31:35.329 "params": { 00:31:35.329 "name": "key0", 00:31:35.329 "path": "/tmp/tmp.yvm47SNcUP" 00:31:35.329 } 00:31:35.329 }, 00:31:35.329 { 00:31:35.329 "method": "keyring_file_add_key", 00:31:35.329 "params": { 00:31:35.329 "name": "key1", 00:31:35.329 "path": "/tmp/tmp.XjDQzweIWV" 00:31:35.329 } 00:31:35.329 } 00:31:35.329 ] 00:31:35.329 }, 00:31:35.329 { 00:31:35.329 "subsystem": "iobuf", 00:31:35.329 "config": [ 00:31:35.329 { 00:31:35.329 "method": "iobuf_set_options", 00:31:35.329 "params": { 00:31:35.329 "small_pool_count": 8192, 00:31:35.329 "large_pool_count": 1024, 00:31:35.329 "small_bufsize": 8192, 00:31:35.329 "large_bufsize": 135168 00:31:35.329 } 00:31:35.329 } 00:31:35.329 ] 00:31:35.329 }, 00:31:35.329 { 00:31:35.329 "subsystem": "sock", 00:31:35.329 "config": [ 00:31:35.329 { 00:31:35.329 "method": "sock_set_default_impl", 00:31:35.329 "params": { 00:31:35.329 "impl_name": "posix" 00:31:35.329 } 00:31:35.329 }, 00:31:35.329 { 00:31:35.329 "method": "sock_impl_set_options", 00:31:35.329 "params": { 00:31:35.329 "impl_name": "ssl", 00:31:35.329 "recv_buf_size": 4096, 00:31:35.329 "send_buf_size": 4096, 00:31:35.329 "enable_recv_pipe": true, 00:31:35.329 "enable_quickack": false, 00:31:35.329 "enable_placement_id": 0, 00:31:35.329 "enable_zerocopy_send_server": true, 00:31:35.329 "enable_zerocopy_send_client": false, 00:31:35.329 "zerocopy_threshold": 0, 00:31:35.329 "tls_version": 0, 00:31:35.329 "enable_ktls": false 00:31:35.329 } 00:31:35.329 }, 00:31:35.329 { 00:31:35.329 "method": "sock_impl_set_options", 00:31:35.329 "params": { 00:31:35.329 "impl_name": "posix", 00:31:35.329 "recv_buf_size": 2097152, 00:31:35.329 "send_buf_size": 2097152, 00:31:35.329 "enable_recv_pipe": true, 00:31:35.329 "enable_quickack": false, 00:31:35.329 "enable_placement_id": 0, 00:31:35.329 "enable_zerocopy_send_server": true, 00:31:35.329 "enable_zerocopy_send_client": false, 00:31:35.329 "zerocopy_threshold": 0, 00:31:35.329 "tls_version": 0, 00:31:35.329 "enable_ktls": false 00:31:35.329 } 00:31:35.329 } 00:31:35.329 ] 00:31:35.329 }, 00:31:35.329 { 00:31:35.329 "subsystem": "vmd", 00:31:35.329 "config": [] 00:31:35.329 }, 00:31:35.329 { 00:31:35.329 "subsystem": "accel", 00:31:35.329 "config": [ 00:31:35.329 { 00:31:35.329 "method": "accel_set_options", 00:31:35.329 "params": { 00:31:35.329 "small_cache_size": 128, 00:31:35.329 "large_cache_size": 16, 00:31:35.329 "task_count": 2048, 00:31:35.329 "sequence_count": 2048, 00:31:35.329 "buf_count": 2048 00:31:35.329 } 00:31:35.329 } 00:31:35.329 ] 00:31:35.329 }, 00:31:35.329 { 00:31:35.329 "subsystem": "bdev", 00:31:35.329 "config": [ 00:31:35.329 { 00:31:35.329 "method": "bdev_set_options", 00:31:35.329 "params": { 00:31:35.329 "bdev_io_pool_size": 65535, 00:31:35.329 "bdev_io_cache_size": 256, 00:31:35.329 "bdev_auto_examine": true, 00:31:35.329 "iobuf_small_cache_size": 128, 00:31:35.329 "iobuf_large_cache_size": 16 00:31:35.329 } 00:31:35.329 }, 00:31:35.329 { 00:31:35.329 "method": "bdev_raid_set_options", 00:31:35.329 "params": { 00:31:35.329 "process_window_size_kb": 1024, 00:31:35.329 "process_max_bandwidth_mb_sec": 0 00:31:35.329 } 00:31:35.329 }, 00:31:35.329 { 00:31:35.329 "method": "bdev_iscsi_set_options", 00:31:35.329 "params": { 00:31:35.329 "timeout_sec": 30 00:31:35.329 } 00:31:35.329 
}, 00:31:35.329 { 00:31:35.329 "method": "bdev_nvme_set_options", 00:31:35.329 "params": { 00:31:35.329 "action_on_timeout": "none", 00:31:35.329 "timeout_us": 0, 00:31:35.329 "timeout_admin_us": 0, 00:31:35.329 "keep_alive_timeout_ms": 10000, 00:31:35.329 "arbitration_burst": 0, 00:31:35.329 "low_priority_weight": 0, 00:31:35.329 "medium_priority_weight": 0, 00:31:35.329 "high_priority_weight": 0, 00:31:35.329 "nvme_adminq_poll_period_us": 10000, 00:31:35.329 "nvme_ioq_poll_period_us": 0, 00:31:35.329 "io_queue_requests": 512, 00:31:35.329 "delay_cmd_submit": true, 00:31:35.329 "transport_retry_count": 4, 00:31:35.329 "bdev_retry_count": 3, 00:31:35.329 "transport_ack_timeout": 0, 00:31:35.329 "ctrlr_loss_timeout_sec": 0, 00:31:35.329 "reconnect_delay_sec": 0, 00:31:35.329 "fast_io_fail_timeout_sec": 0, 00:31:35.329 "disable_auto_failback": false, 00:31:35.329 "generate_uuids": false, 00:31:35.329 "transport_tos": 0, 00:31:35.329 "nvme_error_stat": false, 00:31:35.329 "rdma_srq_size": 0, 00:31:35.329 "io_path_stat": false, 00:31:35.329 "allow_accel_sequence": false, 00:31:35.329 "rdma_max_cq_size": 0, 00:31:35.329 "rdma_cm_event_timeout_ms": 0, 00:31:35.329 "dhchap_digests": [ 00:31:35.329 "sha256", 00:31:35.329 "sha384", 00:31:35.329 "sha512" 00:31:35.329 ], 00:31:35.329 "dhchap_dhgroups": [ 00:31:35.329 "null", 00:31:35.329 "ffdhe2048", 00:31:35.329 "ffdhe3072", 00:31:35.329 "ffdhe4096", 00:31:35.329 "ffdhe6144", 00:31:35.329 "ffdhe8192" 00:31:35.329 ] 00:31:35.329 } 00:31:35.329 }, 00:31:35.329 { 00:31:35.329 "method": "bdev_nvme_attach_controller", 00:31:35.329 "params": { 00:31:35.329 "name": "nvme0", 00:31:35.329 "trtype": "TCP", 00:31:35.329 "adrfam": "IPv4", 00:31:35.329 "traddr": "127.0.0.1", 00:31:35.329 "trsvcid": "4420", 00:31:35.329 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:35.329 "prchk_reftag": false, 00:31:35.329 "prchk_guard": false, 00:31:35.329 "ctrlr_loss_timeout_sec": 0, 00:31:35.329 "reconnect_delay_sec": 0, 00:31:35.329 "fast_io_fail_timeout_sec": 0, 00:31:35.329 "psk": "key0", 00:31:35.329 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:35.329 "hdgst": false, 00:31:35.329 "ddgst": false 00:31:35.329 } 00:31:35.329 }, 00:31:35.329 { 00:31:35.329 "method": "bdev_nvme_set_hotplug", 00:31:35.329 "params": { 00:31:35.329 "period_us": 100000, 00:31:35.329 "enable": false 00:31:35.329 } 00:31:35.329 }, 00:31:35.329 { 00:31:35.329 "method": "bdev_wait_for_examine" 00:31:35.329 } 00:31:35.329 ] 00:31:35.329 }, 00:31:35.329 { 00:31:35.329 "subsystem": "nbd", 00:31:35.329 "config": [] 00:31:35.329 } 00:31:35.329 ] 00:31:35.329 }' 00:31:35.329 10:20:20 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:35.329 10:20:20 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:31:35.329 [2024-07-25 10:20:20.427632] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:31:35.329 [2024-07-25 10:20:20.427812] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid581741 ] 00:31:35.329 EAL: No free 2048 kB hugepages reported on node 1 00:31:35.588 [2024-07-25 10:20:20.530590] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:35.588 [2024-07-25 10:20:20.653591] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:31:35.846 [2024-07-25 10:20:20.846334] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:31:35.846 10:20:20 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:31:35.846 10:20:20 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:31:35.846 10:20:20 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:31:35.846 10:20:20 keyring_file -- keyring/file.sh@120 -- # jq length 00:31:35.846 10:20:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:36.411 10:20:21 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:31:36.411 10:20:21 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0 00:31:36.411 10:20:21 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:31:36.411 10:20:21 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:31:36.411 10:20:21 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:36.411 10:20:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:36.411 10:20:21 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:31:36.669 10:20:21 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:31:36.669 10:20:21 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 00:31:36.669 10:20:21 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:31:36.669 10:20:21 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:31:36.669 10:20:21 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:31:36.669 10:20:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:31:36.669 10:20:21 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:31:36.927 10:20:21 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:31:36.927 10:20:21 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:31:36.927 10:20:21 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:31:36.927 10:20:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:31:37.184 10:20:22 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:31:37.184 10:20:22 keyring_file -- keyring/file.sh@1 -- # cleanup 00:31:37.184 10:20:22 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.yvm47SNcUP /tmp/tmp.XjDQzweIWV 00:31:37.184 10:20:22 keyring_file -- keyring/file.sh@20 -- # killprocess 581741 00:31:37.184 10:20:22 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 581741 ']' 00:31:37.184 10:20:22 keyring_file -- common/autotest_common.sh@954 -- # kill -0 581741 00:31:37.184 10:20:22 keyring_file -- 
common/autotest_common.sh@955 -- # uname 00:31:37.184 10:20:22 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:31:37.184 10:20:22 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 581741 00:31:37.184 10:20:22 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:31:37.184 10:20:22 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:31:37.184 10:20:22 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 581741' 00:31:37.184 killing process with pid 581741 00:31:37.184 10:20:22 keyring_file -- common/autotest_common.sh@969 -- # kill 581741 00:31:37.184 Received shutdown signal, test time was about 1.000000 seconds 00:31:37.184 00:31:37.185 Latency(us) 00:31:37.185 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:37.185 =================================================================================================================== 00:31:37.185 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:31:37.185 10:20:22 keyring_file -- common/autotest_common.sh@974 -- # wait 581741 00:31:37.750 10:20:22 keyring_file -- keyring/file.sh@21 -- # killprocess 579871 00:31:37.750 10:20:22 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 579871 ']' 00:31:37.750 10:20:22 keyring_file -- common/autotest_common.sh@954 -- # kill -0 579871 00:31:37.750 10:20:22 keyring_file -- common/autotest_common.sh@955 -- # uname 00:31:37.750 10:20:22 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:31:37.750 10:20:22 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 579871 00:31:37.750 10:20:22 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:31:37.750 10:20:22 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:31:37.750 10:20:22 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 579871' 00:31:37.750 killing process with pid 579871 00:31:37.750 10:20:22 keyring_file -- common/autotest_common.sh@969 -- # kill 579871 00:31:37.750 [2024-07-25 10:20:22.645267] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:31:37.750 10:20:22 keyring_file -- common/autotest_common.sh@974 -- # wait 579871 00:31:38.008 00:31:38.008 real 0m18.359s 00:31:38.008 user 0m47.097s 00:31:38.008 sys 0m3.978s 00:31:38.008 10:20:23 keyring_file -- common/autotest_common.sh@1126 -- # xtrace_disable 00:31:38.008 10:20:23 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:31:38.008 ************************************ 00:31:38.008 END TEST keyring_file 00:31:38.008 ************************************ 00:31:38.008 10:20:23 -- spdk/autotest.sh@300 -- # [[ y == y ]] 00:31:38.008 10:20:23 -- spdk/autotest.sh@301 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:31:38.008 10:20:23 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:31:38.008 10:20:23 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:31:38.008 10:20:23 -- common/autotest_common.sh@10 -- # set +x 00:31:38.267 ************************************ 00:31:38.267 START TEST keyring_linux 00:31:38.267 ************************************ 00:31:38.267 10:20:23 keyring_linux -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:31:38.267 * Looking for test storage... 
00:31:38.267 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:31:38.267 10:20:23 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:31:38.267 10:20:23 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:38.267 10:20:23 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:31:38.267 10:20:23 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:38.267 10:20:23 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:38.267 10:20:23 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:38.267 10:20:23 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:38.267 10:20:23 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:38.267 10:20:23 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:38.267 10:20:23 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:38.267 10:20:23 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:38.267 10:20:23 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:38.267 10:20:23 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:38.267 10:20:23 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:cd6acfbe-4794-e311-a299-001e67a97b02 00:31:38.267 10:20:23 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=cd6acfbe-4794-e311-a299-001e67a97b02 00:31:38.267 10:20:23 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:38.267 10:20:23 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:38.267 10:20:23 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:38.267 10:20:23 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:38.267 10:20:23 keyring_linux -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:38.267 10:20:23 keyring_linux -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:38.267 10:20:23 keyring_linux -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:38.267 10:20:23 keyring_linux -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:38.267 10:20:23 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:38.267 10:20:23 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:38.267 10:20:23 keyring_linux -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:38.267 10:20:23 keyring_linux -- paths/export.sh@5 -- # export PATH 00:31:38.267 10:20:23 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:38.267 10:20:23 keyring_linux -- nvmf/common.sh@47 -- # : 0 00:31:38.267 10:20:23 keyring_linux -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:38.267 10:20:23 keyring_linux -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:38.267 10:20:23 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:38.267 10:20:23 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:38.267 10:20:23 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:38.267 10:20:23 keyring_linux -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:38.267 10:20:23 keyring_linux -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:38.267 10:20:23 keyring_linux -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:38.267 10:20:23 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:31:38.267 10:20:23 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:31:38.267 10:20:23 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:31:38.267 10:20:23 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:31:38.267 10:20:23 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:31:38.267 10:20:23 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:31:38.267 10:20:23 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:31:38.267 10:20:23 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:31:38.267 10:20:23 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:31:38.267 10:20:23 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:31:38.267 10:20:23 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:31:38.267 10:20:23 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:31:38.267 10:20:23 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:31:38.267 10:20:23 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:31:38.267 10:20:23 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:31:38.267 10:20:23 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:31:38.267 10:20:23 keyring_linux -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:31:38.267 10:20:23 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:31:38.267 10:20:23 keyring_linux -- nvmf/common.sh@705 -- # python - 00:31:38.267 10:20:23 keyring_linux -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:31:38.267 10:20:23 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:31:38.267 /tmp/:spdk-test:key0 00:31:38.267 10:20:23 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:31:38.267 10:20:23 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:31:38.267 10:20:23 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:31:38.267 10:20:23 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:31:38.267 10:20:23 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:31:38.267 10:20:23 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:31:38.267 10:20:23 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:31:38.267 10:20:23 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:31:38.267 10:20:23 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:31:38.267 10:20:23 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:31:38.267 10:20:23 keyring_linux -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:31:38.267 10:20:23 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:31:38.267 10:20:23 keyring_linux -- nvmf/common.sh@705 -- # python - 00:31:38.267 10:20:23 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:31:38.267 10:20:23 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:31:38.267 /tmp/:spdk-test:key1 00:31:38.267 10:20:23 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=582233 00:31:38.267 10:20:23 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:31:38.267 10:20:23 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 582233 00:31:38.267 10:20:23 keyring_linux -- common/autotest_common.sh@831 -- # '[' -z 582233 ']' 00:31:38.267 10:20:23 keyring_linux -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:38.267 10:20:23 keyring_linux -- common/autotest_common.sh@836 -- # local max_retries=100 00:31:38.267 10:20:23 keyring_linux -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:38.267 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:38.267 10:20:23 keyring_linux -- common/autotest_common.sh@840 -- # xtrace_disable 00:31:38.267 10:20:23 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:31:38.268 [2024-07-25 10:20:23.377174] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization... 
00:31:38.268 [2024-07-25 10:20:23.377265] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid582233 ]
00:31:38.268 EAL: No free 2048 kB hugepages reported on node 1
00:31:38.525 [2024-07-25 10:20:23.434965] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:31:38.525 [2024-07-25 10:20:23.554799] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:31:38.783 10:20:23 keyring_linux -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:31:38.783 10:20:23 keyring_linux -- common/autotest_common.sh@864 -- # return 0
00:31:38.783 10:20:23 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd
00:31:38.783 10:20:23 keyring_linux -- common/autotest_common.sh@561 -- # xtrace_disable
00:31:38.783 10:20:23 keyring_linux -- common/autotest_common.sh@10 -- # set +x
00:31:38.783 [2024-07-25 10:20:23.816919] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:31:38.783 null0
00:31:38.783 [2024-07-25 10:20:23.848983] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental
00:31:38.783 [2024-07-25 10:20:23.849520] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 ***
00:31:38.783 10:20:23 keyring_linux -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:31:38.783 10:20:23 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s
00:31:38.783 335280257
00:31:38.783 10:20:23 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s
00:31:38.783 605568877
00:31:38.783 10:20:23 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=582244
00:31:38.783 10:20:23 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc
00:31:38.783 10:20:23 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 582244 /var/tmp/bperf.sock
00:31:38.783 10:20:23 keyring_linux -- common/autotest_common.sh@831 -- # '[' -z 582244 ']'
00:31:38.783 10:20:23 keyring_linux -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock
00:31:38.783 10:20:23 keyring_linux -- common/autotest_common.sh@836 -- # local max_retries=100
00:31:38.783 10:20:23 keyring_linux -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:31:38.783 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:31:38.783 10:20:23 keyring_linux -- common/autotest_common.sh@840 -- # xtrace_disable
00:31:38.783 10:20:23 keyring_linux -- common/autotest_common.sh@10 -- # set +x
00:31:38.783 [2024-07-25 10:20:23.917611] Starting SPDK v24.09-pre git sha1 704257090 / DPDK 24.03.0 initialization...
00:31:38.783 [2024-07-25 10:20:23.917686] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid582244 ]
00:31:38.783 EAL: No free 2048 kB hugepages reported on node 1
00:31:39.041 [2024-07-25 10:20:23.984186] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:31:39.041 [2024-07-25 10:20:24.107822] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:31:39.298 10:20:24 keyring_linux -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:31:39.298 10:20:24 keyring_linux -- common/autotest_common.sh@864 -- # return 0
00:31:39.298 10:20:24 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable
00:31:39.298 10:20:24 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable
00:31:39.556 10:20:24 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init
00:31:39.556 10:20:24 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
00:31:39.815 10:20:24 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0
00:31:39.815 10:20:24 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0
00:31:40.380 [2024-07-25 10:20:25.379747] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental
00:31:40.380 nvme0n1
00:31:40.380 10:20:25 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0
00:31:40.380 10:20:25 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0
00:31:40.380 10:20:25 keyring_linux -- keyring/linux.sh@20 -- # local sn
00:31:40.380 10:20:25 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys
00:31:40.380 10:20:25 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:31:40.380 10:20:25 keyring_linux -- keyring/linux.sh@22 -- # jq length
00:31:40.637 10:20:25 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count ))
00:31:40.637 10:20:25 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 ))
00:31:40.637 10:20:25 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0
00:31:40.637 10:20:25 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn
00:31:40.637 10:20:25 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys
00:31:40.637 10:20:25 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:31:40.637 10:20:25 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")'
00:31:41.203 10:20:26 keyring_linux -- keyring/linux.sh@25 -- # sn=335280257
00:31:41.203 10:20:26 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0
00:31:41.203 10:20:26 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0
00:31:41.203 10:20:26 keyring_linux -- keyring/linux.sh@26 -- # [[ 335280257 == \3\3\5\2\8\0\2\5\7 ]]
00:31:41.203 10:20:26 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 335280257
00:31:41.203 10:20:26 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]]
00:31:41.203 10:20:26 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:31:41.203 Running I/O for 1 seconds...
00:31:42.137
00:31:42.137 Latency(us)
00:31:42.137 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:31:42.137 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:31:42.137 nvme0n1 : 1.02 5427.08 21.20 0.00 0.00 23400.90 12427.57 36894.34
00:31:42.137 ===================================================================================================================
00:31:42.137 Total : 5427.08 21.20 0.00 0.00 23400.90 12427.57 36894.34
00:31:42.137 0
00:31:42.137 10:20:27 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0
00:31:42.137 10:20:27 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0
00:31:42.702 10:20:27 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0
00:31:42.702 10:20:27 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name=
00:31:42.702 10:20:27 keyring_linux -- keyring/linux.sh@20 -- # local sn
00:31:42.702 10:20:27 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys
00:31:42.702 10:20:27 keyring_linux -- keyring/linux.sh@22 -- # jq length
00:31:42.702 10:20:27 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:31:42.960 10:20:27 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count ))
00:31:42.960 10:20:27 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 ))
00:31:42.960 10:20:27 keyring_linux -- keyring/linux.sh@23 -- # return
00:31:42.960 10:20:27 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1
00:31:42.960 10:20:27 keyring_linux -- common/autotest_common.sh@650 -- # local es=0
00:31:42.960 10:20:27 keyring_linux -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1
00:31:42.960 10:20:27 keyring_linux -- common/autotest_common.sh@638 -- # local arg=bperf_cmd
00:31:42.960 10:20:27 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:31:42.960 10:20:27 keyring_linux -- common/autotest_common.sh@642 -- # type -t bperf_cmd
00:31:42.960 10:20:27 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:31:42.960 10:20:27 keyring_linux -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1
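(Note: before the deliberately failing attach below, the successful verification traced above reduces to a short keyctl/RPC round-trip. A condensed sketch, assuming scripts/rpc.py is invoked as rpc.py and bdevperf is listening on /var/tmp/bperf.sock; kernel serials such as 335280257 vary per run:

sn=$(keyctl add user :spdk-test:key0 "NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:" @s)   # register the PSK in the session keyring
rpc.py -s /var/tmp/bperf.sock keyring_get_keys | jq length                          # expect 1 key visible to bdevperf
kr_sn=$(rpc.py -s /var/tmp/bperf.sock keyring_get_keys | jq -r '.[] | select(.name == ":spdk-test:key0") | .sn')
[ "$kr_sn" = "$(keyctl search @s user :spdk-test:key0)" ]                           # SPDK and the kernel agree on the serial
keyctl print "$sn"                                                                  # payload round-trips byte-for-byte)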
00:31:42.960 10:20:27 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1
00:31:43.527 [2024-07-25 10:20:28.586938] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected
00:31:43.527 [2024-07-25 10:20:28.587146] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc55fe0 (107): Transport endpoint is not connected
00:31:43.527 [2024-07-25 10:20:28.588138] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc55fe0 (9): Bad file descriptor
00:31:43.527 [2024-07-25 10:20:28.589136] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state
00:31:43.527 [2024-07-25 10:20:28.589157] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1
00:31:43.527 [2024-07-25 10:20:28.589172] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state.
00:31:43.527 request:
00:31:43.527 {
00:31:43.527 "name": "nvme0",
00:31:43.527 "trtype": "tcp",
00:31:43.527 "traddr": "127.0.0.1",
00:31:43.527 "adrfam": "ipv4",
00:31:43.527 "trsvcid": "4420",
00:31:43.527 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:31:43.527 "hostnqn": "nqn.2016-06.io.spdk:host0",
00:31:43.527 "prchk_reftag": false,
00:31:43.527 "prchk_guard": false,
00:31:43.527 "hdgst": false,
00:31:43.527 "ddgst": false,
00:31:43.527 "psk": ":spdk-test:key1",
00:31:43.527 "method": "bdev_nvme_attach_controller",
00:31:43.527 "req_id": 1
00:31:43.527 }
00:31:43.527 Got JSON-RPC error response
00:31:43.527 response:
00:31:43.527 {
00:31:43.527 "code": -5,
00:31:43.527 "message": "Input/output error"
00:31:43.527 }
00:31:43.527 10:20:28 keyring_linux -- common/autotest_common.sh@653 -- # es=1
00:31:43.527 10:20:28 keyring_linux -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:31:43.527 10:20:28 keyring_linux -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:31:43.527 10:20:28 keyring_linux -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:31:43.527 10:20:28 keyring_linux -- keyring/linux.sh@1 -- # cleanup
00:31:43.527 10:20:28 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1
00:31:43.527 10:20:28 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0
00:31:43.527 10:20:28 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn
00:31:43.527 10:20:28 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0
00:31:43.527 10:20:28 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0
00:31:43.527 10:20:28 keyring_linux -- keyring/linux.sh@33 -- # sn=335280257
00:31:43.527 10:20:28 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 335280257
00:31:43.527 1 links removed
00:31:43.527 10:20:28 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1
00:31:43.527 10:20:28 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1
00:31:43.527 10:20:28 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn
00:31:43.527 10:20:28 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1
00:31:43.527 10:20:28 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1
00:31:43.527 10:20:28 keyring_linux -- keyring/linux.sh@33 -- # sn=605568877
00:31:43.527 10:20:28 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 605568877
00:31:43.527 1 links removed
00:31:43.527 10:20:28 keyring_linux -- keyring/linux.sh@41 -- # killprocess 582244
00:31:43.527 10:20:28 keyring_linux -- common/autotest_common.sh@950 -- # '[' -z 582244 ']'
00:31:43.527 10:20:28 keyring_linux -- common/autotest_common.sh@954 -- # kill -0 582244
00:31:43.527 10:20:28 keyring_linux -- common/autotest_common.sh@955 -- # uname
00:31:43.527 10:20:28 keyring_linux -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:31:43.527 10:20:28 keyring_linux -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 582244
00:31:43.527 10:20:28 keyring_linux -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:31:43.527 10:20:28 keyring_linux -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:31:43.527 10:20:28 keyring_linux -- common/autotest_common.sh@968 -- # echo 'killing process with pid 582244'
00:31:43.527 killing process with pid 582244
00:31:43.527 10:20:28 keyring_linux -- common/autotest_common.sh@969 -- # kill 582244
00:31:43.527 Received shutdown signal, test time was about 1.000000 seconds
00:31:43.527
00:31:43.527 Latency(us)
00:31:43.527 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:31:43.527 ===================================================================================================================
00:31:43.527 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:31:43.527 10:20:28 keyring_linux -- common/autotest_common.sh@974 -- # wait 582244
00:31:43.786 10:20:28 keyring_linux -- keyring/linux.sh@42 -- # killprocess 582233
00:31:43.786 10:20:28 keyring_linux -- common/autotest_common.sh@950 -- # '[' -z 582233 ']'
00:31:43.786 10:20:28 keyring_linux -- common/autotest_common.sh@954 -- # kill -0 582233
00:31:43.786 10:20:28 keyring_linux -- common/autotest_common.sh@955 -- # uname
00:31:44.043 10:20:28 keyring_linux -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:31:44.043 10:20:28 keyring_linux -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 582233
00:31:44.043 10:20:28 keyring_linux -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:31:44.043 10:20:28 keyring_linux -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:31:44.043 10:20:28 keyring_linux -- common/autotest_common.sh@968 -- # echo 'killing process with pid 582233'
00:31:44.043 killing process with pid 582233
00:31:44.043 10:20:28 keyring_linux -- common/autotest_common.sh@969 -- # kill 582233
00:31:44.043 10:20:28 keyring_linux -- common/autotest_common.sh@974 -- # wait 582233
00:31:44.609
00:31:44.609 real 0m6.306s
00:31:44.609 user 0m13.059s
00:31:44.609 sys 0m1.748s
00:31:44.609 10:20:29 keyring_linux -- common/autotest_common.sh@1126 -- # xtrace_disable
00:31:44.609 10:20:29 keyring_linux -- common/autotest_common.sh@10 -- # set +x
00:31:44.609 ************************************
00:31:44.609 END TEST keyring_linux
00:31:44.609 ************************************
00:31:44.609 10:20:29 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']'
00:31:44.609 10:20:29 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']'
00:31:44.609 10:20:29 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']'
00:31:44.609 10:20:29 -- spdk/autotest.sh@325 -- # '[' 0 -eq 1 ']'
00:31:44.609 10:20:29 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']'
00:31:44.609 10:20:29 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']'
00:31:44.609 10:20:29 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']'
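(Note: both killprocess calls above follow the same guarded pattern; the remaining autotest teardown checks continue below. A condensed sketch of that helper, illustrative rather than the verbatim autotest_common.sh function:

killprocess() {
  local pid=$1
  [ -n "$pid" ] || return 1
  kill -0 "$pid" 2>/dev/null || return 0                         # already gone
  [ "$(ps --no-headers -o comm= "$pid")" = sudo ] && return 1    # never kill a sudo wrapper
  echo "killing process with pid $pid"
  kill "$pid"
  wait "$pid"                                                    # reap and propagate the exit status
}

The reactor_0/reactor_1 names reported by ps are the SPDK reactor threads of spdk_tgt and bdevperf respectively.)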
00:31:44.609 10:20:29 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']'
00:31:44.609 10:20:29 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']'
00:31:44.609 10:20:29 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']'
00:31:44.609 10:20:29 -- spdk/autotest.sh@360 -- # '[' 0 -eq 1 ']'
00:31:44.609 10:20:29 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]]
00:31:44.609 10:20:29 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]]
00:31:44.609 10:20:29 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]]
00:31:44.609 10:20:29 -- spdk/autotest.sh@379 -- # [[ 0 -eq 1 ]]
00:31:44.609 10:20:29 -- spdk/autotest.sh@384 -- # trap - SIGINT SIGTERM EXIT
00:31:44.609 10:20:29 -- spdk/autotest.sh@386 -- # timing_enter post_cleanup
00:31:44.609 10:20:29 -- common/autotest_common.sh@724 -- # xtrace_disable
00:31:44.609 10:20:29 -- common/autotest_common.sh@10 -- # set +x
00:31:44.609 10:20:29 -- spdk/autotest.sh@387 -- # autotest_cleanup
00:31:44.609 10:20:29 -- common/autotest_common.sh@1392 -- # local autotest_es=0
00:31:44.609 10:20:29 -- common/autotest_common.sh@1393 -- # xtrace_disable
00:31:44.609 10:20:29 -- common/autotest_common.sh@10 -- # set +x
00:31:47.138 INFO: APP EXITING
00:31:47.138 INFO: killing all VMs
00:31:47.138 INFO: killing vhost app
00:31:47.138 INFO: EXIT DONE
00:31:48.137 0000:82:00.0 (8086 0a54): Already using the nvme driver
00:31:48.440 0000:00:04.7 (8086 0e27): Already using the ioatdma driver
00:31:48.440 0000:00:04.6 (8086 0e26): Already using the ioatdma driver
00:31:48.440 0000:00:04.5 (8086 0e25): Already using the ioatdma driver
00:31:48.440 0000:00:04.4 (8086 0e24): Already using the ioatdma driver
00:31:48.440 0000:00:04.3 (8086 0e23): Already using the ioatdma driver
00:31:48.440 0000:00:04.2 (8086 0e22): Already using the ioatdma driver
00:31:48.440 0000:00:04.1 (8086 0e21): Already using the ioatdma driver
00:31:48.440 0000:00:04.0 (8086 0e20): Already using the ioatdma driver
00:31:48.440 0000:80:04.7 (8086 0e27): Already using the ioatdma driver
00:31:48.440 0000:80:04.6 (8086 0e26): Already using the ioatdma driver
00:31:48.440 0000:80:04.5 (8086 0e25): Already using the ioatdma driver
00:31:48.440 0000:80:04.4 (8086 0e24): Already using the ioatdma driver
00:31:48.440 0000:80:04.3 (8086 0e23): Already using the ioatdma driver
00:31:48.440 0000:80:04.2 (8086 0e22): Already using the ioatdma driver
00:31:48.440 0000:80:04.1 (8086 0e21): Already using the ioatdma driver
00:31:48.440 0000:80:04.0 (8086 0e20): Already using the ioatdma driver
00:31:50.343 Cleaning
00:31:50.343 Removing: /var/run/dpdk/spdk0/config
00:31:50.343 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0
00:31:50.343 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1
00:31:50.343 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2
00:31:50.343 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3
00:31:50.343 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0
00:31:50.343 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1
00:31:50.343 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2
00:31:50.343 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3
00:31:50.343 Removing: /var/run/dpdk/spdk0/fbarray_memzone
00:31:50.343 Removing: /var/run/dpdk/spdk0/hugepage_info
00:31:50.343 Removing: /var/run/dpdk/spdk1/config
00:31:50.343 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0
00:31:50.343 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1
00:31:50.343 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2
00:31:50.343 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3
00:31:50.343 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0
00:31:50.343 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1
00:31:50.343 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2
00:31:50.343 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3
00:31:50.343 Removing: /var/run/dpdk/spdk1/fbarray_memzone
00:31:50.343 Removing: /var/run/dpdk/spdk1/hugepage_info
00:31:50.343 Removing: /var/run/dpdk/spdk1/mp_socket
00:31:50.343 Removing: /var/run/dpdk/spdk2/config
00:31:50.343 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0
00:31:50.343 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1
00:31:50.343 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2
00:31:50.343 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3
00:31:50.343 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0
00:31:50.343 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1
00:31:50.343 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2
00:31:50.343 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3
00:31:50.343 Removing: /var/run/dpdk/spdk2/fbarray_memzone
00:31:50.343 Removing: /var/run/dpdk/spdk2/hugepage_info
00:31:50.343 Removing: /var/run/dpdk/spdk3/config
00:31:50.343 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0
00:31:50.343 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1
00:31:50.343 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2
00:31:50.343 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3
00:31:50.343 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0
00:31:50.343 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1
00:31:50.343 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2
00:31:50.343 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3
00:31:50.343 Removing: /var/run/dpdk/spdk3/fbarray_memzone
00:31:50.343 Removing: /var/run/dpdk/spdk3/hugepage_info
00:31:50.343 Removing: /var/run/dpdk/spdk4/config
00:31:50.343 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0
00:31:50.343 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1
00:31:50.343 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2
00:31:50.343 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3
00:31:50.343 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0
00:31:50.344 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1
00:31:50.344 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2
00:31:50.344 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3
00:31:50.344 Removing: /var/run/dpdk/spdk4/fbarray_memzone
00:31:50.344 Removing: /var/run/dpdk/spdk4/hugepage_info
00:31:50.344 Removing: /dev/shm/bdev_svc_trace.1
00:31:50.344 Removing: /dev/shm/nvmf_trace.0
00:31:50.344 Removing: /dev/shm/spdk_tgt_trace.pid310767
00:31:50.344 Removing: /var/run/dpdk/spdk0
00:31:50.344 Removing: /var/run/dpdk/spdk1
00:31:50.344 Removing: /var/run/dpdk/spdk2
00:31:50.344 Removing: /var/run/dpdk/spdk3
00:31:50.344 Removing: /var/run/dpdk/spdk4
00:31:50.344 Removing: /var/run/dpdk/spdk_pid308594
00:31:50.344 Removing: /var/run/dpdk/spdk_pid309834
00:31:50.344 Removing: /var/run/dpdk/spdk_pid310767
00:31:50.344 Removing: /var/run/dpdk/spdk_pid311204
00:31:50.344 Removing: /var/run/dpdk/spdk_pid311891
00:31:50.344 Removing: /var/run/dpdk/spdk_pid312040
00:31:50.344 Removing: /var/run/dpdk/spdk_pid312747
00:31:50.344 Removing: /var/run/dpdk/spdk_pid312764
00:31:50.344 Removing: /var/run/dpdk/spdk_pid313034
00:31:50.344 Removing: /var/run/dpdk/spdk_pid314590
00:31:50.344 Removing: /var/run/dpdk/spdk_pid315505
00:31:50.344 Removing: /var/run/dpdk/spdk_pid315826
00:31:50.344 Removing: /var/run/dpdk/spdk_pid316128
00:31:50.344 Removing: /var/run/dpdk/spdk_pid316340
00:31:50.344 Removing: /var/run/dpdk/spdk_pid316534
00:31:50.344 Removing: /var/run/dpdk/spdk_pid316805
00:31:50.344 Removing: /var/run/dpdk/spdk_pid316965
00:31:50.344 Removing: /var/run/dpdk/spdk_pid317143
00:31:50.344 Removing: /var/run/dpdk/spdk_pid317681
00:31:50.344 Removing: /var/run/dpdk/spdk_pid320517
00:31:50.344 Removing: /var/run/dpdk/spdk_pid320804
00:31:50.344 Removing: /var/run/dpdk/spdk_pid321072
00:31:50.344 Removing: /var/run/dpdk/spdk_pid321092
00:31:50.344 Removing: /var/run/dpdk/spdk_pid321530
00:31:50.344 Removing: /var/run/dpdk/spdk_pid321650
00:31:50.344 Removing: /var/run/dpdk/spdk_pid322096
00:31:50.344 Removing: /var/run/dpdk/spdk_pid322217
00:31:50.344 Removing: /var/run/dpdk/spdk_pid322395
00:31:50.344 Removing: /var/run/dpdk/spdk_pid322521
00:31:50.344 Removing: /var/run/dpdk/spdk_pid322687
00:31:50.344 Removing: /var/run/dpdk/spdk_pid322821
00:31:50.344 Removing: /var/run/dpdk/spdk_pid323190
00:31:50.344 Removing: /var/run/dpdk/spdk_pid323357
00:31:50.344 Removing: /var/run/dpdk/spdk_pid323666
00:31:50.344 Removing: /var/run/dpdk/spdk_pid325894
00:31:50.344 Removing: /var/run/dpdk/spdk_pid328665
00:31:50.344 Removing: /var/run/dpdk/spdk_pid335658
00:31:50.344 Removing: /var/run/dpdk/spdk_pid336248
00:31:50.344 Removing: /var/run/dpdk/spdk_pid339313
00:31:50.344 Removing: /var/run/dpdk/spdk_pid339475
00:31:50.344 Removing: /var/run/dpdk/spdk_pid342256
00:31:50.344 Removing: /var/run/dpdk/spdk_pid346232
00:31:50.344 Removing: /var/run/dpdk/spdk_pid348550
00:31:50.344 Removing: /var/run/dpdk/spdk_pid355259
00:31:50.344 Removing: /var/run/dpdk/spdk_pid360744
00:31:50.344 Removing: /var/run/dpdk/spdk_pid361946
00:31:50.344 Removing: /var/run/dpdk/spdk_pid362615
00:31:50.344 Removing: /var/run/dpdk/spdk_pid373521
00:31:50.344 Removing: /var/run/dpdk/spdk_pid376317
00:31:50.344 Removing: /var/run/dpdk/spdk_pid402908
00:31:50.344 Removing: /var/run/dpdk/spdk_pid406120
00:31:50.344 Removing: /var/run/dpdk/spdk_pid410200
00:31:50.344 Removing: /var/run/dpdk/spdk_pid415059
00:31:50.344 Removing: /var/run/dpdk/spdk_pid415061
00:31:50.344 Removing: /var/run/dpdk/spdk_pid415714
00:31:50.344 Removing: /var/run/dpdk/spdk_pid416248
00:31:50.344 Removing: /var/run/dpdk/spdk_pid416909
00:31:50.344 Removing: /var/run/dpdk/spdk_pid417308
00:31:50.344 Removing: /var/run/dpdk/spdk_pid417310
00:31:50.344 Removing: /var/run/dpdk/spdk_pid417524
00:31:50.344 Removing: /var/run/dpdk/spdk_pid417597
00:31:50.344 Removing: /var/run/dpdk/spdk_pid417712
00:31:50.344 Removing: /var/run/dpdk/spdk_pid418247
00:31:50.344 Removing: /var/run/dpdk/spdk_pid418905
00:31:50.344 Removing: /var/run/dpdk/spdk_pid419479
00:31:50.344 Removing: /var/run/dpdk/spdk_pid419965
00:31:50.344 Removing: /var/run/dpdk/spdk_pid419972
00:31:50.344 Removing: /var/run/dpdk/spdk_pid420234
00:31:50.344 Removing: /var/run/dpdk/spdk_pid421277
00:31:50.344 Removing: /var/run/dpdk/spdk_pid422100
00:31:50.344 Removing: /var/run/dpdk/spdk_pid427432
00:31:50.344 Removing: /var/run/dpdk/spdk_pid458441
00:31:50.344 Removing: /var/run/dpdk/spdk_pid462111
00:31:50.344 Removing: /var/run/dpdk/spdk_pid463290
00:31:50.344 Removing: /var/run/dpdk/spdk_pid464555
00:31:50.344 Removing: /var/run/dpdk/spdk_pid464629
00:31:50.344 Removing: /var/run/dpdk/spdk_pid464778
00:31:50.344 Removing: /var/run/dpdk/spdk_pid464914
00:31:50.344 Removing: /var/run/dpdk/spdk_pid465479
00:31:50.344 Removing: /var/run/dpdk/spdk_pid466793
00:31:50.344 Removing: /var/run/dpdk/spdk_pid467662
00:31:50.344 Removing: /var/run/dpdk/spdk_pid468088
00:31:50.344 Removing: /var/run/dpdk/spdk_pid469823
00:31:50.344 Removing: /var/run/dpdk/spdk_pid470256
00:31:50.344 Removing: /var/run/dpdk/spdk_pid470836
00:31:50.344 Removing: /var/run/dpdk/spdk_pid473502
00:31:50.344 Removing: /var/run/dpdk/spdk_pid479556
00:31:50.344 Removing: /var/run/dpdk/spdk_pid482445
00:31:50.344 Removing: /var/run/dpdk/spdk_pid486232
00:31:50.344 Removing: /var/run/dpdk/spdk_pid487277
00:31:50.344 Removing: /var/run/dpdk/spdk_pid488292
00:31:50.344 Removing: /var/run/dpdk/spdk_pid491597
00:31:50.344 Removing: /var/run/dpdk/spdk_pid493968
00:31:50.344 Removing: /var/run/dpdk/spdk_pid498341
00:31:50.344 Removing: /var/run/dpdk/spdk_pid498350
00:31:50.344 Removing: /var/run/dpdk/spdk_pid501379
00:31:50.344 Removing: /var/run/dpdk/spdk_pid501519
00:31:50.344 Removing: /var/run/dpdk/spdk_pid501650
00:31:50.344 Removing: /var/run/dpdk/spdk_pid501916
00:31:50.344 Removing: /var/run/dpdk/spdk_pid501921
00:31:50.344 Removing: /var/run/dpdk/spdk_pid504820
00:31:50.344 Removing: /var/run/dpdk/spdk_pid505157
00:31:50.344 Removing: /var/run/dpdk/spdk_pid507959
00:31:50.344 Removing: /var/run/dpdk/spdk_pid509811
00:31:50.344 Removing: /var/run/dpdk/spdk_pid513495
00:31:50.344 Removing: /var/run/dpdk/spdk_pid516953
00:31:50.344 Removing: /var/run/dpdk/spdk_pid524240
00:31:50.344 Removing: /var/run/dpdk/spdk_pid529211
00:31:50.344 Removing: /var/run/dpdk/spdk_pid529214
00:31:50.344 Removing: /var/run/dpdk/spdk_pid542533
00:31:50.344 Removing: /var/run/dpdk/spdk_pid543068
00:31:50.344 Removing: /var/run/dpdk/spdk_pid543511
00:31:50.344 Removing: /var/run/dpdk/spdk_pid544015
00:31:50.344 Removing: /var/run/dpdk/spdk_pid544703
00:31:50.344 Removing: /var/run/dpdk/spdk_pid545130
00:31:50.344 Removing: /var/run/dpdk/spdk_pid545663
00:31:50.344 Removing: /var/run/dpdk/spdk_pid546080
00:31:50.344 Removing: /var/run/dpdk/spdk_pid548710
00:31:50.344 Removing: /var/run/dpdk/spdk_pid548857
00:31:50.344 Removing: /var/run/dpdk/spdk_pid552775
00:31:50.344 Removing: /var/run/dpdk/spdk_pid552838
00:31:50.344 Removing: /var/run/dpdk/spdk_pid554557
00:31:50.344 Removing: /var/run/dpdk/spdk_pid560095
00:31:50.344 Removing: /var/run/dpdk/spdk_pid560100
00:31:50.344 Removing: /var/run/dpdk/spdk_pid563140
00:31:50.344 Removing: /var/run/dpdk/spdk_pid564538
00:31:50.344 Removing: /var/run/dpdk/spdk_pid565817
00:31:50.344 Removing: /var/run/dpdk/spdk_pid566668
00:31:50.344 Removing: /var/run/dpdk/spdk_pid568094
00:31:50.344 Removing: /var/run/dpdk/spdk_pid568962
00:31:50.344 Removing: /var/run/dpdk/spdk_pid574504
00:31:50.344 Removing: /var/run/dpdk/spdk_pid574777
00:31:50.344 Removing: /var/run/dpdk/spdk_pid575174
00:31:50.602 Removing: /var/run/dpdk/spdk_pid576730
00:31:50.602 Removing: /var/run/dpdk/spdk_pid577130
00:31:50.602 Removing: /var/run/dpdk/spdk_pid577409
00:31:50.602 Removing: /var/run/dpdk/spdk_pid579871
00:31:50.602 Removing: /var/run/dpdk/spdk_pid579938
00:31:50.602 Removing: /var/run/dpdk/spdk_pid581741
00:31:50.602 Removing: /var/run/dpdk/spdk_pid582233
00:31:50.602 Removing: /var/run/dpdk/spdk_pid582244
00:31:50.602 Clean
00:31:50.602 10:20:35 -- common/autotest_common.sh@1451 -- # return 0
00:31:50.602 10:20:35 -- spdk/autotest.sh@388 -- # timing_exit post_cleanup
00:31:50.602 10:20:35 -- common/autotest_common.sh@730 -- # xtrace_disable
00:31:50.602 10:20:35 -- common/autotest_common.sh@10 -- # set +x
00:31:50.602 10:20:35 -- spdk/autotest.sh@390 -- # timing_exit autotest
00:31:50.602 10:20:35 -- common/autotest_common.sh@730 -- # xtrace_disable
00:31:50.602 10:20:35 -- common/autotest_common.sh@10 -- # set +x
00:31:50.602 10:20:35 -- spdk/autotest.sh@391 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:31:50.602 10:20:35 -- spdk/autotest.sh@393 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]]
00:31:50.602 10:20:35 -- spdk/autotest.sh@393 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log
00:31:50.602 10:20:35 -- spdk/autotest.sh@395 -- # hash lcov
00:31:50.603 10:20:35 -- spdk/autotest.sh@395 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]]
00:31:50.603 10:20:35 -- spdk/autotest.sh@397 -- # hostname
00:31:50.603 10:20:35 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-gp-08 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info
00:31:50.860 geninfo: WARNING: invalid characters removed from testname!
00:32:29.566 10:21:13 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:32:39.540 10:21:24 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:32:47.677 10:21:32 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:32:51.859 10:21:36 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:32:59.967 10:21:44 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
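(Note: stripped of the repeated --rc flag blocks and absolute paths, the coverage post-processing traced above — the final filter and the cleanup continue below — is a conventional lcov capture/merge/strip pipeline. A sketch, with SPDK_DIR and OUT standing in for the workspace and output paths:

LCOV="lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --no-external -q"
$LCOV -c -d "$SPDK_DIR" -t "$(hostname)" -o "$OUT/cov_test.info"                  # capture post-test counters
$LCOV -a "$OUT/cov_base.info" -a "$OUT/cov_test.info" -o "$OUT/cov_total.info"    # merge with the pre-test baseline
for pat in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
  $LCOV -r "$OUT/cov_total.info" "$pat" -o "$OUT/cov_total.info"                  # drop vendored and helper-app code
done
rm -f "$OUT/cov_base.info" "$OUT/cov_test.info"                                   # keep only the merged report)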
00:33:08.071 10:21:51 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:33:16.177 10:21:59 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:33:16.177 10:21:59 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:33:16.177 10:21:59 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]]
00:33:16.177 10:21:59 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:33:16.177 10:21:59 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:33:16.177 10:21:59 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:33:16.177 10:21:59 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:33:16.177 10:21:59 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:33:16.177 10:21:59 -- paths/export.sh@5 -- $ export PATH
00:33:16.177 10:21:59 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:33:16.177 10:21:59 -- common/autobuild_common.sh@446 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
00:33:16.177 10:21:59 -- common/autobuild_common.sh@447 -- $ date +%s
00:33:16.177 10:21:59 -- common/autobuild_common.sh@447 -- $ mktemp -dt spdk_1721895719.XXXXXX
00:33:16.177 10:21:59 -- common/autobuild_common.sh@447 -- $ SPDK_WORKSPACE=/tmp/spdk_1721895719.LTnSv1
00:33:16.177 10:21:59 -- common/autobuild_common.sh@449 -- $ [[ -n '' ]]
00:33:16.177 10:21:59 -- common/autobuild_common.sh@453 -- $ '[' -n '' ']'
00:33:16.177 10:21:59 -- common/autobuild_common.sh@456 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/'
00:33:16.177 10:21:59 -- common/autobuild_common.sh@460 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
00:33:16.177 10:21:59 -- common/autobuild_common.sh@462 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:33:16.177 10:21:59 -- common/autobuild_common.sh@463 -- $ get_config_params
00:33:16.177 10:21:59 -- common/autotest_common.sh@398 -- $ xtrace_disable
00:33:16.177 10:21:59 -- common/autotest_common.sh@10 -- $ set +x
00:33:16.177 10:21:59 -- common/autobuild_common.sh@463 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user'
00:33:16.177 10:21:59 -- common/autobuild_common.sh@465 -- $ start_monitor_resources
00:33:16.177 10:21:59 -- pm/common@17 -- $ local monitor
00:33:16.177 10:21:59 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:33:16.177 10:21:59 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:33:16.177 10:21:59 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:33:16.177 10:21:59 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:33:16.177 10:21:59 -- pm/common@21 -- $ date +%s
00:33:16.177 10:21:59 -- pm/common@25 -- $ sleep 1
00:33:16.177 10:21:59 -- pm/common@21 -- $ date +%s
00:33:16.177 10:21:59 -- pm/common@21 -- $ date +%s
00:33:16.177 10:21:59 -- pm/common@21 -- $ date +%s
00:33:16.177 10:21:59 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721895719
00:33:16.177 10:21:59 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721895719
00:33:16.177 10:21:59 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721895719
00:33:16.177 10:21:59 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721895719
00:33:16.177 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721895719_collect-vmstat.pm.log
00:33:16.177 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721895719_collect-cpu-load.pm.log
00:33:16.177 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721895719_collect-cpu-temp.pm.log
00:33:16.177 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721895719_collect-bmc-pm.bmc.pm.log
00:33:16.177 10:22:00 -- common/autobuild_common.sh@466 -- $ trap stop_monitor_resources EXIT
00:33:16.177 10:22:00 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j48
00:33:16.177 10:22:00 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:33:16.177 10:22:00 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]]
00:33:16.177 10:22:00 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]]
00:33:16.177 10:22:00 -- spdk/autopackage.sh@19 -- $ timing_finish
00:33:16.177 10:22:00 -- common/autotest_common.sh@736 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:33:16.177 10:22:00 -- common/autotest_common.sh@737 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']'
00:33:16.177 10:22:00 -- common/autotest_common.sh@739 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:33:16.177 10:22:01 -- spdk/autopackage.sh@20 -- $ exit 0
00:33:16.177 10:22:01 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources
00:33:16.177 10:22:01 -- pm/common@29 -- $ signal_monitor_resources TERM
00:33:16.177 10:22:01 -- pm/common@40 -- $ local monitor pid pids signal=TERM
00:33:16.177 10:22:01 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:33:16.177 10:22:01 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]]
00:33:16.177 10:22:01 -- pm/common@44 -- $ pid=593143
00:33:16.177 10:22:01 -- pm/common@50 -- $ kill -TERM 593143
00:33:16.177 10:22:01 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:33:16.177 10:22:01 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]]
00:33:16.177 10:22:01 -- pm/common@44 -- $ pid=593145
00:33:16.177 10:22:01 -- pm/common@50 -- $ kill -TERM 593145
00:33:16.177 10:22:01 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:33:16.177 10:22:01 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]]
00:33:16.177 10:22:01 -- pm/common@44 -- $ pid=593147
00:33:16.177 10:22:01 -- pm/common@50 -- $ kill -TERM 593147
00:33:16.177 10:22:01 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:33:16.177 10:22:01 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]]
00:33:16.177 10:22:01 -- pm/common@44 -- $ pid=593174
00:33:16.177 10:22:01 -- pm/common@50 -- $ sudo -E kill -TERM 593174
00:33:16.261 + [[ -n 219956 ]]
00:33:16.261 + sudo kill 219956
00:33:16.282 [Pipeline] }
00:33:16.299 [Pipeline] // stage
00:33:16.305 [Pipeline] }
00:33:16.322 [Pipeline] // timeout
00:33:16.327 [Pipeline] }
00:33:16.344 [Pipeline] // catchError
00:33:16.349 [Pipeline] }
00:33:16.365 [Pipeline] // wrap
00:33:16.371 [Pipeline] }
00:33:16.386 [Pipeline] // catchError
00:33:16.391 [Pipeline] stage
00:33:16.393 [Pipeline] { (Epilogue)
00:33:16.407 [Pipeline] catchError
00:33:16.409 [Pipeline] {
00:33:16.423 [Pipeline] echo
00:33:16.425 Cleanup processes
00:33:16.431 [Pipeline] sh
00:33:16.653 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:33:16.653 593292 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache
00:33:16.653 593410 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:33:16.667 [Pipeline] sh
00:33:16.949 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:33:16.949 ++ grep -v 'sudo pgrep'
00:33:16.949 ++ awk '{print $1}'
00:33:16.949 + sudo kill -9 593292
00:33:16.961 [Pipeline] sh
00:33:17.240 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:33:29.444 [Pipeline] sh
00:33:29.724 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:33:29.724 Artifacts sizes are good
00:33:29.739 [Pipeline] archiveArtifacts
00:33:29.746 Archiving artifacts
00:33:29.989 [Pipeline] sh
00:33:30.275 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:33:30.291 [Pipeline] cleanWs
00:33:30.300 [WS-CLEANUP] Deleting project workspace...
00:33:30.300 [WS-CLEANUP] Deferred wipeout is used...
00:33:30.307 [WS-CLEANUP] done
00:33:30.308 [Pipeline] }
00:33:30.324 [Pipeline] // catchError
00:33:30.333 [Pipeline] sh
00:33:30.612 + logger -p user.info -t JENKINS-CI
00:33:30.621 [Pipeline] }
00:33:30.637 [Pipeline] // stage
00:33:30.643 [Pipeline] }
00:33:30.661 [Pipeline] // node
00:33:30.666 [Pipeline] End of Pipeline
00:33:30.703 Finished: SUCCESS